Staff
Journal Leaders
Writing/Editorial
Design
UC Berkeley
Berkeley Scientific Journal
Prashant Bhat Malone Locke
Aditya Limaye Alex Powers
Innovation Design Team
Cambridge
Nathan Smith
Toby McMaster Sophie Harrington
Duke
Manoj Kanagaraj Dharshan Sivaraj
Abhishek Balakrishnan Elisa Berson Yash Bhatnagar Komal Kinger Alissa Wall
Harvard
Alexandra Rojek Roxanna Haghighat
Caitlin Andrews Francisco Galdos Suraj Kannan Carrie Sha
Oxford
Sophie McManus Ellen Foley-Williams
James Cooke Charlie Coughlan Amy Lineham Marco Narajos
Eugene Lee Marisa Chow Sarah Santucci Erika Davidoff Tehila Stone Hyewon Kim Dennis Guo Lamia Ateshian Jessica Deng Crystal Wang Sarah Wang Erica Tsai Jessica Vo Sonia Hashim
Stephen Cognetta Cissy Chen Abrar Choudhury
Benjamin Huang Jennifer Lee Amelia Warshaw Bridget Zakrzewski
Julia Zhao Vijay Venkatesan Amber Mirajkar
Alex Alexander Henry Bair Daniel Colchado Natalie Danckers Teja Dasari Pooja Yesantharao
Jordan Shapiro Amanda Zerbe Dana Huh Joyce Kang
Sarah Hirshorn Amanda Zerbe
Amee Azad Rachel Hoffman Olivia Sutton
Dan Cohen Katelyn Mae Petrin Alex Wess Linda Xu
Bluesci
Vertices
Harvard Science Review
Bang! Science Magazine
Princeton
Princeton Innovation
Rice
Catalyst
Stanford Intersect
Washington University in St. Louis
Frontiers: WU Review of Health
Catalyst Design Team Parth Agrawal Saumya Rajvanshi Gloria Kim Vidya Giri Madeleine Tadros Rohit Kavunkala Claire Peng Kris Sheng Danielle Robertson Meghana Pannala Eric Lee Scarlett Xu Avinash Pyla
Table of Contents

UC Berkeley
4 DNA: The Building Blocks of Nanotechnology
7 Carbon Nanotubes

Cambridge
10 Neglected Tropical Diseases: Beyond HIV/AIDS, Malaria, TB
12 A Golden Opportunity: Combating Malnutrition via GM Rice

Duke
14 Microbiology and the Final Frontier

Harvard
17 The 3D Bioprinting Revolution
20 Exploring the Avian Mind with Dr. Irene Pepperberg

Oxford
23 No Heartbeat, No Hope?
24 Understanding the World: How Your Brain Constructs a Simulation of the World

Princeton
26 Laughing with John Nash: The Person Behind the Mind
28 The Science of Exercise

News
30 Trust Your Gut: Treating Autism with Antibiotics

Rice
31 Farming the Unknown
34 Personalized Healthcare: The New Era

Stanford
36 The Intersection Between Reality and Virtual Reality
38 Current Science and Policy of Bycatch Reduction

Washington University
40 Breaking Down the Affordable Care Act
42 The Image of the Doctor: Television and Reality
UC BERKELEY
DNA: THE BUILDING BLOCKS OF NANOTECHNOLOGY
ALEX POWERS

“A friend of mine suggests a very interesting possibility for relatively small machines. He says that, although it is a very wild idea, it would be interesting in surgery if you could swallow the surgeon. You put the mechanical surgeon inside the blood vessel and it goes into the heart and ‘looks’ around...it finds out which valve is faulty and takes a little knife and slices it.”1
Richard Feynman offered this prophetic vision at his famous 1959 Caltech lecture “There’s Plenty of Room at the Bottom,” a seminal event in the history of nanotechnology. When developing such nanoscale machines, Feynman suggested that scientists take a hint from biology. After all, proteins zip around cells on elaborate transport systems while DNA molecules encode vast quantities of information with ease. Feynman asked innovators to “consider the possibility that we too can make an object that maneuvers at that level.”1 Little did he know that biology would be key to making his vision a tangible reality nearly 50 years later. The burgeoning field of deoxyribonucleic acid (DNA) nanotechnology, using nucleic acids as a building material in a nonbiological context, has recently yielded some incredible breakthroughs ranging from programmable drug delivery capsules to enzyme “spiders” and chemical logic gates. DNA nanotechnology has the potential to finally realize the nano-surgeon. DNA is an extremely effective building material at the nanoscale; after all, nucleic acids are life’s information storage molecule of choice. Nearly everyone learns about DNA by the time they graduate middle school—and with good reason. Just as computers derive complex
information from a simple code of 1s and 0s stored electronically, DNA encodes the vast complexity of life in simple chemicals. Deoxyribonucleic acids are composed of long strands of repeating subunits known as nucleotides. DNA uses four different kinds of nucleotides with different chemical structures: adenine, guanine, thymine, and cytosine. The sequence of these nucleotides describes the information available for “building” an organism, similar to the way in which letters of the alphabet appear in a certain order to form words and sentences. Two strands of DNA pair up according to rules dictated by the molecular geometry of each nucleotide. Thymine pairs with adenine, and cytosine pairs with guanine.
This base pairing specificity is the foundation of designing DNA nanostructures. The key to building small is encoding the assembly information into the molecules themselves, rather than using external forces to arrange them. Early successes often relied on such external forces as atomic force microscopy or scanning tunneling microscopy to build structures molecule by molecule— approaches that cannot be easily scaled up to create large, complex 3D structures.2 The main advantages of DNA as a building material are that it can spontaneously self assemble, the sequence of nucleotides can be precisely controlled, and the 3D structure is well understood (in contrast to the complexity of proteins).

[Figure: the four nucleotides of DNA, adenine, guanine, thymine, and cytosine. The sequence in which these bases combine describes the information available for building an organism. Think of this like the alphabet: combining letters in different ways gives us different words.]
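Watson-Crick complementarity can be sketched in a few lines of code. This is an illustrative toy model, not anything from the article; the strand sequences are invented:

```python
# Toy illustration of Watson-Crick base pairing: each base binds
# only its complement (A-T, G-C).
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the base-by-base complement of a DNA strand."""
    return "".join(PAIRS[base] for base in strand)

def can_pair(strand_a: str, strand_b: str) -> bool:
    """In this toy model, two strands hybridize only if every
    position obeys the pairing rules."""
    return len(strand_a) == len(strand_b) and complement(strand_a) == strand_b

print(complement("GATTACA"))           # CTAATGT
print(can_pair("GATTACA", "CTAATGT"))  # True
```

The same rule, applied to designed sequences rather than a dictionary lookup, is what lets nanostructure designers predict which strands will stick together.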
5
DNA nanostructures fall into one of two categories: static and dynamic. Static structures are fixed arrangements of DNA in specific shapes. A variety of strategies exist to build them, one of the most successful of which is DNA origami. Dynamic structures are formed similarly but are designed to move using techniques like strand displacement, and such dynamism is essential for any sort of computational or mechanical functionality. In a 2006 paper in Nature, Paul Rothemund, a Caltech researcher, coined the term “DNA origami” to describe his successful manipulation of DNA strands into a variety of shapes.3 He synthesized six different shapes, including squares, triangles, and five-pointed stars, consisting of flat lattices of DNA. His revolutionary technique utilized a single long “scaffold” strand of genomic DNA from a virus (7,000 nucleotides long), which was coiled, twisted, and stacked by small custom “staple” strands. The long single strand will not bind to itself, so a computer algorithm designs short strands complementary to certain regions, chosen to maximize connectivity and hold everything tightly together. Perhaps the most surprising part of the method is its simplicity: staple strands are mixed with the
long strand and heated for two hours at 95 °C. The process is entirely spontaneous. Strands join together to maximize complementary binding, thus forming the correct structure. With the steadily falling cost of synthesizing custom DNA strands, this method is relatively inexpensive. Synthetic DNA strands have been available by mail order for the past 20 years; now, most cost less than 10 cents per base.4 Rothemund’s second achievement was developing a method to pattern the 2D shapes. He designed staple strands that would stick up from the flat lattice, increasing the height of the nanostructure at desired locations. The staple strand, normally in plane with the flat DNA lattice, contains “hairpins,” short regions that do not bind to the scaffold. The hairpin structures were clearly visible by atomic force microscopy. A world map as well as the word “DNA” were successfully created and visualized. These letters are about 30 nm tall, or 6,000 times smaller than the width of a human hair. Rothemund’s method has been extended to the construction of dynamic structures, such as a hollow cube with a controllable lid composed of six sheets.5 The main application of this technology is targeted drug delivery that concentrates therapeutics in some regions of the body relative to others in response to desired stimuli. The entire cube is composed solely of a single long strand of DNA from the M13 bacteriophage and hundreds of staple strands. The lid is functionalized with a lock-and-key system consisting of two strands of DNA, a mere 2.5 nm wide. The lid is initially held closed by these two nearly complementary strands, one attached to the lid and the other to the cube side wall. This system takes advantage of a method called “toehold strand displacement” to allow the box to open in response to the presence of a unique “key” strand that displaces one of the lock strands.
This key strand attaches to the unpaired toehold region first and, having a better match than the incumbent lock strand, replaces it. The lid is then free to open. The researchers detected the opening of the box by incorporating two fluorescent dyes into the faces of the box. When the dyes are close together, fluorescence is increased; when the box opens and the dyes move apart, fluorescence decreases, a change detectable via spectroscopy. Further experiments utilized two locks, each with a distinct key: in order for the box to open, both keys had to be present. The box could even respond to complex combinations of strands and cellular messenger RNAs, opening the possibility of smart systems that respond to disease markers inside specific cells. The aforementioned methodologies, DNA origami and dynamic strand displacement strategies, provide the
MOVING MACHINERY SYNTHESIZED AT THE NANOSCALE IS A DAUNTING CHALLENGE
foundation for more complex functional devices. Nanoscale machinery will require tiny moving parts to interact with and manipulate their environment. Moving machinery synthesized at the nanoscale is a daunting challenge; relatively simple molecules must move along complex desired paths. To tackle this problem, scientists again looked to biology for inspiration. While cells might appear to be stationary and static, they are in fact buzzing with tiny protein machines. Cellular motors, like the enzyme ATP synthase or the proteins that power flagella, spin at 6,000 to 17,000 rpm.6 Other motor proteins include “walkers” like kinesin, which transport payloads along cellular highways of microtubule filaments. Kinesin travels in a controlled, specific direction because it only attaches in one
orientation, dictated by the microtubule’s structure. Motor proteins like kinesin have inspired scientists to create artificial walkers using DNA nanotechnology.7 One of the biggest obstacles for an artificial walker is its simple molecular structure, which prevents it from containing “programmed” instructions. Thus, it must take its cues for movement from its environment, in this case the patterning of short strands sticking up from a 2D DNA origami sheet. One paper, published in Nature in 2010, describes a nanoscale robot called a “DNA spider” that can walk across a flat sheet of DNA along a complex pre-determined path. This development is comparable to creating a robot that moves forward and turns based on preprogrammed instructions, except about a billion times smaller. The walker is not actually composed of DNA but rather of proteins and enzymes that act on DNA: streptavidin, a protein often used as a connector between proteins of interest, forms the body of the spider, and three deoxyribozyme “legs” are attached to it. Using the DNA origami patterning techniques developed by Rothemund, surfaces are designed that lay out paths for the spiders to follow. Strands that stick up perpendicular to the 2D surface specify the paths. The legs of the spider bind to these short substrate strands and cleave them in two upon contact. Each leg moves independently from one site to an accessible substrate site. Thus, the body of a spider at the interface between cleaved strands and fresh substrate (uncleaved strands) will move toward the substrate region, so the spider moves directionally along a track as the substrates are cleaved. In comparison to protein walkers, these are more predictable and programmable, and they can interact with designed landscapes. In conclusion, DNA has great potential beyond its biological role. Its capacity for information storage and self-assembly makes DNA a powerful tool for nanotechnology.
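The toehold strand-displacement mechanism used by the DNA box can be sketched as a toy model. This is an illustrative sketch, not the authors' actual design: the lid, lock, and key sequences below are invented, and hybridization is reduced to counting correctly paired bases:

```python
# Toy model of toehold-mediated strand displacement (sequences invented).
# The lid strand leaves a short unpaired "toehold"; a key strand that
# also binds the toehold matches more bases than the incumbent lock
# strand, so displacement is thermodynamically favored.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def matches(strand: str, target: str) -> int:
    """Number of correctly paired bases when strand lies along target."""
    return sum(PAIRS[a] == b for a, b in zip(strand, target))

lid  = "ATTACGGCTAAGG"   # last 4 bases form the unpaired toehold
lock = "TAATGCCGA"       # pairs with the first 9 bases only
key  = "TAATGCCGATTCC"   # pairs with all 13 bases, toehold included

print(matches(lock, lid))  # 9
print(matches(key, lid))   # 13 -> the key wins and the lid opens
```

In the real system the "score" is binding free energy rather than a match count, but the logic is the same: the toehold gives the key a foothold, and the better-matched duplex displaces the weaker one.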
REFERENCES
[1] Feynman, R. P. Engineering and Science 1960, 23, 22-36.
[2] Shankland, S. IBM’s 35 xenon atoms and the rise of nano tech. CNET [Online], Sept. 28, 2009. http://news.cnet.com/830130685_3-10362747-264.html (accessed Mar. 2, 2014).
[3] Rothemund, P. W. K. Nature 2006, 440, 297-302.
[4] Carlson, R. Nat. Biotech. 2009, 27, 1091.
[5] Andersen, E. S. et al. Nature 2009, 459, 73-76.
[6] Rice, S. et al. Nature 1999, 402, 778-784.
[7] Lund, K. et al. Nature 2010, 465, 206-210.
CREATE Develop your startup in the summer accelerator eLab or semester-long incubator. Get advice from experienced mentors and advisors. Tinker in our maker space.
LEARN Take a class in innovation, design, and entrepreneurship. Earn a certificate in technology and society. Learn integrated engineering, math, and physics. Apply engineering to community service.
EXPLORE Spend your summer abroad. Join a growing startup. Find an internship that excites you. Fund your project. Become a fellow or student advisor.
ENGAGE Attend a public lecture. Participate in our annual Innovation Forum. Immerse yourself in the local entrepreneurial ecosystem.
kellercenter.princeton.edu facebook.com/kellercenter twitter.com/kellercenter
7
UC BERKELEY
CARBON NANOTUBES
ADITYA LIMAYE
Carbon, element number six, is often considered the backbone of life on Earth. With four valence electrons and many different bonding geometries, carbon is present in all biological macromolecules and plays an integral role in fundamental biological processes, making it truly deserving of its own field, organic chemistry. While carbon is usually anecdotally known for its abundance in biological systems, carbon’s many bonding geometries and versatile electronic configurations make it an exceptional material for synthetic molecules for physical applications, such as building materials or semiconductors. Serious investigation into carbon for physical applications began in 1985, when a group of researchers at Rice University synthesized a
THE CARBON NANOTUBE PRESENTS A VERY GOOD CHOICE FOR A STRUCTURAL MATERIAL, WITH AN ELASTIC MODULUS FIVE TO TEN TIMES GREATER THAN THAT OF HIGH-STRENGTH STEEL
“buckyball,” a molecule known as buckminsterfullerene with the chemical formula C60, arranged in a structure akin to a soccer ball, with six- and five-membered rings positioned adjacent to each other. In fact, buckminsterfullerene, named after the American architect R. Buckminster Fuller, who built geodesic domes resembling the molecule’s shape, was only one molecule in a class of many fullerenes: molecules made entirely of carbon, arranged in the shape of a hollow sphere or tube. After the 1996 Nobel Prize in Chemistry was awarded for the discovery of fullerenes, research into them was taken up in earnest by much of the scientific community. During this time of high interest in the fullerene molecules, a Japanese team of scientists led by Dr. Sumio Iijima identified a tubular fullerene constructed entirely out of six-membered carbon rings, forming a large cylindrical structure they termed the “carbon nanotube.”1 Since this fortuitous discovery in 1991, research into carbon nanotubes has increased rapidly, spanning from the original field of chemistry into related disciplines such as physics, materials science, and biology.
Research into the properties of carbon nanotubes continues in full force even today, and new applications for carbon nanotubes are currently being studied at the forefront of scientific research. One of the most important properties of the carbon nanotube is its incredible ability to withstand applied tensile forces. When choosing the appropriate material for structural applications, materials engineers often need to consider the way in which a material responds to outside stresses, such as the tensile forces applied in cabling for bridges or the compressive forces applied against reinforcement beams in buildings. In these cases, it is important to select a material that can withstand an appreciable amount of stress without fracturing, and the carbon nanotube presents quite an enticing choice. The stress response of materials is often quantified using the Young’s modulus or elastic modulus, which is the ratio of the stress applied to a material to the subsequent strain, either compressive or expansive, that the applied stress causes. Materials with high elastic moduli, such as a steel beam, are stiff, while materials with low elastic moduli,
such as rubber bands, are flexible. For most building applications, a delicate balance must be struck between stiffness and flexibility, since very stiff materials such as ceramics can break very easily, while very flexible materials support little weight. Based on these constraints, the carbon nanotube presents a very good choice for a structural material, with an elastic modulus five to ten times greater than high-strength steel, but an ability to flex under certain stress conditions. Based on these properties, carbon nanotubes have been studied in many different stress-bearing applications, with the goal of exploiting the molecular structure and mechanical properties of the carbon nanotube to design strong materials for myriad applications. While carbon nanotubes present extraordinary properties useful for a wide range of physical applications, the properties of any material are inherently limited by its ability to be synthesized properly, and carbon nanotubes are no exception. Since carbon nanotubes derive their unique mechanical properties from their carefully arranged hexagonal bonding structure, even small deviations, such as a void or a similar defect at one point along the nanotube, can cause a severe degradation in the mechanical properties of the nanotube,1 making it important to use a highly precise synthesis process. 
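The modulus relation above can be made concrete with a short calculation. The five-to-ten-times-steel ratio comes from the article; the ~200 GPa steel modulus and the 100 MPa load case are assumptions for the sketch:

```python
# Young's modulus E = stress / strain. Illustrative numbers: the
# "5-10x high-strength steel" ratio is from the article; the 200 GPa
# steel modulus and the 100 MPa tensile load are assumed.
def strain_under(stress_pa: float, modulus_pa: float) -> float:
    """Strain produced in a material with the given Young's modulus."""
    return stress_pa / modulus_pa

STEEL_E = 200e9           # ~200 GPa, typical high-strength steel
NANOTUBE_E = 5 * STEEL_E  # low end of the cited 5-10x range

stress = 100e6            # 100 MPa tensile load
print(strain_under(stress, STEEL_E))     # steel stretches by 0.05%
print(strain_under(stress, NANOTUBE_E))  # nanotube stretches 5x less
```

The point of the comparison: under the same load, the stiffer material deforms proportionally less, which is why elastic modulus is the figure of merit quoted for structural fillers.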
Carbon nanotubes were originally discovered through arc-discharge synthesis, which runs a current through two carbon electrodes spaced 1 millimeter apart, stripping the carbon atoms from the electrode and forming a nanotube structure on the opposite electrode.1 Unfortunately, this synthesis also leads to the creation of other fullerenes and amorphous carbon by-products, such as soot and ash, which lower both the purity and quality of the final product, making it unsuitable for industrial-scale generation.1 In order to improve the nanotube product yield, new methods such as chemical vapor deposition (CVD) were developed in order to create long nanotubes with very few imperfections. This process involves using small organic molecules such as acetylene or ethylene in the vapor phase, stripping away the carbon from them and “growing” the nanotube by depositing the carbon atoms stripped from the vapor phase onto a metal catalyst, creating a large, tubular assembly of carbon atoms. The CVD process shows much promise for industrial production of carbon nanotubes, and can be used to produce very high-purity products with very few defects or voids.
While carbon nanotubes can now be synthesized nearly perfectly, nanotubes by themselves, due to their small size, are not well suited for structural and stress-bearing applications. Instead, these nanotubes must be embedded into a different material to enhance its mechanical properties. Currently, this is accomplished by using carbon fiber, a carbon-based material drawn or woven into fibers 5-10 µm in diameter and embedded into a host matrix, or surrounding material. At the moment, carbon fiber represents a $13 billion market worldwide, with an annual growth rate of over 7% and expanding applications in areas such as aerospace, wind energy, and automobiles. Most of these applications use carbon fiber oriented in one direction embedded into a host matrix such as a metal airframe in the aerospace industry or a structural polymer in the automotive industry. While current carbon fiber composites show promise for building materials, carbon nanotube nanocomposites offer an opportunity for much greater property enhancement due to their small size. Industrial focus on these polymer-nanoscale filler nanocomposites began when industrial researchers at Toyota demonstrated they could create a fivefold increase in the strength of nylon composites by embedding nano-scale mica sheets into the material.2 While these mechanical property improvements are certainly enticing, the advent of carbon nanotubes as composite fillers presents even greater opportunity for structural polymer nanocomposites to replace the current carbon-fiber market. Not only does the carbon nanotube independently have
superior elastic properties as compared to woven carbon fiber, but also the nano-scale size of the carbon nanotube unlocks a much larger range of interactions with the polymer that strengthen the structural properties of the overall composite. When materials are embedded into a composite, the total surface area of interaction between the polymer and
9
CARBON NANOTUBES WILL UNDOUBTEDLY PLAY A LARGE ROLE IN THE FUTURE OF STRUCTURAL MATERIALS, ESPECIALLY POLYMER-NANOTUBE COMPOSITES IN THE AEROSPACE, AUTOMOTIVE, AND RENEWABLE ENERGY SECTORS

the filler often determines the property changes it effects. Since the carbon nanotube is so small, it can span a much larger surface area of interaction while maintaining the same weight fraction in the material as a regular carbon fiber composite. Due to their size, many more nanotubes can be inserted into the material at the same weight fraction, leading to a stronger composite overall. Due to this surface area effect, carbon nanotube-polymer composites have been created that confer a 23% increase in the stiffness of an epoxy resin at a paltry 1 wt% loading,3 meaning that even at such sparse dispersion, the nanotubes can change the stiffness of a composite by an appreciable amount. Another team of researchers was able to combine Kevlar, an incredibly stiff polymer, with carbon nanotubes to form a nanocomposite with an elastic modulus of 1 TPa.4 For comparison, the elastic modulus of steel is only 200 GPa, five times lower than the modulus of this composite. Given these dramatic mechanical property improvements, the carbon nanotube appears to be poised to take over the carbon fiber composite market share as a structural material, as they perform nearly all of the same functions that polymer-carbon fiber composite materials do, but to a greater extent. While most applications for carbon nanotubes show significant amounts of promise, some barriers, both economic and scientific, exist that currently block the widespread use of carbon nanotubes. One major obstacle for carbon nanotubes in the polymer nanocomposite market is the tendency of carbon nanotubes to aggregate when placed into a larger polymer matrix. Since carbon nanotubes are usually grown to large lengths that span a significant fraction of the composite’s length, they can entangle
very easily, especially due to the favorable interactions between adjacent carbon nanotubes. Not only does this aggregation effect lower the overall mechanical properties of the composite, but it also confounds any process to predictably align the carbon nanotubes within the composite, one of the main reasons why carbon fiber composites achieve such a high elastic modulus to begin with.5 While this does present a pressing problem for industrial adoption of polymer-carbon nanotube composites, current research is making leaps and bounds toward a solution, from computationally modeling the energy effects that cause aggregation in the first place to attaching molecules to the outside of carbon nanotubes in order to discourage, at a molecular level, the observed aggregation behavior. Based on the current trajectory of carbon nanotube research and the extraordinary properties this unique molecular arrangement brings to the table, it is clear that carbon nanotubes will undoubtedly play a large role in the future of structural materials, especially polymer-nanotube composites in the aerospace, automotive, and renewable energy sectors. If the past is any indicator of the future trajectory of carbon nanotubes, it appears that new applications and interesting properties will be discovered in the future, paving the way for new and exciting structural, stress-bearing applications of carbon nanotubes.
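The surface-area argument above can be made concrete with a quick geometric estimate: for a long cylindrical filler, interfacial area per unit filler volume is 2/r, so at a fixed weight fraction the polymer-filler interface grows as the filler radius shrinks. The radii below are illustrative assumptions, roughly a 5 µm carbon fiber versus a 1 nm nanotube:

```python
# For a long cylinder, lateral area / volume
#   = (2*pi*r*L) / (pi*r^2*L) = 2/r,
# so thinner fillers expose far more polymer interface at the same
# total filler volume. Radii are illustrative assumptions.
def interface_per_volume(radius_m: float) -> float:
    """Interfacial area per unit filler volume for a long cylinder."""
    return 2.0 / radius_m

fiber_ratio = interface_per_volume(5e-6)  # ~5 um carbon fiber
tube_ratio = interface_per_volume(1e-9)   # ~1 nm carbon nanotube
print(tube_ratio / fiber_ratio)           # roughly 5000x more interface
```

This factor-of-thousands difference in interface area is the geometric reason a 1 wt% nanotube loading can measurably stiffen an epoxy where a similar loading of conventional fiber would not.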
REFERENCES
[1] Popov, V. N. Mater. Sci. 2004, 43, 61-102.
[2] Balazs, A. C. et al. Science 2006, 314, 1107-1110.
[3] De Volder, M. F. L. et al. Science 2013, 339, 535-539.
[4] Endo, M. et al. In Carbon Nanotubes; Jorio, A. et al., Eds.; Springer-Verlag: Berlin, 2008.
[5] Coleman, J. N. et al. Carbon 2006, 44, 1624-1652.
[Figure: aligned composite image]
UNIVERSITY OF CAMBRIDGE
NEGLECTED TROPICAL DISEASES: BEYOND HIV/AIDS, MALARIA, TB
TOBY MCMASTER
WHO’S NEGLECTED
Ali has just drunk some water infested with guinea worms. He won’t have any external symptoms for almost a year, but already inside him are the water fleas which carry the worms’ larvae. After several months, mature female worms will gradually make their way down his body towards his extremities and into tissues just below his skin, causing blistering. They will emerge from his skin several days later. The emergence will be excruciatingly painful and will prevent him from working. The journey towards eradicating this horrific disease, dracunculiasis, has been in progress for around 30 years1 and is nearing its final stages.2 It is one of 17 Neglected Tropical Diseases (NTDs), a group defined by the World Health Organization (WHO). These ailments vary in their causes, symptoms and prevalence but are united by a single feature: neglect. Despite affecting over one billion people in almost 150 countries,3 these maladies are grossly under-researched, under-treated, and relatively unknown. Most people have probably heard of five of them at most. As well as a lack of awareness about their impact and even existence, a major obstacle to combatting these diseases is the massive
public profile of the ‘Big Three’ tropical diseases: HIV/AIDS, malaria and TB. All of these are united in their high mortality rates, which partially explain the high profile of these diseases relative to their neglected cousins. Still, although many of the 17 NTDs are capable of killing, mortality is often not the major issue. Rather, many of the diseases are chronic infections, like dracunculiasis, and many cause gradual deterioration of physical health and quality of life. They often prevent individuals from working and financially supporting themselves and their families. Furthermore, these diseases can prevent children in these families from receiving the education any child should be entitled to. Coupled with the fact that the vast majority of NTD sufferers are already living in difficult conditions, the diseases’ impact is often massive, perpetuating cycles of poverty and suffering. Historically, health and disease funding has focused on increasing quantity rather than quality of life. Whilst this trend is showing signs of a shift to a more balanced approach with the rise of the Disability Adjusted Life Year (DALY) in assessing the burden of a disease, even this approach has been widely
reported to have limitations, particularly in relation to NTDs. While offering a more holistic view of a disease’s impact, the DALY’s focus is on individual risk, rather than the wider impact of a disease on society.4 Schistosomiasis is a disease in which individuals are infected with minute worms, which lay their eggs and cause huge suffering, chiefly through the body’s own reaction to infection. It is a chronic condition but can be treated using a single drug. In many areas of Africa the condition is so prevalent that rather than screen populations to determine who is infected, it is more cost-effective and practical simply to treat all those with a high risk of the disease. In the case of schistosomiasis this includes individuals with a high chance of coming into contact with water containing the snails which transmit the disease. This includes those whose jobs require regular contact with water, such as fishermen, and school-aged children who are likely to play near infested regions.5 There are effective drugs against schistosomiasis, and the cost of treating each child is around 32 cents, which makes mass drug administration feasible.6 As a result of the relatively low cost of the drugs required, treating many of the NTDs, including schistosomiasis, is not
11
[Infographic: schistosomiasis, Buruli ulcer, and dracunculiasis. Buruli ulcer: caused by a germ that mainly affects skin and sometimes bone; early stages, a painless lump; later stages, an ulcerated lesion. Dracunculiasis: fever, nausea, abdominal pain, and smaller blisters; later stages, an exposed lesion. Both diseases are similar but can be distinguished by their lesions; both are caused by ingesting worm larvae from contaminated water.]
only the right thing to do but also highly cost effective. Such work helps alleviate the economic burden imposed on a country when large proportions of its workforce suffer from a chronic condition. Donations towards treatment of schistosomiasis are frequently cited as one of the most effective ways to use aid money, based on studies by ‘effective altruism’ organisations such as Cambridge’s Giving What We Can, which work to evaluate how to get the most bang for your charitable buck.7 Drug companies now donate the drugs required for treatment of many of the NTDs. For example, in 1987 the multinational pharmaceutical giant Merck committed to donate Mectizan®, the best tolerated drug for river blindness (onchocerciasis), in whatever quantity is needed for as long as required. In 2012 Merck raised its donation of praziquantel, the leading drug against schistosomiasis, to 250 million tablets a year.2
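A back-of-envelope sketch shows why mass drug administration can beat a screen-then-treat strategy. The $0.32 per-child treatment cost is from the article; the $1 screening cost and 50% prevalence are invented purely for illustration:

```python
# Illustrative cost comparison: screen-then-treat vs. mass treatment.
# TREAT comes from the article; SCREEN and the prevalence are assumed.
TREAT = 0.32    # USD per child treated (article's figure)
SCREEN = 1.00   # USD per diagnostic screen (assumed)

def screen_then_treat(children: int, prevalence: float) -> float:
    """Screen everyone, then treat only those who test positive."""
    return children * SCREEN + children * prevalence * TREAT

def mass_treat(children: int) -> float:
    """Treat everyone without screening."""
    return children * TREAT

n = 100_000
print(mass_treat(n))              # 32,000 USD
print(screen_then_treat(n, 0.5))  # 116,000 USD under these assumptions
```

Whenever the screen costs more than the treatment itself, as in this toy scenario, screening can never pay for itself, which matches the article's point that in high-prevalence areas it is cheaper simply to treat everyone at risk.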
These neglected diseases are bound to enter the spotlight over the next few years; dracunculiasis itself has been targeted by the WHO for global eradication in 2015.8 Eradication of a disease so horribly impactful on those infected would be an incredible
feat; however, there have been, and remain, many obstacles to the process. Dracunculiasis eradication requires preventing individuals from drinking contaminated water, as well as preventing infectious individuals from stepping into water sources. This often involves changes in traditional behaviours, and as such a tailored approach is needed for each community. Engagement, rather than cold instruction, is key. The eradication program, which began in 1980, has had to overcome political instability and war zones in many of the countries where the disease is endemic, such as Sudan. In addition, a major obstacle to eradicating any disease lies in the final stages, where a reduced disease impact can lead to complacency and a lack of financial support. However, the scale of the progress made so far is truly incredible. For every individual case of dracunculiasis reported last year, three decades ago there were 10,000. 2015 may see it become only the second human disease ever to be eradicated, after smallpox. Whether or not this will be the case remains to be seen, but regardless it remains an inspirational case study for all those working to end the suffering caused by NTDs. The journey to end the suffering of 30
million individuals began with a single step. The final stride may be fast approaching.
REFERENCES
[1] Guinea worm eradication program. http://www.cartercenter.org/health/guinea_worm/index.html (accessed Jan. 5, 2015).
[2] Fighting schistosomiasis. http://www.merckgroup.com/en/responsibility/society/global_responsibility_projects/praziquantel.html (accessed Jan. 5, 2015).
[3] Neglected tropical diseases. http://www.who.int/neglected_diseases/about/en/ (accessed Jan. 5, 2015).
[4] King, C. H. PLoS Negl. Trop. Dis. [Online] 2008, 2, e209. doi:10.1371/journal.pntd.0000209 (accessed Jan. 5, 2015).
[5] WHO. Preventive chemotherapy in human helminthiasis; World Health Organization: Geneva, Switzerland, 2006.
[6] Gabrielli, A. F. et al. Acta Trop. 2006, 99, 234-242.
[7] Giving What We Can. http://www.givingwhatwecan.org (accessed Jan. 5, 2015).
[8] WHO heralds “new phase” in the fight against neglected tropical diseases. http://www.who.int/mediacentre/news/releases/2013/ntds_report_20130116/en/ (accessed Jan. 5, 2015).
UNIVERSITY OF CAMBRIDGE
A GOLDEN OPPORTUNITY Combating malnutrition via GM rice SOPHIE HARRINGTON
Malnutrition is a hidden killer throughout much of the developing world. Children in particular may receive sufficient calories yet lack the micronutrients needed for healthy growth and development. Malnutrition is recognized as a serious issue by many international organizations and domestic governments, and current attempts to tackle this problem often rely upon distributing nutritional supplements and promoting more varied diets with more nutritionally rich foods.1 However, in many places the infrastructure necessary to efficiently distribute supplements is lacking. At the same time, poverty and cultural traditions can combine to make it difficult to create long-lasting changes in eating habits. To address these issues, many scientists suggest that crops genetically modified (GM) to increase their nutrient content could play a role. By modifying crops that are already well established in the diet, such as rice in Asian countries, there is no need to drastically shift eating traditions. At the same time, increased nutrition can be obtained through everyday meals, potentially removing or decreasing the need for nutritional supplements.
GOLDEN RICE One of the earliest suggested uses of GM crops to ameliorate malnutrition was “Golden Rice”, fortified with increased quantities of beta-carotene, a precursor of vitamin A. Deficiency of this vitamin is one of the leading causes of childhood blindness; according to the World Health Organization, 5.2 million preschool-age children and 9.8 million pregnant women suffer from vitamin A deficiency-induced night blindness globally. Vitamin A deficiency also leads to a host of other related disorders, including increased anaemia and increased infection severity. Such deficiencies
are principally localized to Sub-Saharan Africa and much of Asia.2 The creation of so-called Golden Rice relied upon increased production of beta-carotene in the endosperm (the edible grain) of the rice. Transgenic expression of the psy (phytoene synthase) and crt1 (carotene desaturase) genes in the rice plants leads to the synthesis of the carotenoid lycopene. Lycopene, which gives tomatoes their rich red colour, is broken down into beta-carotene by endogenous lycopene cyclase. The difference in carotenoid structure between beta-carotene and lycopene explains the golden colour of the rice grains, rather than the red colour of tomatoes.3 Once present in the endosperm, beta-carotene can then be converted to vitamin A upon digestion. Concerns have been raised regarding the bioavailability of vitamin A from Golden Rice. Early studies suggested that impractically large amounts of rice would be needed to obtain enough vitamin A to justify the change in dietary practices. However, further studies have shown that the bioavailability of vitamin A in Golden Rice is equal to that of beta-carotene in oil and greater than that in spinach.4 As a result, only 50 g dry weight of Golden Rice can provide approximately 60% of the Chinese recommended daily intake of vitamin A for young children.5 One argument often raised against the use of GM crops is the risk of unintended consequences of integrating new genes into the plant genome. In this particular case, concerns were raised that the beta-carotene levels of the rice grains could be toxic. However, studies from the University of Nebraska’s Food Allergy Research and Resource Program have shown no evidence of increased toxicity
or allergenicity stemming from the protein products of beta-carotene synthesis.6
IRON-CLAD RICE Malnutrition leads to deficits in many other minerals and nutrients, including iron. So-called “iron-clad” rice has also been suggested as a means to prevent iron deficiencies in diets. While the seed coat is rich in iron, rice in the tropics is typically “polished,” stored, and sold with the seed coat removed. This prevents the early spoilage of the rice in the hot, humid tropical environment but has the unfortunate consequence of removing most of the iron content from the rice.7 Iron deficiency leads to anaemia and a host of other health problems, including delayed cognitive and physical development in children. Over two billion people globally suffer from anaemia, principally children and pregnant women.8 The International Rice Research Institute (IRRI) has facilitated a concerted approach to developing rice varieties with increased iron retention in the endosperm, so that the polished rice retains the required amount of iron. The Australian Centre for Plant Functional Genomics has been able to produce rice varieties with three times the normal amount of iron in the endosperm. The key transformation in these plants involves upregulating the expression of three rice nicotianamine synthase (NAS) genes, OsNAS1, OsNAS2, and OsNAS3, resulting in increased nicotianamine (NA) levels in the endosperm.9 NA acts as a metal chelator, transporting iron, zinc, and other metals through the plant and into the developing seed. The aim in increasing NAS levels is to increase NA levels, resulting in greater transport of NA to the
Figure: Golden Rice, increasing vitamin A availability. Rice endosperm cannot synthesize beta-carotene on its own; two genes, phytoene synthase and carotene desaturase, are added.
endosperm. Further research has also involved proteins for iron storage, such as ferritin. At ETH Zurich, researchers were able to upregulate NAS throughout the entire plant to increase iron chelation; simultaneously, they were able to upregulate ferritin in the developing seed only. This results in the sequestration of iron preferentially in the rice grain, increasing iron content for consumption. Indeed, the ETH Zurich team was able to produce rice varieties with iron content six times that of wild type.10
ISSUES AND CONCERNS The significant effort and funding spent on developing such nutritionally valuable strains of rice suggest global support for the projects. However, these projects are not immune from the general GM phobia found throughout much of the world. Serious protests have resulted in the destruction of test crops of genetically modified Golden Rice, such as in the Philippines in August 2013. A general protest held outside the field trial overran the facilities and resulted in destruction of the rice. While activists from organizations such as Greenpeace claimed that those who destroyed the rice were local farmers protesting the trials, some at the scene have claimed that the farmers were protesting peacefully and that a small band of activists
Figure: Iron-clad rice, improving iron retention.
Figure (Golden Rice, continued): The pathway that synthesizes beta-carotene is turned back on, and beta-carotene accumulates in the endosperm.
destroyed the field.11 Besides the frequent negative reaction to GM crops, other concerns have been raised regarding the significant investment into research. Willy Marbella of the Farmer’s Movement of the Philippines (KMP) group has argued that investment into GM crops will do little to prevent malnutrition when its main cause is poverty.12 This illustrates a key problem with pro-GM groups that see such crops as a “magic bullet” to prevent malnutrition in one fell swoop. However, as Marbella points out, the reality is far more complex. While GM crops such as Golden Rice and iron-clad rice have the potential to significantly improve nutrition, they are only a stop-gap solution without social programs dealing with the underlying causes of poverty and social immobility. Nevertheless, it would be foolish to discount GM crops entirely. Ongoing trials into the nutritional quality and safety of such crops continue to provide predominantly positive results. While it will still be some time before consumer and governmental resistance is overcome, the tide of opinion may slowly be shifting in favour of an intelligent and well-managed introduction of GM crops.
Figure (iron-clad rice): Polished rice, with the seed coat removed, contains very little iron; expression of nicotianamine synthase genes in the rice plant is increased.
Figure (Golden Rice, continued): Beta-carotene is converted to vitamin A once ingested.
REFERENCES
[1] How WFP fights malnutrition. https://www.wfp.org/nutrition/how-wfp-fights-malnutrition (accessed July 20, 2014). [2] WHO. Global prevalence of vitamin A deficiency in populations at risk 1995–2005; WHO Global Database on Vitamin A Deficiency; World Health Organization: Geneva, Switzerland, 2009. [3] Paine, J. A. et al. Nature Biotechnol. 2005, 23, 482-487. [4] Tang, G. et al. Am. J. Clin. Nutr. 2012, 96, 658-664. [5] Testing the performance of Golden Rice. http://www.goldenrice.org/Content2-How/how8_tests.php (accessed July 5, 2014). [6] Golden Rice may take a while before reaching Filipino plates. http://www.philrice.gov.ph/?page=resources&page2=news&id=271 (accessed July 5, 2014). [7] The state of play: genetically modified rice. http://irri.org/rice-today/the-state-of-play-genetically-modified-rice (accessed July 6, 2014). [8] Grain of truth. http://www.scribd.com/doc/94387583/RTVol-10-No-3-Grain-of-truth#fullscreen=1 (accessed July 10, 2014). [9] Iron biofortification. http://www.acpfg.com.au/index.php?id=16 (accessed July 5, 2014). [10] Combating iron deficiency: rice with six times more iron than polished rice kernels developed. http://www.sciencedaily.com/releases/2009/07/090721090129.htm (accessed July 7, 2014). [11] On Green Dread and Agricultural Technology. http://dotearth.blogs.nytimes.com/2011/07/22/on-green-dread-and-agricultural-technology/ (accessed July 14, 2014). [12] Militant Filipino farmers destroy Golden Rice GM crop. http://www.newscientist.com/article/dn24021-militant-filipino-farmers-destroy-golden-rice-gm-crop.html (accessed July 10, 2014).
Figure (iron-clad rice, continued): Nicotianamine transports iron and other metals to the seed as it grows, and iron retention in the rice grain is increased.
DUKE UNIVERSITY
MICROBIOLOGY AND THE FINAL FRONTIER ALISSA WALL
REVIEWING THE IMPACT OF SPACE’S UNIQUE ENVIRONMENT ON MICROBIAL ECOLOGY AND HUMAN HEALTH
INTRODUCTION: OVERVIEW OF ASTROMICROBIOLOGY The field of astromicrobiology is concerned with the origin, evolution, and distribution of life in space. Space offers a unique environment for both humans and microbes, with selective forces including microgravity and high radiation levels. There are two particular lenses through which one can study astromicrobiology: (1) exchange from planet to planet, and (2) exchange between Earth and space. Although the first lens is of intellectual and academic interest because of its implications for extraterrestrial life, the second is actively examined and researched by NASA because of the immediate, potentially dangerous consequences of microbial evolution in space. This review will provide a summary of current research in astromicrobiology relevant to the diversity and evolution of Earth-originated microorganisms. Much of this research is observational and focuses on describing the physiological, metabolic, genetic, and regulatory changes microbial organisms undergo in space.
INCREASE OF MICROBIAL VIRULENCE AND GROWTH IN MICROGRAVITY CONDITIONS Bacterial virulence increases in conditions of microgravity in both simulated microgravity and in-space experiments. In a 2007 study conducted by Wilson et al., Salmonella typhimurium, an enteric mouse pathogen, was grown both in the International Space Station (ISS) and on the ground.1 Other than space-related factors (e.g. microgravity and potentially increased exposure to radiation), conditions (e.g. humidity and temperature) were kept identical between the sites. Mice infected with in-flight cultures had a lower survival percentage and died faster than mice infected with ground cultures of the same inoculum size. When researchers took images of the different cultures using scanning electron microscopy (SEM), they noticed the novel formation of an extracellular matrix, an accumulation of protective molecules secreted by microorganisms, in the in-flight cultures. Previous studies have linked extracellular matrices to increased bacterial virulence.2 A mutation in the DNA sequence
of a critical transcription regulator molecule is thought to underlie the observed in-space phenotypes, thereby producing a landscape of protein products distinct from ground cultures.1 Furthermore, the bacteria in space began growing exponentially sooner, consistent with other researchers’ findings that microgravity stimulates bacterial proliferation.3 The aforementioned study utilized both in-space and simulated microgravity (SMG) experiments to reach its conclusions. SMG involves a high aspect ratio vessel (HARV) bioreactor that offsets the gravitational force experienced on Earth with a hydrodynamic force.4 This hydrodynamic force is generated by rapidly rotating a circular, fluid-filled chamber in the vertical plane such that the rotating fluid cancels the gravitational force in selected “sampling ports” across the chamber. The loaded sample then enters free-fall, or experiences microgravity (comparable to that experienced on the ISS). To check the background noise of the minimal shear forces present in a vertically rotating HARV, a HARV that rotates in the horizontal plane is used as a control
experiment since it does not offset Earth’s gravitational force and allows samples to grow under regular gravitational conditions.4 Another study in 2013 analyzed the change in relationship between symbiotic bacteria and host organisms when subjected to microgravity.5 A luminescent bacterium, Vibrio fischeri, and its host, the Hawaiian squid Euprymna scolopes, were used as a model system in HARVs to simulate microgravity conditions. Compared to controls subjected to horizontal (not vertical) rotation in the HARV, squids grown in SMG presented greater concentrations of bacteria,
perturbed hemocyte trafficking (indicative of immune suppression), and higher sensitivity to chemicals that initiate cellular apoptosis. Researchers worry that increased bacterial virulence in space, in conjunction with suppression of astronauts’ immune systems, will contribute to health problems for astronauts. A 2005 study implicated increased nanobacterial growth in the highly prevalent kidney stones of returning astronauts. A 2010 report determined that both spaceflight and SMG result in increased survival of bacteria in macrophages.6 However, it remains
Figure: Microgravity increases bacterial proliferation and virulence (measured by extracellular matrix production), as well as proliferation in the kidney.
unclear whether the increased bacteria survival is due to host immune suppression, increased bacterial fitness, or a combination of the two.
INCREASED TOLERANCE TO ANTIMICROBIAL AGENTS IN MICROGRAVITY CONDITIONS Microbial tolerance to antimicrobial agents increases in both simulated microgravity and in-space experiments. Biofilms are dynamic groups of microorganisms that adhere to themselves and a solid surface. Bacterial biofilms are
thought to increase resistance to antibiotic agents by filtering them through layers of functionally diverse species, often including stress-resistant strains with pre-existing antimicrobial tolerance that act to physically separate other, less tolerant strains from the stimulus. Additionally, the extracellular matrix often secreted by biofilms forms another physical barrier to protect the underlying bacteria. This process disrupts the efficacy of antimicrobial agents, which are modern medicine’s first line of defense against bacterial infection. Over the years, researchers have demonstrated that SMG and in-space conditions might potentiate, increase, or precipitate bacterial production of biofilms and bacterial attachment.1, 7-9 Beyond the impact on crewmember health, biofilms pose serious concerns for spacecraft physical integrity. They can disrupt air-tight rubber seals that maintain
spacecraft internal pressure, contaminate the potable water supply, and degrade electrical conductivity.8 Multi-species biofilms form dynamic communities that perform a variety of intra-group functions and are even more resistant to antimicrobial agents than single-species biofilms. Currently, the best defense against these biofilms in space includes heat and the application of potent, broad-spectrum antibiotics. Improved methods are currently being developed and may include the use of noxious chemical agents.10
IMPACT OF HZE PARTICLES ON CELLULAR AND ORGANISMAL DEVELOPMENT High energy heavy ion (HZE) particles are detrimental to cellular development, and they have now been shown to
compound the negative effects of microgravity in experiments studying DNA repair in multicellular organisms. HZE particles pose serious threats to life in space.11-13 They belong to a class of radiation called galactic cosmic radiation, which originates outside our solar system, and are capable of penetrating up to one millimeter of protective space suits and radiation shields.13 The effects of ionizing radiation on biological systems include disrupted DNA repair processes, increased DNA mutation rates, and increased single- and double-stranded DNA breakage.14 When the stress-resistant endospores (dormant structures formed under duress that are capable of revival when the environment is more amenable) of Bacillus subtilis were bombarded with HZE particles, germination (the process by which the spore grows into
viable cells) was not affected, but outgrowth (the act of projecting outwards from the original spore) was inhibited.12 Experiments on multicellular organisms have attempted to determine the combined effect of microgravity and HZE particle bombardment, but no such research has yet been conducted on microbes. A review by Horneck et al. published in 1999 surveyed research on this question: organisms ranging from Drosophila melanogaster to stick insects grown under combined HZE particle radiation and simulated microgravity showed increased mortality compared to organisms from HZE particle-only conditions, SMG-only conditions, and control experiments.11 Furthermore, in multicellular organisms DNA repair mechanisms were disrupted and DNA breakage increased. Based on these studies, the combined effect of HZE particle radiation and microgravity was determined to be synergistic, and Horneck et al. recommend further studies investigating the physiological, metabolic, genetic, and structural changes of both eukaryotic and prokaryotic organisms. Microgravity’s effects seem to be exacerbated in eukaryotic, multicellular organisms as compared to prokaryotic organisms,6,9-10,15 giving microorganisms a pathogenic advantage in immuno-compromised hosts.
SUMMARY Among other factors, the increased virulence, antimicrobial tolerance, biofilm formation, and proliferation rate of microbes in space pose serious threats to both human health in the final frontier and spacecraft structural integrity. These risks require the serious attention of astromicrobiologists. Although radiation from HZE particles damages cellular structure and disrupts cellular function, more studies are required
Figure: A biofilm and its extracellular matrix (found in microgravity) shield the underlying bacteria from antimicrobial agents.
to investigate the synergistic effects of microgravity and radiation on host immune systems and microbial development. This would provide insight into the implications of symbiotic relationships between microbiota and crewmembers during long-term space travel. Additionally, research into the poorly elucidated mechanisms behind the physiological, regulatory, genetic, and metabolic changes in space-bound microbial organisms is needed. As long-term missions become more prevalent, further studies will undoubtedly shape the still-developing landscape of astromicrobiology. These will prove invaluable to the development of proper sterilization practices and crewmember health-risk evaluation.
Figure: High energy heavy ion (HZE) particles, components of radiation in space, have been found to inhibit DNA repair mechanisms in multicellular organisms.
REFERENCES [1] Wilson, J. et al. Proc. Natl. Acad. Sci. U.S.A. 2007, 104, 16299-16304. [2] Koo, H. et al. Int. J. Oral Sci. 2009, 1, 229-234. [3] Ciftçioglu, N. et al. Kidney Int. 2005, 67, 483-491. [4] Nickerson, C. et al. Microbiol. Mol. Biol. R. 2004, 68, 345-361. [5] Foster, J. et al. Sci. Rep. 2013, 3, 1340. [6] Rosenzweig, J. et al. Appl. Microbiol. Biotechnol. 2010, 85, 885-891. [7] Mauclaire, L. et al. FEMS Immunol. Med. Mic. 2010, 59, 350-356. [8] Schiwon, K. et al. Microb. Ecol. 2013, 65, 638-651. [9] Klaus, D. et al. Trends Biotechnol. 2006, 24, 131-136. [10] Mermel, L. Clin. Infect. Dis. 2013, 56, 123-130. [11] Horneck, G. Mutat. Res. 1999, 430, 221-228. [12] Horneck, G. Biol. Sci. Space 1992, 20, 185-205. [13] Grahn, D. HZE Particle Effects in Manned Spaceflight; Radiobiological Advisory Panel, Committee on Space Medicine, National Academy of Sciences; National Academy Press: Washington DC, 1973. [14] Moeller, R. et al. J. Bacteriol. 2008, 190, 1134-1140. [15] Roberts, M. et al. Microb. Ecol. 2004, 47, 137-149.
HARVARD UNIVERSITY
THE 3D BIOPRINTING REVOLUTION SURAJ KANNAN
Image from Wikimedia Commons.
Perhaps no technology has grown as rapidly and carried as much promise in the last decade as 3D printing. Although the first industrial 3D printer was built in the 1980s, improvements in design and function over the last five years have led to a dramatic rise in production and usage; indeed, forecasts predict that the sale of 3D printing products and services will reach $10.8 billion by 2021, up from $2.2 billion in 2012.1 The customizable and fast nature of 3D printing has made it an integral tool in rapid prototyping in a variety of industrial and research settings, from academia to aerospace and the military. 3D printing has also increasingly seen application in producing a wide variety of objects, ranging from household tools, furniture, and utensils to cars,2 aircraft,3 and weaponry.4 Along the way, this new technology has prompted ethical debates over gun control4 and intellectual property.5 With the first commercial 3D printers now appearing on the market for hobbyists, it is easy to understand The Economist’s comment that 3D printing “may have as profound an impact on the world as the coming of the factory did.”6 A particular application of interest for 3D printing, one that has already shown promising leads, is the field of tissue engineering. While 3D printing has long been applied to the production of biotechnology devices, recent interest has
been directed towards printing cells in customizable fashion to produce functional tissues. Taking antecedents from earlier lithographic methods as well as breakthroughs in developmental biology, bioprinting aims to develop tissues and organs that can play a role in both laboratory investigation and disease modeling as well as in therapeutics. With advances coming from large research universities such as Harvard and companies such as Organovo, bioprinting is likely to become one of the biggest areas of investment and research in this decade.
A CUSTOMIZABLE APPROACH The classic definition of tissue engineering, as described by Langer and Vacanti, is of “an interdisciplinary field that … [works] toward the development of biological substitutes that restore, maintain, or improve tissue function or a whole organ.”7 Traditionally, tissue engineering has followed a top-down approach, in which a scaffold (synthetic, natural, or derived from a decellularized organ) is seeded homogeneously with cells and then matured in a bioreactor.8,9 While this strategy has yielded some of the first clinical successes of tissue engineering, it does not allow for sufficient spatial and temporal control of cells and growth factors seeded on the scaffold. Thus, the top-down approach is limited in the
amount of complexity it is able to produce in synthesized tissues. Instead, 3D bioprinting utilizes a bottom-up approach, in which the individual components of the tissue are patterned to allow for formation of complex tissue architecture. By utilizing computer-aided design (CAD) tools, researchers can carefully control the placement of cells, materials, and morphogens to replicate the types of organization found in the human body. These strategies often draw on the self-assembly and growth factor-driven mechanisms of cells to allow for formation of functional, biomimetic tissues.8 Perhaps the most popular form of 3D bioprinting has been extrusion printing, in
which filaments are forced through a nozzle to form the 3D structure.10 Thus, in this method, there is contact between the delivery mechanism and the “bio-ink.” A contact-less approach has also been developed using thermal ink-jet printing. In this method, a
In extrusion printing, the most popular form of bioprinting, researchers use computer-aided design (CAD) tools to model the cells in the biological structure. They then send the model to the 3D printer, which forces filaments through a nozzle and builds the structure “bottom-up,” layer by layer. Here, an ear is built using extrusion printing.
pulse of current is passed through the heating element of the printhead, causing the formation of small ink bubbles. The resulting change in pressure causes the bubble to collapse and the ink to be ejected from the nozzle.11 Thus, the bio-ink never comes into contact with the delivery mechanism. A number of parameters must be taken into consideration in the development of 3D bioprinters. For example, the desired resolution plays a role in determining which type of 3D bioprinter to utilize. As tissues require both macro-scale and micro-scale control, multiple techniques must be combined to develop both gross architecture and detailed micropatterning of cells and growth factors. Similarly, selection of material, or bio-ink, is crucial. A great deal of investigation has been devoted to the discovery and development of new bio-inks, including hydrogel mixtures (used with extrusion printers) and water-based inks (for thermal ink-jet printers). Cell viability is a third factor of interest. While extensive optimization of both extrusion and thermal ink-jet printing methods has allowed for viability of up to 90% of cells following seeding, the forces and stresses that cells are placed under throughout the printing process remain a topic of current research.10-13
SUCCESSES AND CHALLENGES While a great deal of effort is currently dedicated towards technical manipulations of
3D printers to ensure viability, some groups have already garnered success with generating functional tissues. For example, Cui et al. at the Scripps Research Institute were able to generate synthetic cartilage consisting of human chondrocytes in a polyethylene glycol (PEG) hydrogel.11 More recently, Duan et al. from Cornell University constructed aortic valve conduits composed of multiple cell types and custom cell distribution in an alginate/gelatin hydrogel.14 While these successes have proved exciting for the potential of 3D bioprinting, progress with 3D-printed tissue has been limited by the same challenge as other tissue engineering avenues – vascularization. Without blood vessels, nutrients, oxygen, and wastes cannot diffuse throughout thick tissues, leading to cell death throughout the construct. Avascular tissues produced by 3D printing were thus previously by necessity very thin, a constraint that prevented the generation of larger-scale organs and tissues. A recent and astonishing breakthrough in 3D-printed tissue engineering came in February 2014 from the Lewis lab at the Harvard School of Engineering and Applied Sciences (SEAS). The team utilized a custom-built four-printhead bioprinter as well as several novel bio-inks, including a gelatin-based ink to provide structure for the scaffold and two cell-containing inks.15 Perhaps the most novel aspect of the investigation was
the use of a Pluronic-based bio-ink that undergoes a seemingly counterintuitive solid-to-liquid phase transition when cooled below 4 °C. Thus, the researchers were able to generate 3D structures with complex networks of Pluronic ink which, upon cooling, resulted in liquefaction of the Pluronic and production of channels within the construct. These channels were subsequently endothelialized to produce vasculature. Using this technology, the Lewis group printed structures composed of patterned human umbilical vein endothelial cells and neonatal dermal fibroblasts along with custom-built vasculature. This vasculature could in turn be perfused in a bioreactor to allow for nutrient and oxygen flow within the construct. These results speak to the possibility of using 3D bioprinting to produce tissues of complexity far greater than that achieved previously by other methods of tissue engineering.
ORGAN PRINTING AND THE FUTURE While 3D printing has a number of potential applications to research in basic science and cellular/tissue function, bioprinting has primarily captured the public imagination because of the role it could play in the clinical environment. Early clinical uses of 3D bioprinting have shown some success. For example, in 2012, physicians at the University of Michigan successfully
3D bioprinting is ideal for physicians and patients alike – it allows for rapid production of tissues that can be personalized specifically for each patient. Here, an ear is printed at Makers Party Bangalore 2013. Image from Wikimedia.
utilized 3D printing to construct a synthetic trachea for three-month-old Kaiba Gionfriddo, who suffered from recurrent airway collapses.16 Other successes include printed bone used, in two case studies, to replace a patient’s jaw and skull.5 3D
bioprinting is ideal for physicians and patients alike – it allows for rapid production of tissues that can be personalized specifically for each patient. While the limited clinical work thus far has involved avascular and sometimes even acellular tissues, innovations in vascularization in the lab suggest the possibility of future production of organs such as the heart, lung, and pancreas. Certainly, some progress in this regard has already been made. Viewers of TED will likely recall Dr. Anthony Atala’s talk, in which he printed a miniature kidney on stage. Organovo, a San Diego company geared towards developing functional 3D bioprinted organs, has made strides toward releasing data on
its printed liver by 2015,17 while others have predicted the completion of 3D printed hearts within the decade. This research has also provoked a great deal of discussion over the ethics of 3D printed tissues. These concerns range from general objections to tissue engineering and organ construction to worries about construct quality and the role of intellectual rights in the world of 3D bioprinting. In particular, the question of who can produce 3D organs must be addressed before further clinical developments can proceed. In light of these challenges, it is perhaps too optimistic to suggest that 3D bioprinted technology will be available for patients within the next decade, though as some isolated case studies have shown, such constructs have been successful when utilized. Technical optimizations, particularly in vascularization, cell viability, and resolution of printing, will allow for improved functionality and complexity in printed tissues. From the non-scientific perspective, leaders in ethics and policy will need to tackle some of the stickier issues regarding intellectual property and quality assurance in the generation and use of 3D printed tissues. In spite of these obstacles, bioprinting remains perhaps the most promising avenue for pursuing the regenerative medicine of tomorrow.
With many thanks to Dr. Jennifer Lewis and David B. Kolesky, both of whom humored my requests to hear everything about their magnificent research. Thank you for tolerating my gushing nature.
REFERENCES
[1] McCue, T. J. 3D printing stock bubble? $10.8 billion by 2021. http://www.forbes.com/sites/tjmccue/2013/12/30/3d-printing-stock-bubble-10-8-billion-by-2021/ (accessed Dec. 30, 2013).
[2] George, A. 3-D printed car is as strong as steel, half the weight, and nearing production. http://www.wired.com/autopia/2013/02/3d-printed-car/ (accessed Feb. 27, 2013).
[3] Marks, P. 3D printing: the world’s first printed plane. http://www.newscientist.com/article/dn20737-3d-printing-the-worlds-first-printed-plane.html#.Ux4SLx_LI7x (accessed Aug. 1, 2011).
[4] Ready, print, fire: the regulatory and legal challenges posed by 3D printing of gun parts. http://www.economist.com/news/united-states/21571910-regulatory-and-legal-challenges-posed-3d-printing-gun-parts-ready-print-fire (accessed Feb. 16, 2013).
[5] Hornick, J. F. 3D Printing 2014, 1, 14-23.
[6] Print me a Stradivarius: how a new manufacturing technology will change the world. http://www.economist.com/node/18114327 (accessed Feb. 10, 2011).
[7] Langer, R. et al. Science 1993, 260, 920-926.
[8] Devillard, R. et al. Methods Cell Biol. 2014, 119, 159-174.
[9] Guillotin, B. et al. Trends Biotechnol. 2011, 29, 183-190.
[10] Ferris, C. J. et al. Appl. Microbiol. Biotechnol. 2013, 97, 4243-4258.
[11] Cui, X. et al. Recent Pat. Drug Deliv. Formul. 2012, 6, 149-155.
[12] Mironov, V. et al. Tissue Eng. 2006, 12, 631-634.
[13] Campbell, P. G. et al. Expert Opin. Biol. Ther. 2007, 7, 1123-1127.
[14] Duan, B. et al. J. Biomed. Mater. Res. A 2013, 101, 1255-1264.
[15] Kolesky, D. B. et al. Adv. Mater. 2014, 26, 3124-3130.
[16] Zopf, D. A. et al. NEJM 2013, 368, 2043-2045.
[17] Organovo. http://www.organovo.com/ (accessed Jan. 6, 2015).
[18] Clark, L. Bioengineer: the heart is one of the easiest organs to bioprint, we’ll do it in a decade. http://www.wired.co.uk/news/archive/2013-11/21/3d-printed-whole-heart (accessed Nov. 21, 2013).
[19] Atala, A. Printing a Human Kidney, TED2011, filmed Mar. 2011. http://www.ted.com/talks/anthony_atala_printing_a_human_kidney (accessed Jan. 6, 2015).
HARVARD UNIVERSITY
EXPLORING THE AVIAN MIND
CAITLIN ANDREWS
In June 1977, in a small laboratory at Purdue University, Irene Pepperberg stood with her arm outstretched toward a large bird cage, trying to coax a quivering Grey Parrot out of it. Just one year earlier, Pepperberg had received her doctorate in theoretical chemistry, having devoted years of her life to devising mathematical models of complex molecular structures and reactions, first as an undergraduate at MIT and then through her graduate work at Harvard.1,2 Yet, here she was, completely spellbound by this trembling, sentient creature whom she had named “Alex,” an acronym for the “Avian Learning Experiment,” of which he was to be the subject and star. “Here was the bird I hoped—and expected—would come to change the way people think about the minds of creatures other than ourselves,” Pepperberg writes in her memoir, Alex & Me. “Here was the bird that was going to change my life forever.”1
FROM CHEMISTRY TO COGNITION

To many, Irene Pepperberg’s decision to leave chemistry behind in pursuit of the new and largely uncharted field of animal cognition represented an unfathomable risk. But, looking back, Dr. Pepperberg knows that it was the right choice. “I was actually no longer intrigued by chemistry,” she says, “figuring that what was taking me years and years would soon be done in days via better computers.”2 In the late 70s, as she faced an
uncertain job market, particularly for women in chemistry, she knew that she needed to find a new path. Although she had always loved animals, it was only when she began watching the NOVA television program on PBS that she realized that there were people using real science to study animals and to draw parallels between the animal and human minds. Thinking back to her childhood in New York City, she remembered the pets with whom she had spent countless hours: a series of talking parakeets that had provided her with the type of companionship craved by a self-proclaimed shy and “nerdy” only child. As she watched TV programs about apes using sign language, dolphins exhibiting evidence of abstract thinking, and scientists unearthing the mechanisms behind birdsong, Pepperberg realized that she had already encountered a subject that could provide just as much insight into the minds of animals.1 “I figured that a talking parrot would be an even better subject,” she says. “Birds and humans diverged about 280 million years ago, yet they have so many similar capacities, including vocal learning….I began reading, studying the field, and realized that, as [American zoologist] Donald Griffin said, communication was a window into the animal mind.”2
THE ALEX YEARS

From the start, Pepperberg’s respect for
animals and awareness of their needs, along with her technical background, proved to be a promising combination. To ensure that her studies would be representative of Grey parrots in general as opposed to one particularly outstanding subject, she asked a pet store employee to select a random bird from the flock for her. When she finally coaxed Alex out of his cage at the lab, she kept careful journals of her interactions with him. And, right away, she got to work, using a two-person, interactive modeling technique to demonstrate the association between vocal words, or “labels,” and the objects that Alex encountered around the lab. In addition, each time she gave Alex one of these objects she reinforced the label by repeating it and talking about its properties, while Alex watched and listened.1 Over the first several weeks in the lab, Alex began to vocalize on his own, although at first his utterances were more “noise” than “speech.” But, gradually, Pepperberg was able to discern precise labels from Alex’s vocalizations; when shown a piece of paper, Alex would make a two-syllable sound, which Pepperberg would reward by giving him the piece of paper until, eventually, he began to shape the sounds from ay-ah, to ay-er, to pay-er, and, finally, paper.1 Pepperberg and her assistants added more object labels—key, wood, wool—until Alex began to demonstrate an understanding that each label represented a category of objects that shared a certain
property, such as shape or texture. For example, Alex could identify both a silver key and a red key as “keys,” transferring the label to a colored key that he had not encountered before. While this concept might seem simple to humans, Pepperberg knew that for an animal like Alex this was a significant accomplishment. As she writes in Alex & Me, “This kind of vocal cognitive ability had never before been demonstrated in nonhuman animals, not even in chimpanzees.”1 Pepperberg often cites the interactive “model/rival” technique, which she used to train Alex, as a major reason for their success. Initially developed by German ethologist Dietmar Todt, the technique involves an animal subject and two trainers; while one trainer acts as the principal trainer and questioner, the other acts as a model for the desired behavior (e.g., labeling the object) and as a rival competing with the animal for the principal trainer’s attention. As Alex picked up more labels, adding colors and numbers to his already-extensive repertoire of object labels, it was crucial that he had humans to model proper pronunciation and label usage. Mostly, these were students who came to work in the lab. Pepperberg also found that it was important for Alex to learn that the same people did not always act as principal trainers or as models; sometimes she asked Alex questions, and sometimes she modeled correct (or incorrect) behavior and was rewarded (or not rewarded) by a student trainer.1 It was very important for the humans to make these occasional mistakes, and be scolded for them, so that Alex would observe the consequences of errors. The work was not always easy. First, Dr. Pepperberg was dealing with a highly intelligent animal who could pick and choose when he wanted to work—much more like a colleague than a research subject.
Additionally, as she moved among various universities, she found that the fledgling field of animal cognition was not always met with the same enthusiasm that she had hoped was possible. Eventually, however, the
media picked up on Alex’s story and began to follow Pepperberg’s work.1 In his prime, with over 100 words in his vocabulary, “Alex made it clear to the scientific community that a ‘birdbrain’ could do the same things as an ape brain, and sometimes even those of a child’s brain,” Pepperberg says. “Alex and I were not the first to study avian cognition, but we had the widest impact, thanks to media coverage.”2 Studying an animal who could communicate verbally set Pepperberg’s studies apart, because she could ask Alex questions and he could answer directly, giving insight into how he perceived the world around him. On the most basic level, he could identify an object’s material, color, and shape, and he could ask his trainers to take him somewhere (e.g., Wanna go back) or bring him something (e.g., Want banana). He also had a grasp of numbers; if shown a tray of assorted objects, Alex could answer questions about a particular subset of those objects (e.g., How many yellow wool?). He also showed evidence of being able to add small values, and, Pepperberg says, he could “infer the cardinality of new number labels from their ordinal position on the number line—something no other nonhuman has yet accomplished.”1,2 Alex understood concepts of “bigger” and “smaller,” as well as “same” and “different”—an important distinction, since it showed that Alex understood that several labels could be applied to a single object.1,3 For example, given two square pieces of wood differing only in color, he could identify that the color was “different,” while the other properties were the “same”; if none of the properties differed between a pair of objects, he would indicate this by saying “none.”1 Sometimes, Alex’s most impressive work came when it was least expected. One day, while testing number comprehension, Pepperberg presented Alex with a tray containing sets of different numbers of objects of various colors—two, three, and six items. Because the sets were all different colors,
she could ask Alex, “What color three?” But Alex, as he often did when he became bored with a particular study, insisted on avoiding the correct response. This time, he did so by answering “five,” even though there was no set of five objects on the tray. She repeated the question; he repeated his answer. Thinking that she could beat Alex at his own game, Pepperberg asked him, “What color five?” “None,” Alex replied, taking Pepperberg by surprise as he transferred a concept that he had only ever used in reference to “same/different” or “bigger/smaller” to an entirely new context.1 “Western civilization didn’t have ‘zero’ until about 1600,” Pepperberg says. “And Alex transferred the ‘null’ concept himself.”2 In her three decades of work with him, Dr. Pepperberg got to know Alex more deeply than almost any researcher ever gets the chance to know his or her subject. Working with a single animal for such a long time is “fascinating,” she says, “because one gets to know so much about the individual—not
just what is studied, but all the personality quirks and the temperament.” Some of these “quirks” were incorporated into published studies, such as how Alex spontaneously invented his own label for an apple—which he called a “banerry”—out of a combination of the labels “banana” and “cherry”.1 This provided evidence that “Alex clearly did more than repeat what he learned vocally; he parsed his labels to make new ones, much as do humans.”2 But other examples of Alex’s quirks serve only as anecdotes to illustrate the unique individual that he was—like how he called cake “yummy bread” when he first
tried it, or how he would say, “You be good. I love you,” as Pepperberg left the lab each night. These were his last words to Dr. Pepperberg, as their pioneering studies came to an abrupt halt in 2007, when Alex died suddenly at the age of 31. Although, by that point, Pepperberg’s research had involved several other parrots in addition to Alex, his death was devastating to her and to many around the world. Pepperberg, however, sees her field as one still emerging. In her opinion, animal cognition research has broad-ranging potential, in terms of its implications for animal welfare and conservation, as well as for the development of teaching methods for children with cognitive deficits. “When I started, the field barely existed; ‘animal cognition’ was almost considered an oxymoron,” she says. “Today we have journals that are specifically devoted to the field…. Only by continuing to study a variety of species will we really understand the various capacities of different ‘minds’.”1,2

DR. PEPPERBERG WITH ALEX’S SUCCESSOR, GRIFFIN
A RETURN TO HARVARD

In July 2013, Irene Pepperberg returned to the campus where, almost four decades earlier, she had received her doctorate in theoretical chemistry, not knowing the path she would set out upon soon after graduating. While she had been a Research Associate in the Vision Lab at Harvard since 2005 and had been teaching classes in animal cognition and human-animal communication at the College and the Extension School, her research base had been at Brandeis for the
past decade. But, after securing lab space at Harvard, she moved to William James Hall in July, bringing with her Griffin, an 18-year-old African Grey Parrot who had lived and learned alongside Alex for much of his life. Having had Griffin since he was only seven and a half weeks old, Dr. Pepperberg knows Griffin’s quirks just as she knew Alex’s. And Griffin is certainly his own bird. He gets “self-conscious” when he struggles with a particular label and is more hesitant than Alex was—something Alex would sometimes take advantage of by prodding Griffin to
produce the correct vocalization, while at other times seemingly wanting to help by hinting at the correct label.1 Although Griffin speaks less now that Alex is gone, he has a list of impressive accomplishments to call his own. Most recently, he showed that he understood the benefits of sharing, choosing to share a reward rather than act selfishly so long as his partner was also willing to share.4 He has also done work with optical illusions, demonstrating an ability to recognize obstructed objects and thereby providing insight into the commonalities between how birds and humans perceive the same visual illusions.1 But Griffin has not been alone in the Harvard lab; he also gained a new companion in October. Hatched in April 2013, Athena is Dr. Pepperberg’s first female African Grey. “So far, working with Athena seems to be a cross between my early work with Alex and that with Griffin,” Pepperberg says. “We learned a lot from both of the previous birds and are implementing some of that with Athena.”2 Over Athena’s first six months in the lab, research assistants have been working with her constantly on vocal labels, even audio-recording her progress, from her very first warbles to her recently more distinct-sounding “wood” and “key” labels. “What will be really interesting is that new computer techniques and analysis tools will let us track her vocal development in ways that I couldn’t manage with Alex or Griffin,” Pepperberg says.2 While Griffin is still warming up to the idea of having a new “little sister” around, Pepperberg hopes that he will act as a model for Athena as she begins to learn new vocal labels. With only two birds, she will find it difficult to draw any definite conclusions about sex differences in cognition. However, she may be able to gain insight into how cognitive abilities develop over the lifetime of an individual, much as she did with Alex. As for her experiences at Harvard, Dr. Pepperberg enthusiastically reports that “so far, it’s been terrific!” She sees many opportunities for collaboration with other members of the Psychology Department and is excited by the possibilities for the future. As she writes in Alex & Me, “Alex left us as a magician might exit the stage: a blinding flash, a cloud of smoke, and the weaver of wizardry is gone, leaving us awestruck at what we’d seen, and wondering what other secrets remained hidden…wondering what else he would have done had he stayed.”1 As Dr. Pepperberg embarks on the next leg of her journey without Alex by her side, perhaps these secrets may be revealed through a new set of voices.
REFERENCES
[1] Pepperberg, I. M. Alex & Me; HarperCollins: New York, 2008.
[2] Pepperberg, I. M. Personal interview.
[3] Pepperberg, I. M. The Alex Studies; Harvard University Press: Cambridge, MA, 1999.
[4] Péron, F. et al. Animal Cognition 2014.
OXFORD UNIVERSITY
NO HEARTBEAT NO HOPE?
AMY LINEHAM

Is hypothermic suspension between life and death brrrrilliant, or destined for the cold shoulder?

Though at first glance implausible, reanimation of a stopped heart, and thus what one might sensationally call ‘resurrection,’ has been known to be possible for some years. Mild hypothermia has been used therapeutically since the 60s, but in 1999 a ski accident in Norway changed surgery’s approach to temperature forever. It was there that Anna Bågenholm, while skiing off-piste, took a tumble that left her trapped beneath thick ice, immersed in the icy water of a mountain stream.1 Human body temperature is normally 36.5 - 37 °C, but falls rapidly in cold water. Though initially mild, the effects become rapidly more severe as core temperature continues to fall, with patients losing consciousness at around 30 °C. At 25 °C, cardiac arrest is almost certain. Due to the remote location of Bågenholm’s accident, it took well over an hour and a half for helicopter aid to arrive; within 40 minutes she had lost consciousness, her heart stopping shortly after. By the time Anna finally reached a hospital, her heart had not beaten for two hours, and her core temperature was an icy 13.7 °C.1 It was this, however, that would save her life: the doctor in charge of Bågenholm’s case had some previous experience of the preservative abilities of extreme cold and, unlike most physicians, did not give up on the case. Though extreme cold had induced Anna’s cardiac arrest, it had also dramatically reduced the metabolic demands of the rest of her body’s tissues, effectively plunging her into a state of ‘suspended animation.’ Anna’s fate was dictated by one question: had her heart stopped before or after tissue demand fell? The decision was made to warm her up, and four and a half hours after she fell through the ice, Anna’s heart was restarted.2 Though a long rehabilitation period was to follow, she made a full recovery, able once again to pursue her love of extreme skiing. This ‘heart-warming’ case study sparked the introduction of extreme therapeutic hypothermia (down to as low as 10 °C)3 in a variety of surgical situations. Recent research has shown the method’s efficacy in preventing brain damage in babies starved of oxygen during birth,4 and the technique is routinely employed in patients undergoing major open-heart surgery. In planned operations, these lows are achieved using ice packs and blood cycled through external cooling equipment,
but current developments are focusing on the potential use of suspension in emergency cases such as gunshot and stab wounds.2 Here, the drop in core temperature must be significantly more rapid: because the patient’s pulse has often stopped before arrival at accident & emergency (A&E), each moment of reduced circulation increases the risk of lasting damage. To address these demands, Peter Rhee developed a method in 2000 in which the blood is replaced with cold potassium or saline solution.2 Following treatment (so far tried only in pigs), the patient is gradually warmed back up via restoration of circulating blood. This therapy is finally ready to be tested in humans. Imminent studies will
hopefully provide positive results that improve the 7% survival rate currently recorded for major trauma cases.2 Despite hopeful progress, controversy remains regarding widespread use of hypothermic techniques in surgery, particularly for trauma patients. A recent paper from Sweden suggests that the consequences of hypothermia in such cases, such as the inhibition of clotting mechanisms, could vastly outweigh the potential benefits;5 it is thought Anna may have survived her ordeal thanks to a total lack of trauma, and therefore of bleeding, which protected her from the enzymatic disturbances that threaten other cases. Clarification of this factor, vital to the development of hypothermic suspension, will hopefully come from the potassium and saline solution experiments. Until then, observers must simply hope this ‘cool’ new therapy doesn’t turn out to have chilling results.
REFERENCES
[1] Comfort, G. Dying to Live; Tyndale House: Wheaton, IL, 1992.
[2] Cox, D. Between life and death – the power of therapeutic hypothermia. http://www.theguardian.com/science/blog/2013/dec/10/life-death-therapeutic-hypothermia-anna-bagenholm (accessed Jan. 8, 2015).
[3] Thomson, H. Gunshot victims to be suspended between life and death. The New Scientist [Online], March 26, 2014. http://www.newscientist.com/article/mg22129623.000-gunshot-victims-to-be-suspended-between-life-and-death.html#.VK4zhSvF98E (accessed Jan. 8, 2015).
[4] Hoehna, T. et al. Resuscitation 2008, 78, 7-12.
[5] Nielsen, N. et al. NEJM 2013, 369, 2197-2206.
OXFORD UNIVERSITY
UNDERSTANDING THE WORLD: HOW YOUR BRAIN CONSTRUCTS A SIMULATION OF THE WORLD
JAMES COOKE

Imagine you are a newborn kitten. You open your eyes, and you realize that the world around you consists of only one thing: vertical stripes. One day you feel yourself being picked up and placed in a strange world. You take a few steps forward. Suddenly, you’re falling. This strange, stripy universe was created in a laboratory at Cambridge University studying visual perception.1 In the late 60s, scientists in Cambridge began investigating how we progress from the moment we’re born—when the world presumably makes no sense to us—to the moment we can understand the visual world around us. Essentially, they wondered, “How do we learn to see?” Vision feels so effortless that it may seem odd to think that we learned how to see after birth. Curled up in the darkness of the womb, we had no idea that we would be born into a universe in which solid objects exist in three-dimensional space. We might just as well have been born into a universe consisting solely of points of light or vertical stripes. In the moments after being born, our world must appear a “blooming, buzzing confusion,” as the psychologist William James famously described it.2 The brain makes sense of this confusion by building a simulation of the world around us inside our heads. This simulation is the mind.

The Cambridge scientists wondered whether it was possible for us to perceive things we never knew existed. They investigated this question by raising kittens in cylinders with vertical stripes painted on the walls. After a few months, the scientists took the kittens out and let them explore the real world. They found that the kittens would walk right off the edges of tables if unstopped—they simply couldn’t see the edge. These kittens had never seen a horizontal line in their lives, so horizontal lines were never built into their brains’ simulation of the world. The scientists found something even more surprising when they examined the kittens’ brains. Normally, visual information arrives at the brain from the retina and is initially processed by the visual cortex, located at the back of the skull. This part of the brain extracts edges from the visual scene: each cell responds to its own preferred orientation of edge by producing a series of electrical pulses called action potentials.3 There are cells with a vertical preference, cells with a horizontal preference, and everything in between. Consequently, an edge of any orientation will always elicit a response from a particular group of cells. These electrical signals are used by the rest of the brain as the building blocks for your simulation of the visual world. The same is true of the normal cat brain. For the kittens raised in a stripy world, however, cells in the visual cortex responded only to vertical stripes and not to horizontal ones. When a horizontal stripe was presented to the kittens, the cells in their visual cortices were silent, as if the kittens were looking at nothing at all. This research has helped us understand the limits of our own perception. It explains why we can’t imagine what it would be like to see with sound, the way a bat does, or to see heat, the way a snake does. We simply never adapted to have such experiences in our mind’s simulation, whereas these other organisms did. However, the brain’s amazing plasticity raises a seemingly far-fetched possibility: one day, we may learn to see as they do.
REFERENCES
[1] Blakemore, C.; Van Sluyters, R. C. J. Physiol. 1975, 248, 663-716.
[2] James, W. The Principles of Psychology; Harvard University Press: Cambridge, MA, 1890/1981; Vols. 1-2.
[3] Hubel, D. H.; Wiesel, T. N. J. Physiol. 1959, 148, 547-591.
Elsevier is a proud sponsor of the International Collegiate Science Journal.

Elsevier offers a wide range of cross-discipline tools to support your research journey, enable research and career management, and help you make an impact in your field.

ScienceDirect
Scopus
Mendeley
Knovel
Engineering Village

Elsevier powers knowledge, which empowers those who use it. elsevier.com
PRINCETON UNIVERSITY
LAUGHING WITH JOHN NASH:
THE PERSON BEHIND THE MIND
JENNIFER LEE
THE ONETIME PHANTOM OF FINE HALL OPENS UP.

I was nervous. It was a quiet, normal afternoon, a time when fellow students were working hard or relaxing in their respective dormitories. One probably could have guessed from my jogging and running along the sidewalk, however, that I was on a mission. With a camcorder in one hand, a notebook in the other, and my roommate carrying the tripod, we made our way down to Frist Campus Center. Why? We simply could not be late for a meeting with the esteemed Nobel Prize winner in Economic Sciences, Dr. John Forbes Nash Jr. John Nash is one of the world’s greatest mathematicians, whose research in game theory produced innovative insight into the calculation of chance in strategic decision making. Game theory studies strategies for situations in which the decisions of other competitors influence a competitor’s actions; think of the stare-downs and poker faces at a Texas hold’em poker table. His research not only changed the world of economics but also revolutionized public policy, computational biology, artificial intelligence, and even military theory and tactics. As a result, Nash won the Nobel Memorial Prize in Economic Sciences and the John von Neumann Theory Prize in operations research. He also became a professor at both the Massachusetts Institute of Technology and Princeton University. In 2001, the four-time Academy Award-winning movie A Beautiful Mind premiered, starring Russell Crowe as Nash. Its portrayal of Nash’s struggle with mental illness expanded his fame beyond the academic world. While many focus on his mathematics or his battle with schizophrenia during the height of his career, there is more to Nash’s story. The world knows him as a genius, valuing him for his
intelligence and contributions, but who is the person behind this amazing mind? One thing is for sure: there are countless misconceptions about this genius. Yes, he is still alive. No, he does not seem or act crazy. Yes, I am sure that I did not interview Russell Crowe. In reality, having a conversation with John Nash is like a fireside chat with someone’s grandfather. He is brimming with stories and experience, tinted with his own sense of humor. Nash did not start his academic career in mathematics – at the Carnegie Institute of Technology (now Carnegie Mellon University) he studied chemical engineering. How does one win the Nobel Prize in Economics as a chemical engineering major? Nash elaborates: “I wasn’t studying economics. I was studying
chemical engineering and chemistry and mathematics. But one time I took an elective course in International Economics taught by an Austrian professor [at Carnegie Mellon]...I think I must have learned a lot from that...I impressed [came off as] a little bit like an Austrian economist.” A Renaissance man of sorts, he is clearly a man of many interests, with knowledge across different fields. He likes to discuss a range of topics, from government taxes to other areas of scientific research, such as computer science and the biological anti-aging problem. When discussing his monetary award for winning the Nobel Prize, Nash laments, “Since the time of Reagan, these prizes have been taxed [in the United States], but in most other parts of the world, if you get an international prize, it is
not taxable and is considered payment for your labor.” His interests are definitely not confined to academia; he thoroughly enjoyed sharing his thoughts about popular culture, reminiscing about seeing the locations at Oxford University where Harry Potter scenes were shot. He was especially fond of the Great Hall, Hogwarts’ dining hall, which is based on the dining hall of Christ Church, Oxford. In the generous amount of time I spent with him, I learned that Nash spends a lot of time thinking about topics that have nothing to do with mathematics. Nor is he short of memories from his time at Princeton University, which he loves to share. It was a defining period in Nash’s life, the site of some of his most significant contributions—the birthplace of the Nash equilibrium. This theory describes a scenario in which every player makes the best decision for themselves while taking the other players’ decisions into account. After finishing graduate school at Princeton, he stayed on campus but moved out of the graduate college into a house on Mercer Street, right down the street from Einstein, who was working at the Institute for Advanced Study. Nash remembers heading to the Princeton campus on many mornings and crossing paths with Einstein walking the other way, toward the Institute. It was almost like a changing of the guard as they met on that 1950s sidewalk: Nash, the up-and-coming mathematician in his twenties, his Nobel Prize forty years away, and Einstein, the accomplished and proven genius late in his career, in his seventies. Nash even approached Einstein about a possible collaboration. “I arranged an appointment to see Einstein about a research area relating to physics and astronomy. Ultimately, he told me I had to do a lot of work to do anything in this area. [Einstein said] It might take considerable study to approach this problem very well.”
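The best-response logic behind a Nash equilibrium can be sketched in a few lines of code. The short, illustrative Python script below (not from the article, and the payoff numbers are just the standard prisoner's dilemma from textbooks) brute-forces the pure-strategy equilibria of a two-player game: a pair of strategies is an equilibrium exactly when neither player can do better by switching unilaterally.

```python
from itertools import product

def pure_nash_equilibria(payoff_a, payoff_b):
    """Return all pure-strategy Nash equilibria of a two-player game.

    payoff_a[i][j] and payoff_b[i][j] are the payoffs to players A and B
    when A plays row i and B plays column j.
    """
    rows = range(len(payoff_a))
    cols = range(len(payoff_a[0]))
    equilibria = []
    for i, j in product(rows, cols):
        # A cannot gain by unilaterally switching rows...
        a_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in rows)
        # ...and B cannot gain by unilaterally switching columns.
        b_best = all(payoff_b[i][j] >= payoff_b[i][k] for k in cols)
        if a_best and b_best:
            equilibria.append((i, j))
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
# Payoffs are years of freedom lost (so less negative is better).
A = [[-1, -3], [0, -2]]
B = [[-1, 0], [-3, -2]]
print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection
```

Even though mutual cooperation would leave both players better off, mutual defection is the unique equilibrium here, since each player improves their own payoff by defecting no matter what the other does.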
Once called “The Phantom of Fine Hall” for his aloof and interesting behavior and the location of his office in Princeton’s mathematics department, John Nash is now frequently seen at Frist Campus Center amongst the chatter of the many students milling around. During our
a two-person game.” By searching for a theory involving three people playing a cooperative game, he believes that the solution would bring necessary insight in solving a two-person cooperative game, an outstanding puzzle for game theorists. As for his plans for this summer, John
NASH IN A NUTSHELL: - Initially studied Chemical Engineering at Carnegie Mellon University but switched to Mathematics; went on to Princeton to pursue graduate studies in Mathematics - As a graduate student at Princeton, Nash developed Nobel Prize-winning Equilibrium Theory - Today, he still conducts research in Princeton on game theory
interview, he was definitely not short on funny anecdotes and quips. Chuckling to himself, he joked about Einstein’s famous crazy hair. “I guess he didn’t go to places in town that often. I didn’t know if he had a barber for example.” He recounts, “One time Heisenberg was here. There was a joking remark on one of the toilets, in this building actually. ‘Heisenberg may have shat here.’ Of course, Heisenberg uncertainty.” Nash is now a wise and influential man of 86 world-changing years. He is still conducting research and doing work in cooperative, competitive games that involve collaboration between players, in Fine Hall at Princeton University during the afternoons. Concerning his current research, Nash notes, “I think if a good theory evolves for three-person games, then that might indicate what theory is good and why for
Nash is definitely staying busy as he notes, “I have some plans, some meetings. There may be three already to go to. One in Long Island, one in Germany, and one in Brazil.” As the interview comes to a close, I couldn’t resist asking, what are John Nash’s final words of advice? “The older generations can always give some advice to the younger. It’s not so nice tojust give very standard advice.” He suggests that students “should know that they are in a place with considerable flexibility. It’s not so much a machine-like track. You may not be here very long, but make use of the time...Look around for options. ” It almost seems like the mind of John Forbes Nash Jr. will never stop running, but there is more to this funny, well-traveled, sassy Princetonian than you may think.
PRINCETON UNIVERSITY
THE SCIENCE OF EXERCISE BENJAMIN HUANG
When college students are not busy taking advantage of free food and study breaks, they sometimes remember to exercise. The ancient Greeks were aware of the benefits of exercise millennia ago. Today, these merits are known to span the physical and the mental, including increased strength and bone density, a more robust cardiovascular system, and improved concentration. Whether one runs, jumps, swims, or lifts, these activities are composed of basic movements that break down into fundamental neuromuscular interactions. While often lost behind promises of building muscle, losing fat, or gaining the much-coveted abs, an understanding of these fundamental interactions is an important part of discerning what exercise achieves and why. Ultimately, these interactions are what provide benefits to the body’s physiology. Learning about the foundational aspects of
First Stages: Efficient Motor Unit Recruitment
the neuromuscular system can improve not only our understanding of exercise but also how we exercise. While muscles are made up of muscle cells, which contain the special contractile fibers necessary for movement, the truly basic unit of movement is the motor unit.1 Each muscle is composed of many of these motor units whose coordination through their associated motor neurons enable movement. Signals from the central nervous system coordinate and “recruit” motor units to produce force and movement by the muscle. The recruitment of more motor units, called spatial recruitment, and increased motor neuron firing rates, known as rate coding,2 both generate more force. Accordingly, the motor unit is of vital developmental focus in the first stages of exercise. A new runner, swimmer, or lifter will find his or her greatest developmental
gains early on as his or her body develops its capacity to direct and recruit motor units or to increase rate coding more efficiently.1 In a sense, the body itself is learning the exercises and motions just as the person is. Thus, a person’s ability to exert force or exercise well can increase in leaps and bounds as the neuromuscular system adapts to the exercise by effectively stimulating and coordinating motor units.1 In the beginning, then, it matters rather little how “intelligently” you exercise. As long as you train the motions, you can see substantial gains in strength and endurance due predominantly to a more efficient neuromuscular interface. To be prescriptive: just get started and worry about details later. Regardless of the exercise or activity, substantial improvements to health can be made simply by getting a general sense of the motion down and performing it. For example, in weightlifting, rather than devoting lots of time to a motley assortment of so-called accessory or isolation exercises, the most efficient way to get stronger is simply to focus on the larger, more engaging compound movements.1 Exercises like squats and bench presses tap the neuromuscular developments that provide the most substantial improvements.
Subsequent Effects: Muscle Growth
[Figure: Muscle fiber types. TYPE I: slow twitch, oxygen efficient. TYPE II: fast twitch, short bursts.]
Eventually, the body becomes sufficiently well adapted to the exercise at hand. Once substantial improvements to the engagement and coordination of motor units taper off, the body’s developmental focus shifts toward what we commonly conceive to be the immediate effect of working out: muscular growth. Occurring simultaneously with the initial neuromuscular improvements, muscular development begins to take on a greater role in improving strength and endurance. While the neuromuscular interface is key to the body’s ability to signal and generate force, each motor unit’s ability to generate force is also a function of its actin and myosin filaments. Thus, the growth of these filaments and the resulting growth of muscle—muscular hypertrophy—are important aspects of the gains from exercise.1 In the intermediate stage of exercise, subsequent improvements are smaller and follow predominantly from increases in muscular size. Hence, greater consideration of the volume, intensity, and duration of activity with respect to fitness and exercise goals is necessary to achieve the desired increases in strength and endurance as well as health thresholds.1
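The interplay between spatial recruitment and rate coding described earlier can be caricatured with a toy model. This sketch is ours, with made-up numbers rather than physiological constants; it only illustrates that force grows with both the number of recruited motor units and their firing rate, with diminishing returns from rate alone.

```python
# Toy model (illustrative only): total muscle force as a function of how many
# motor units are recruited (spatial recruitment) and how fast their motor
# neurons fire (rate coding). Real physiology is far more complex.
def total_force(units_recruited, firing_rate_hz, force_per_unit=0.5):
    """Force grows with both active unit count and firing rate."""
    # saturating rate term: doubling the firing rate helps less at high rates
    rate_factor = firing_rate_hz / (firing_rate_hz + 20.0)
    return units_recruited * force_per_unit * rate_factor

novice = total_force(units_recruited=40, firing_rate_hz=10)
trained = total_force(units_recruited=80, firing_rate_hz=30)
print(novice, trained)  # the trained values yield several times more force
```

The early "leaps and bounds" of a new lifter correspond, in this cartoon, to rapid growth in both arguments as the nervous system learns to drive the muscle.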
We can get even more particular with respect to the growth of muscle fibers by considering type I and type II fibers. Type I fibers, slow-twitch fibers that can carry more oxygen and sustain longer aerobic activity, work more for long-distance runners. These are adapted for improved endurance via increased mitochondrial enzyme activity, fatty acid metabolism, and mitochondrial oxygen uptake.1 On the other hand, type II fibers, fast-twitch fibers that can contract more quickly and more forcefully in much smaller bursts, work more for sprinters and
heavy lifters, and adapt accordingly.1,2 Given these differences, it also makes sense to prioritize exercises that focus on and develop one fiber type preferentially. In short, long-distance runners should focus on aerobic exercises, as opposed to sprinting or heavy lifting. Conversely, focused weightlifters should limit dedicated aerobic exercise to concentrate on training the more relevant type II fibers involved in heavy lifting.1,2
Ultimately, there is a lot of variation in people’s fitness goals. Some people just want to lose or gain weight. Others want to run a marathon. Still others want to lift heavy things. People can have any of many, many goals in this regard, and appropriately, there are many, many variations on what one can do. While our general fitness goal to be healthy need not be so strictly examined under the light of physiology, we can certainly optimize our regimens to achieve our more specific goals armed with this knowledge.
REFERENCES [1] McArdle, W. D.; Katch, F. I. Exercise Physiology: Nutrition, Energy, and Human Performance; 7th ed.; Lippincott, Williams & Wilkins: Baltimore, MD, 2010. pp. 371-392, 453-457, 519-525. [2] Tipton, C. M. Exercise Physiology: People and Ideas. Oxford University Press: Oxford, 2003. pp. 58, 79-87.
HARVARD UNIVERSITY
TRUST YOUR GUT Treating autism with probiotics LINDA XU Despite recently emerging as the fastest-growing and one of the most heavily researched neurodevelopmental disorders in the United States,1 autism spectrum disorder (ASD) is still largely incurable. ASD has traditionally been viewed as a neurological disorder, as its symptoms include deficits in language and social behavior, but new research has brought focus onto the many disorders associated with ASD, including diabetes, sleep disorders, and—of particular interest—gastrointestinal (GI) defects.2 In a paper published in Cell, Hsiao et al.3 propose that GI defects may in fact be a cause of the behavioral abnormalities displayed in ASD, and furthermore, that these behavioral abnormalities may be corrected through treatments that target the gut. In order to confirm the proposed link between ASD and the gut, the authors use an ASD mouse model known to display the typical behavioral features of ASD. By showing that these “ASD mice” also suffer associated GI defects, including “leaky gut” (increased intestinal wall leakage) and gut composition imbalance, the authors demonstrate a clear correlation between ASD and GI defects. The affected mice were then treated with probiotics—“good” bacteria that are commonly taken as dietary supplements.
The authors find that treating mice with a probiotic known as Bacteroides fragilis not only corrects gut defects but also improves behavioral abnormalities, decreasing anxiety-like behavior and increasing communication. These results strongly suggest that the behavioral abnormalities seen in ASD are in fact caused by gut defects. The authors strengthen their claim by proposing that these gut defects may stem from changes in the levels of certain metabolites (small molecules) in the gut. They point specifically to a metabolite known as 4-ethylphenylsulfate (4EPS), which produces anxiety-like behavior in mice and has a close parallel to a human metabolite known to be increased in ASD patients.4 By exploring the connection between the gut and the brain, Hsiao et al. demonstrate the great potential for gut-targeted treatment of ASD.
BEHAVIORAL ABNORMALITIES IN ASD ARE CAUSED BY GUT DEFECTS
Furthermore, it should be noted that ASD is not the only neurological disorder associated with gut abnormalities, as Rett syndrome,5 cerebral palsy,6 and major depression7 have all been found to feature GI defects as well. The implications of this study are clear: taking a broader perspective on the factors that influence our health will open up countless new avenues for research and healthcare. By opening our eyes to a new understanding of ASD and the gut, this study has already brought us one step closer to a cure.
REFERENCES [1] Bishop, D. V. M. PLoS ONE 2010, 5, e15112. [2] Kohane, I. S. et al. PLoS ONE 2012, 7, e33224. [3] Hsiao, E. Y. et al. Cell 2013, 155, 1451-1463. [4] Altieri, L. et al. Biomarkers 2011, 16, 252-260. [5] Motil, K. J. et al. J. Pediatr. Gastroenterol. Nutr. 2012, 55, 292-298. [6] Campanozzi, A. et al. Brain Dev. 2007, 29, 25-29. [7] Graff, L. A. et al. Inflamm. Bowel Dis. 2009, 15, 1105-1118.
RICE UNIVERSITY
FARMING THE UNKNOWN DANIEL COLCHADO
THE ROLE OF THE LIVESTOCK INDUSTRY IN PRESERVING HUMAN HEALTH The livestock industry is a vast network of expectations. A farmer expects meat, dairy, and eggs from his animals, and a consumer expects to obtain these products from grocery stores. Industry expects profitable revenue from the sales of these products. Given the intensiveness of modern agriculture, this chain of action has been massively amplified. Meat production has doubled since the 1950s, and currently almost 10 billion animals—not including additional goods such as dairy and eggs—are consumed every year in the United States alone.1 Due to the magnitude of this industry, even small changes can bring about large scale effects. Infections exemplify this chain of events. Though animal infections might initially seem to be a lesser concern, their effects on human health are rapidly becoming more pronounced and pervasive. During the past few years, an increased number of food-
SINCE 1980, 87 NEW HUMAN PATHOGENS HAVE BEEN IDENTIFIED... 80% ARE ZOONOTIC
borne disease outbreaks have been traced to products such as beef, pork, poultry, and milk.2 These outbreaks are especially concerning because the pathogens involved are new strains previously harmless to humans. These pathogens have become infectious to humans due to mutations that occur in animal hosts; such diseases that jump from animals to humans are termed zoonotic. Within the food industry, zoonotic illnesses can be transmitted by consumption or through contact with animals. Crucially, zoonotic cases are much harder to treat because there is no precedent for their treatment. How often does this transmission occur? Since 1980, 87 new human pathogens have been identified, out of which a staggering 80% are zoonotic.3 Furthermore, many of these have been found in domestic animals, which serve as reservoirs for a variety of infectious agents. The large number of zoonoses raises several key questions. Are these outbreaks the product of our management of livestock or simply a natural phenomenon? How far could zoonotic illnesses escalate in terms of human cases and mortality?
What practices or perspectives should we modify to prevent further damage? Prominent virologist and Nobel laureate in medicine Sir Frank MacFarlane Burnet provided a timeless perspective on this issue in the mid-20th century. He conceptualized infectious disease as being as fundamental as other interactions between organisms, such as predation, decomposition, and competition.4 Taking into account how we have harnessed nature, particularly with the aim of producing more food, we can see how farming animals has also inadvertently farmed pathogens. Treating animals as living environments that can promote pathogenic evolution and diffusion is crucial to creating proper regulations in the livestock industry that protect the safety of consumers in the long run. Current practices risk the emergence of zoonotic diseases by facilitating transmission under heavily industrialized environments and by fostering antibiotic resistance in bacteria. Cooperative action between government, producers, and educated consumers is necessary to improve current practices and preserve good health for everyone.
INFLUENZA: OLD THREATS, NEW FEARS The flu is not exactly a stranger to human health, but we must realize that the influenza virus affects not only humans but also other species such as pigs and birds. In fact, what is known as “the flu” is not a single virus but rather a whole family of viruses. The largest family of influenza viruses, influenza A, comprises different strains of viruses classified with a shorthand notation for their main surface glycoproteins—H for hemagglutinin and N for neuraminidase. These surface glycoproteins are important because their structure and shape determine whether the virus will attach to the cellular receptors of its host and infect it. For example, the
influenza H7N7 virus has a structure that allows it to specifically infect horses but not humans. Trouble arises when these surface glycoproteins undergo structural changes and the virus gains the capacity to infect humans, as was the case during the 2003 avian flu and the 2009 swine flu pandemics, when the influenza virus jumped from poultry and swine to humans. Since 2003 when it was first documented in humans, avian influenza H5N1 has been responsible for over 600 human infections and associated with a 60% mortality rate due to severe respiratory failure.5 The majority of these cases occurred in Asia and Africa, particularly in countries such as Indonesia, Vietnam, and Egypt, which accounted for
over 75% of all cases.5,6 Though no H5N1 cases have been reported in the U.S., there have been 17 low-pathogenicity outbreaks of avian flu in American poultry since 1997, and one highly pathogenic outbreak of H5N2 in 2004 with 7,000 chickens infected in Texas.5 Poultry is not the only area of livestock industry where flu viruses are a human health concern. The 2009 outbreak of influenza H1N1—popularly termed “swine flu” from its origin in pigs—was officially declared a pandemic by the WHO and the CDC. With an estimated 61 million cases and over 12,000 deaths attributed to the swine flu since 2009, H1N1 is an example of a zoonotic disease that became pandemic due to an interspecies jump that turned it from a regular pig virus to
[Figure: The influenza virus, with its surface glycoproteins hemagglutinin (H) and neuraminidase (N). The structure and shape of a virus’s surface glycoproteins determine whether it will attach to a host cell’s receptors.]
a multi-species contagion.7 The theory of how influenza viruses mutate to infect humans includes the role of birds and pigs as “mixing vessels” for mutant viruses to arise.8 In infected pigs, the genetic material from pig, bird, and human viruses (in any combination) reassorts within the cells to produce a virus that can be transmitted among several species. This process also occurs in birds with the mixing of human viruses and domestic and wild avian viral strains. If this theory is accurate, one can infer that a high density of pigs in an enclosed area could easily be a springboard for the emergence of new, infectious influenza strains. Thus, the “new” farms of America where pigs and poultry are stocked to minimize space and maximize production provide just the right environment for one infected pig to transfer the disease to the rest. Human handlers then face the risk of
exposure to a new disease that can be as fatal as it is infectious, as the 2009 swine flu pandemic and the 2003 avian flu cases demonstrated. Adequate care of our food sources should therefore be a priority not only for consumers seeking to avoid disease but also for national and global health.
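The HxNy shorthand described above is mechanical enough to check by machine. As a small illustration (ours, not part of any surveillance tool), this sketch parses an influenza A subtype name into its two glycoprotein numbers:

```python
import re

# Toy helper (illustration only): parse the influenza A shorthand, e.g. "H5N1",
# into its hemagglutinin (H) and neuraminidase (N) subtype numbers.
def parse_subtype(name):
    m = re.fullmatch(r"H(\d+)N(\d+)", name)
    if not m:
        raise ValueError(f"not an influenza A subtype: {name}")
    return {"hemagglutinin": int(m.group(1)), "neuraminidase": int(m.group(2))}

print(parse_subtype("H5N1"))  # prints {'hemagglutinin': 5, 'neuraminidase': 1}
```

The point of the notation is exactly this regularity: H7N7 and H5N1 differ only in which variants of the two surface glycoproteins the strain carries, and that difference decides which hosts it can infect.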
FEEDING OUR FOOD: ANTIBIOTIC RESISTANCE IN THE FOOD INDUSTRY Interspecies transmission is not the only way through which new diseases can become pathogenic to humans. In the case of bacteria, new pathogenic strains can arise in animals from the action of another mechanism: antibiotic resistance. Antibiotic resistance is the result of the fundamental concept of evolutionary biology—individuals with advantageous traits that allow survival and reproduction will perpetuate these traits
to their offspring. Even within the same population, antibiotic resistance varies among individual bacteria—some have a natural resistance to certain antibiotics while others simply die off when exposed. Thus, antibiotic use effectively selects for bacteria with such resistance or, in some cases, total immunity. In this way, the livestock industry provides a selective environment. The rise of these resistant strains—commonly termed “superbugs” for their extensive resistance to a variety of common antibiotics—is a serious threat in hospitals; there, antibiotic use is widespread, and drug resistance causes almost 100,000 deaths each year from pathogens such as methicillin-resistant Staphylococcus aureus, Candida albicans, Acinetobacter baumannii, and dozens of other species.9 Our attention should not be exclusively focused on hospitals as sources of superbug infections, however. The
widespread use of antibiotics in the livestock industry to avoid common bacterial diseases in food animals also poses the risk of breeding superbug strains, and it has not been without its share of outbreaks and casualties. The Center for Science in the Public Interest (a non-profit organization that advocates for increased food safety in the U.S.) has reported that antibiotic-resistant pathogens have caused 55 major outbreaks since 1973, and that the majority of cases have come from dairy products, beef, and poultry. Furthermore, the same study reported that most of these pathogens exhibit resistance to over 7 different antibiotics.10 One of the main culprits identified in these outbreaks is the bacterium Salmonella typhimurium, which, along with other Salmonella species, accounts for over half of these cases. Salmonella is especially dangerous because it is so pervasive; it is able to lie dormant in a variety of livestock products such as uncooked eggs, milk, cheese, poultry, and beef until incubating in a live host for infection. Escherichia coli O157:H7 (commonly known as E. coli), a bacterium that usually resides in the intestines of mammals, has also been implicated in a number of outbreaks related primarily to beef products. Overall, antibiotic-resistant pathogens have caused over 20,500 illnesses, with more than 31,000 hospitalizations and 27 deaths.10 These cases demonstrate how the widespread use of antibiotics in the food industry is perpetuating the risk of infections and damage to human health from antibiotic-resistant bacteria. Currently, the Food and Drug Administration (FDA) in the U.S. 
still approves of the use of antibiotics as a treatment for sick animals; furthermore, the organization allows antibiotic use in healthy animals as disease prevention and even as growth enhancers.11 In fact, over 74% of all antibiotics produced in the United States are used in livestock animals for these reasons.9, 11 Using antibiotics in non-infected animals in this way generates a greater environmental pressure for superbugs to emerge; this type of use in particular should be restricted. Managing the use of antibiotics to reduce the risk of emerging superbug strains should be prioritized in the food industry just as it is in health care.
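The selection mechanism described above can be caricatured in a few lines of code. This is a toy simulation of ours with made-up kill and growth rates, not data from the article; it only shows why routine antibiotic exposure enriches a population for its rare resistant members.

```python
# Toy simulation (our illustration with made-up rates, not data from the
# article): repeated antibiotic exposure enriches a bacterial population
# for resistant cells, because susceptible cells die far more often.
def one_round(resistant, susceptible, kill_rate=0.9):
    # antibiotics kill most susceptible cells but few resistant ones...
    susceptible = int(susceptible * (1 - kill_rate))
    resistant = int(resistant * (1 - kill_rate * 0.05))
    # ...and the survivors of both kinds then double
    return resistant * 2, susceptible * 2

resistant, susceptible = 10, 10_000  # resistance starts out rare
for _ in range(10):
    resistant, susceptible = one_round(resistant, susceptible)
share = resistant / (resistant + susceptible)
print(f"resistant share after 10 rounds of exposure: {share:.0%}")
```

After ten rounds the once-rare resistant strain dominates the population, which is the logic behind restricting prophylactic antibiotic use in healthy animals.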
HUNGRY FOR A SOLUTION Still open to debate is the question of how many resources should be allocated to the problem of widespread antibiotic use. Currently, diseases are transmitted from animals to humans faster than they are evolving within humans. Not only that, many of these zoonotic diseases have high potential to become pandemics due to their high infectivity, as in the case of H5N1 avian influenza. Measures to prevent the transmission of viruses among livestock animals and to reduce the rate of emergent antibiotic-resistant strains need to take into account the environmental and evolutionary nature of a zoonosis. More thorough surveillance of livestock animals and monitoring for signs of newly emerging strains are important in preventing the spread of such deadly pathogens. This strategy requires intensive molecular analysis, a larger number of professionals working in the field, and a nationwide initiative. Keeping an accurate record of where new strains arise and of the number of animal and human cases would significantly improve epidemiological surveillance of infectious disease. This process requires cooperation at multiple levels to ensure that the logistics and public support for these initiatives are ongoing and effective. Additionally, educating people about the nature of zoonotic pathogens is crucial to fostering the dialogue and action necessary to secure the good health of animals, producers, and consumers.
Antibiotic-resistant pathogens such as E. coli and Salmonella have caused 55 major outbreaks since 1973. The cases have mainly been a result of infected poultry, dairy products, and beef.
REFERENCES [1] Johns Hopkins Center for a Livable Future: industrial food animal production in America. Fall 2013. http://www.jhsph.edu/research/centers-and-institutes/johnshopkins-center-for-a-livable-future/_pdf/research/clf_reports/CLF-PEW-for%20Web.pdf (accessed Oct. 24, 2013). [2] Cleaveland, S. et al. Phil. Trans. R. Soc. B. 2001, 356, 991. [3] Watanabe, M. E. BioScience 2008, 58, 680. [4] Burnet, F. M. Biological Aspects of Infectious Disease; Macmillan: New York, 1940. [5] Centers for Disease Control and Prevention: avian flu and humans. http://www.cdc.gov/flu/avianflu/h5n1people.html (accessed Oct. 12, 2013). [6] Cumulative number of confirmed human cases of avian influenza A(H5N1) reported to WHO. http://www.who.int/influenza/human_animal_interface/H5N1_cumulative_table_archives/en/ (accessed March 14, 2013). [7] Chan, M. World now at the start of the 2009 influenza pandemic. http://www.who.int/mediacentre/news/statements/2009/h1n1_pandemic_phase6_20090611/en/ (accessed March 14, 2013). [8] Ma, W. et al. J. Mol. Genet. Med. [Online] 2009, 3, 158-164. [9] Mathew, A. G. et al. Foodborne Pathog. Dis. 2007, 4, 115-133. [10] DeWaal, C. S.; Grooters, S. V. Antibiotic Resistance in Foodborne Pathogens. http://cspinet.org/new/pdf/outbreaks_antibiotic_resistance_in_foodborne_pathogens_2013.pdf (accessed March 14, 2014). [11] Shames, L. Agencies have made limited progress addressing antibiotic use in animals. http://louise.house.gov/images/user_images/gt/stories/GAO_Report_on_Antibioic_Resistance.pdf (accessed Jan 20, 2014).
RICE UNIVERSITY
PERSONALIZED HEALTHCARE: THE NEW ERA POOJA YESANTHARAO
In medicine, personalized healthcare has become more important, as general medical treatments no longer “fit all.” Specifically, personalized healthcare describes the ability to use an individual’s genetic characteristics to diagnose his or her condition with more precision and finesse. With this development, physicians can select treatments that have increased chances of success and minimized possibilities of adverse reactions. However, personalized medicine does not just enable improved diagnostics and therapeutics; it also yields the ability to better predict disease susceptibility. Thus, it can be used to devise a comprehensive plan to avoid a disease or reduce its extent.1 The advent of personalized healthcare has brought a preventative aspect to a field that has traditionally employed a reactive approach,2 where diagnosis and treatment occur after symptoms appear. Medicine has always been somewhat personalized: following examination, doctors tailor treatment to individuals. However, the new movement to personalize medicine takes this individualization to a genetic level. The International Human Genome Sequencing Consortium reported the first genome sequence in 2001. Now, scientists can determine information about human
physiology and evolution to a detail never before possible, creating a genetics-based foundation for biomedical research.3 Genes can help determine an individual’s health, and scientists can better identify and analyze the causes of disease based on genetic polymorphisms, or variations within genes. This scientific advancement is an integral factor in the personalized healthcare revolution. Technological developments that allow human genome sequencing on a real time scale at relatively low costs have also helped to move this new era of medicine forward.2 The science behind such personalized treatment plans and prediction capabilities follows simple logic: scientists can create a guide for treatment by identifying and characterizing genomic sequences associated with particular responses, such as sensitivity or resistance, to chemotherapy drugs. They can then use these patterns to understand the molecular mechanisms that create such responses and categorize genes based on these pathways and mutations.4 Therefore, physicians can compare the genetic makeup of patients’ tumors to these libraries of information. This method, known as genetic
“GENES CAN HELP DETERMINE AN INDIVIDUAL’S HEALTH, AND SCIENTISTS CAN BETTER IDENTIFY AND ANALYZE THE CAUSES OF DISEASE BASED ON GENETIC POLYMORPHISMS, OR VARIATIONS IN GENES.”
profiling, matches patients to successfully treated individuals to provide effective treatment that increases the accuracy of predictions and minimizes adverse reactions. For example, efforts are underway to create individualized cancer therapy based on molecular analyses of patients. Traditionally, physicians predict cancer recurrence based on patterns from past cases. To do so, they look specifically at metrics such as tumor size, lymph node status, response to systemic treatment, and remission intervals.4 While this type of prediction has merit, it provides only generalized estimates of recurrence and survival for patients. Oftentimes, individuals with little risk of cancer relapse are put through chemotherapy due to these inaccurate predictions. With the new age of personalized medicine, powerful analytical methods, such as protein profiles and analyses of dysfunctional molecular pathways, will allow physicians to predict the behavior of a patient’s tumors more accurately. Personalized cancer treatment can plot the clinical course for each patient with a particular disease based on his or her own conditions rather than generalizations from a heterogeneous sample of past cases. This type of healthcare thus improves upon current medicine by creating a subset of homogeneous groups within past cases through genetic profiling, allowing physicians to make a more accurate prediction of an individual’s response to treatment.
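The matching step behind genetic profiling can be sketched as a nearest-neighbour lookup. The example below is a cartoon of ours: the gene names are real gene symbols but the 0/1 variant calls, profiles, and outcomes are entirely hypothetical, not data from the cited studies.

```python
# Toy sketch of "genetic profiling": match a patient's variant profile
# against a small library of profiles with known treatment responses.
# All profiles and outcomes here are hypothetical illustration data.
LIBRARY = [
    ({"BRCA1": 1, "TP53": 0, "EGFR": 1}, "responded"),
    ({"BRCA1": 1, "TP53": 1, "EGFR": 0}, "responded"),
    ({"BRCA1": 0, "TP53": 1, "EGFR": 1}, "resistant"),
    ({"BRCA1": 0, "TP53": 0, "EGFR": 1}, "resistant"),
]

def similarity(a, b):
    """Fraction of shared genes on which two variant profiles agree."""
    genes = a.keys() & b.keys()
    return sum(a[g] == b[g] for g in genes) / len(genes)

def predict(patient):
    # nearest-neighbour lookup: predict the outcome of the closest profile
    _, outcome = max(LIBRARY, key=lambda entry: similarity(patient, entry[0]))
    return outcome

print(predict({"BRCA1": 1, "TP53": 0, "EGFR": 0}))  # prints "responded"
```

Real profiling compares thousands of loci and weights them by pathway, but the principle is the same: group the patient with the most genetically similar past cases and predict from their outcomes.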
Additionally, personalized medicine can prevent medical maladies such as adverse drug reactions, which lead to more than two million hospitalizations and 100,000 deaths per year in the U.S. alone.5 It can also lead to safer dosing and more focused drug testing. However, this approach is hindered by the nascent nature of genomics technology and the difficulty in identifying all possible genetic variations. Particularly challenging are cases where certain drug reactions result from multiple genes working in conjunction.6 Furthermore, opponents of gene sequencing argue that harnessing too much predictive information could be frightening for the patient. For example, patients shown to have a genetic predisposition towards a degenerative disease such as Alzheimer’s disease could experience serious psychological effects and depression due to a sense of fatalism. This knowledge could adversely impact their motivation to reduce risks. This possibility has been demonstrated in clinical studies regarding genetic testing for familial hypercholesterolaemia, which measures predisposition to heart disease.7 This dilemma leads to a fundamental question of gene sequencing—how much do we really want to know about our genetic nature? Today, personalized medicine is starting to make its mark through some commonly available tests such as the dihydropyrimidine dehydrogenase test, which can predict if a patient will have severe, sometimes fatal,
reactions to 5-fluorouracil, a common chemotherapy medicine.8 Better known are the genetic tests for BRCA1 and BRCA2 mutations that reveal an increased risk of breast cancer,9 popularized by actress Angelina Jolie’s preventative double mastectomy. With these and other upcoming genetics-based tests, the era of personalized medicine has begun, and only time can reveal what will come next.
REFERENCES [1] Center for Personalized Genetic Medicine. http://pcpgm.partners.org/ about-us/PM (accessed Oct. 24, 2013). [2] Galas, D. J.; Hood L. IBC 2009, 1, 1-4. [3] Venter, J. C. et al. Science 2001, 291, 1304-1351. [4] Mansour, J. C.; Schwarz, R. E. J. Am. Coll. Surgeons 2008, 207, 250-258. [5] Shastry, B. S. Nature 2006, 6, 16-21. [6] CNN Health. http://www-cgi.cnn. com/HEALTH/library/CA/00078.html (accessed Oct. 24, 2013). [7] Senior, V. et al. Soc. Sci. Med. 1999, 48, 1857-1860. [8] Salonga, D. et al. Clin. Cancer Res. 2006, 6, 1322. [9] National Cancer Institute fact sheet. http://www.cancer.gov/cancertopics/ factsheet/Risk/BRCA (accessed Oct. 24, 2013).
STANFORD UNIVERSITY
THE INTERSECTION BETWEEN REALITY AND VIRTUAL REALITY
AN INTERVIEW WITH PROFESSOR JEREMY BAILENSON
BY SARAH HIRSHORN

Professor Jeremy Bailenson's main research interest revolves around the concept of digital human representation in the context of immersive virtual reality. Currently, he is an Associate Professor at Stanford University's Department of Communication and the Director of Graduate Studies for Stanford's doctoral program in Communication. In addition, he is an Associate Professor by courtesy in the Program in Symbolic Systems, and a Senior Fellow in Stanford's Woods Institute for the Environment.

Sarah Hirshorn: What initially sparked your interest in the realm of human interaction with the virtual environment?

Jeremy Bailenson: So I got my PhD in 1999 studying artificial intelligence and running experiments to see how the mind was structured when people looked at categories and did reasoning. And while I loved doing research, I wasn't enamored with that field any longer, so I decided to take a post-doc in a different area. At the time, I was reading a science fiction novel called Neuromancer, which came out in the early '80s and really outlined a future of avatars and agents in virtual reality. That inspired me to think of a different way of applying my skills, and I took a post-doc at UC Santa Barbara in 1999, where I learned how to program, do the engineering work behind VR, and also ask social questions about the nature of human interaction inside virtual reality: about what it means to be a person, a human, and what the self means in the digital age, and move forward from there.

SH: A few weeks ago you had the opportunity to give Facebook founder Mark Zuckerberg a tour of the virtual reality lab before Facebook paid $2 billion for the virtual reality headset company Oculus. Can you tell us more about the Oculus Rift technology and your day with Zuckerberg?

JB: He came to visit and we spent about two hours together. That was before he bought Oculus, so he came by and we talked about the pro-social applications that we can use virtual reality for, like education, teaching about the environment, changing the nature of business, travel, teaching empathy and altruism. The company Oculus makes a very cheap and high quality headset that's a fraction of the cost of its competitors. So it implements the system that we have in the lab, but is much more affordable and could possibly be a consumer product.

SH: Could you explain what it is like to experience a virtual environment and how an individual's experience interacting with a virtual environment differs from their normally perceived world?

JB: Perception is active. So right now, with you across the room from me, I have to do a lot of work, say, to understand
color and depth; my mind is constantly actively working to figure out how to perceive the physical world. Virtual reality substitutes human senses with digitally created ones. So instead of getting light from the physical world that bounces off of your face, I would be wearing a head mounted display that creates light for me in the eyes. Or, instead of hearing that sound come from my pocket, I would have headphones that would differentiate volume to create the illusion that the sound is coming from there. So, from a cognitive
psychology perspective, the brain has not yet evolved to differentiate virtual stimuli from physical stimuli. SH: I’ve read that you see virtual human interaction as having the potential
to promote weight loss by showing individuals avatars of their leaner selves. What could be other future applications of this technology?

JB: One of the outcomes that we are most excited about is teaching environmental awareness. By putting somebody in futures where one can experience how human activity is changing the oceans, air quality, and water, it can make the connection between one's actions and their consequences less abstract. Another application is education. I am working with John Mitchell, the Vice Provost of Online Education, to figure out how we can take that to the next level. Instead of watching a video of a professor, can we build a system where somebody is in a chemistry lab, seeing and feeling all the stimuli, or in a theater department, actually acting and experiencing all the pedagogy in a way that is more constructive?

SH: Are there any potential dangers with applications of the technology?

JB: The primary concern in my mind is the nature of addiction. The majority of students seem to be texting and walking, texting and biking. So right now, media is so compelling, even with just
words, that people cannot walk without having to check their devices. I am building applications in which, instead of seeing words from your friend, it is as if your friend is right beside you. So, as the technology becomes more immersive and more compelling, how do we prevent humans from being so absorbed in the virtual stimuli that it changes the way that they should naturally interact in the physical world?

SH: How long do you think it will take before virtual reality technology becomes readily available to the public?

JB: In virtual reality, there are three components: tracking your physical movements, updating a digital scene, which we call rendering, and then displaying new perceptual information to the eyes, ears, and skin. The Microsoft Kinect has been the most impressive advance in tracking. You don't have to wear anything on your body, and it tracks 24 points on it, with the x, y, and z position at each point. Videogame technology has pushed rendering quite well, so we can already do graphics very elaborately and very quickly. The last challenge is display: the helmet that you wore upstairs costs about $30,000. The new helmets are just a
couple hundred dollars. So we are at the point now where the technology is finally getting cheap enough that you can think about using it on a large scale.

SH: Do you think that there are any elements of reality that the virtual world will never be able to replicate?

JB: I believe that all reality is virtual, in the sense that the human brain has to interpret the light that hits your retinas, the sound waves on your ears, and the stimuli your skin feels. A translation occurs between the actual process and what you perceive in your brain as an experience. So, given that a mental experience is simply an interpretation of all the stimuli that you receive, I believe it will be possible to replace the stimulation that your body gets with digitally created versions.
CURRENT SCIENCE AND POLICY OF BYCATCH REDUCTION: AN INTERVIEW WITH PROFESSOR LARRY CROWDER
BY AMANDA ZERBE

Professor Larry Crowder is the science director at the Center for Ocean Solutions (COS). He is also the Ed Ricketts Professor of Biology at Hopkins Marine Station and a senior fellow at the Stanford Woods Institute for the Environment, both part of Stanford University. His recent research has focused on marine conservation, including work on bycatch, spatial ecological analysis, nutrients and low oxygen, sustainable seafood, ecosystem-based management, marine spatial planning, and governance.

ADZ: Thank you very much for agreeing to speak with me. Let's just start out with an overview of your research. What are you currently working on and/or most excited about?

LBC: There are a couple of projects that I'm really excited about related to my work at the Center for Ocean Solutions. The first one is exploring the potential for using environmental DNA as a way to monitor relative abundance for vertebrate animals in the ocean. Basically, what this technique requires is collecting water samples and extracting free environmental DNA. All animals release DNA, which decays in two to three days, so if you amplify the DNA that's in a sample of water, you get the DNA signature
of everything that’s been in the water recently. We tested this approach in the open ocean tank at the Monterey Bay Aquarium. We’re using something called next generation sequencing, in which the number of DNA strands that you get is proportional to the number in the sample. This technique could reflect relative abundance of species as well as presence-absence. If we treat the Outer Bay tank as a black box, we could detect all of the bony fishes in their relative abundance in the tank. In the first round of tests, we didn’t get the sharks, the turtles, or the ocean sunfish to
represent very well, but the more abundant bony fishes did. There’s also potential to use this technique for surveying or censusing animals in the open ocean. In general whales aren’t a problem because they’re easily observable, but sharks can be hard to observe. So if you can extract their DNA, that’s enormously helpful. Another project is about sustainability
in small-scale fisheries. We've made a lot of progress with new management innovations for big industrial fisheries, but those technologies and approaches haven't yet been extended to small-scale fisheries. Globally, 95% of the people who are supported by fisheries work in small-scale fisheries. Small-scale fisheries might harvest as many fish as industrial fisheries in terms of biomass, so it's a whole sector that really hasn't been carefully examined by science or effectively managed. Small-scale fisheries differ a little from industrial fisheries, in the sense that the communities that depend on the resources are tightly coupled to the resources. This means that you have to think about solutions that promote the sustainability of biophysical ecosystems, and also the sustainability of the communities that depend on those ecosystems. It's inherently much more interdisciplinary work. There's a whole field of study about social-ecological systems, under the umbrella of sustainability science, that we're trying to apply to small-scale fisheries globally to create interventions with communities to make their fisheries more sustainable. What we're trying to do at COS is to get out front with those initiatives and make sure that, going in, we use all the information that we have about how those coupled systems work. The goal is to design
Amplifying DNA in seawater samples allows scientists to determine the types of animals that have been in the area in the past 2-3 days.
interventions in a way that is most likely to be successful, both for the fish and for the people who depend on the fish. Usually in environmental issues, things are cast as either for the environment or for people. What we need is an 'and' in that sentence: good for the environment and good for people, not or. The last project looks at all of the bycatch information that is out there. What do you do with that information if you know that there are fisheries that go after a target species, like swordfish, and catch albatrosses or sea turtles in the process? Do you just close the swordfish fishery? Not likely to happen. Or do you try to figure out how to fish for swordfish while minimizing the impact on the turtles and the albatrosses? Some people have proposed getting that done by using different fishing gears, or setting the gears in different ways that reduce the impact on the non-target species. The new concept is something we're calling dynamic ocean management. It builds off the idea that if you have satellite-tagged all these animals, and you can model their movement and their habitat distributions based on remotely sensed oceanography, you can model where the bycatch species are likely to be relative to those oceanographic features. The thing is, they don't sit in one place, so you can't close a rectangle in the ocean to fishing and protect them, because
they move. You could have protected areas that move seasonally. What happens now in bycatch management is that if there's a spatial closure, it's a big box, just because at some time during the year there's a sea turtle someplace in that box that a fisherman might catch. But if you think of the sea turtles as moving seasonally, they're in a much smaller box in May, and in June, but the box is in a different place. So the fundamental argument is: fishermen move, the ocean moves, the target species move, the bycatch species move. So why is the management static? This idea was suggested in the literature almost 15 years ago, but we're now at the cusp of having the technology and the modelling skill to be able to project, based on remotely sensed oceanography, where the sea turtles are likely to be. If you can move fishermen away from where the sea turtles are likely to be, they may be able to fish in much larger pieces of the ocean by agreeing not to fish where the sea turtles are likely to be. Once a month, or once a week, you could update where that closed area is.

ADZ: What can students and consumers do to be more conscientious about our impacts on the ocean, particularly related to fishing?

LBC: There's lots of guidance out there in terms of selecting what you choose to
eat and what you choose to buy. So on the fishing side, there’s trying to make sure that you’re making good choices and sustainable choices. On the climate side, it seems like such a big problem, but everybody’s choices have an impact on the problem. Some of it is, kind of, relieving guilt, and some of it can be really impactful. People need to think about the choices that they make when they buy food, the choices that they make when they buy cars, how you choose to live your life in terms of energy intensity and product intensity. Those are important things for people to think about. More and more, we’re finding that good environmental decisions are also good economic decisions: things that are good for the environment are also things that are good for the economy. People have put those in contrast, saying that [environmental action] is a job-killer, but the recent IPCC report which just came out suggests that we can’t afford to overlook this problem. It’s going to have potentially devastating impacts on everybody’s plans for the future. So I think it’s time to get serious about it and to get past quibbling and denying. You don’t have to worry about what’s actually causing it: the fact is that it’s measurably happening. The question is, how do you cope, as a society and as an individual, with those changes?
BREAKING DOWN THE AFFORDABLE CARE ACT
The Patient Protection and Affordable Care Act, commonly referred to as the PPACA, ACA, or Obamacare, is the most extensive attempt at health care reform since the introduction of Medicaid and Medicare in 1965. Among the ACA's main goals are providing comprehensive, affordable coverage to the estimated 84 million Americans who are either uninsured or have poor health coverage,1 and making it easier for individuals to obtain and keep private insurance. Because effective health coverage is a positive indicator of overall good health,2 the ACA attempts to extend coverage to these individuals with the goal of improving the health of the American population.

In this article, we intend to provide an understandable, unbiased breakdown of some of the most important and relevant parts of the ACA. We will address the changes that affect insurance policies, changes to Medicaid, and the much-discussed employer and individual mandates. These are just a few aspects of the 2,000-page law, but the aim of this article is to provide a working knowledge of the law's philosophy, execution, and relevance to students. It explains the most important changes to every insurance plan's coverage, new rules and regulations that insurance companies must follow, and the law's specific impact on young adults.
INSURERS CANNOT IMPOSE ANNUAL OR LIFETIME SPENDING LIMITS Before the Affordable Care Act became law, an insurance company could limit the amount it would pay doctors. This could be enforced as an annual limit or as a limit for the lifetime of the plan. For most people with health insurance, this was not a major concern; about 80% of plans had a lifetime cap of over two million dollars.3 However, patients with chronic diseases often surpassed this limit, especially patients who required organ transplants, which often had a separate cap. Milliman, Inc. puts the total cost of a kidney transplant at $260,000,4 so an individual whose plan had a $250,000 transplant-specific limit would pay at least $10,000 out of pocket.
Transplants are usually a last resort; by then, patients have often already paid tens of thousands of dollars to manage their diseases. Medical bills are the leading cause of bankruptcy in the United States,5 and by eliminating all annual and lifetime spending limits (a change that took effect January 1, 2014), the Affordable Care Act gives insured individuals peace of mind, knowing that they are more than an accidental fall or a bad gene away from medical bankruptcy.
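The out-of-pocket arithmetic behind a coverage cap can be sketched in a few lines (a minimal illustration; the function name is ours, and the dollar figures are the Milliman transplant estimate and the hypothetical $250,000 cap cited above):

```python
def out_of_pocket(total_cost: float, plan_cap: float) -> float:
    """Patient's bill once costs exceed the plan's coverage cap."""
    return max(0.0, total_cost - plan_cap)

# Figures from the Milliman estimate discussed above:
kidney_transplant_cost = 260_000   # total cost of a kidney transplant
transplant_cap = 250_000           # a plan's transplant-specific limit

print(out_of_pocket(kidney_transplant_cost, transplant_cap))  # prints 10000
```

With spending limits eliminated, the cap effectively becomes infinite and the patient's cap-driven balance drops to zero.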
INSURERS CANNOT DROP POLICYHOLDERS MID-TERM Before the current healthcare reform, insurance companies could drop covered persons from their policies due to sickness. If a policyholder contracted an illness or was diagnosed with a chronic condition, an insurer could cancel the policyholder's plan to avoid reimbursing doctors in excess of the policyholder's premiums. The average medical bills for someone in this position total over $20,000.5 This result is part of a recurring theme of health insurance before healthcare reform: the more coverage individuals might need, the less they were likely to receive. The ACA also prevents what the law calls "frivolous cancellations."6 An insurer could previously cancel a plan if a mistake was found on a policyholder's insurance application, even if that mistake was insignificant under the person's previous plan. The insurance company could then ask for back payment of the money spent on the individual's medical care. The ACA mandates an end to these cancellations.
PRE-EXISTING CONDITIONS The ACA changed the ways in which insurance companies can choose whom they provide insurance to. Under the old health care system, insurance companies had the right to refuse to provide insurance on the grounds that the applicant had a condition or disease prior to applying for the new plan. The Affordable Care Act makes it illegal for insurance companies to deny coverage due to a pre-existing condition, meaning that patients
will now be able to safely switch plans or find a new plan even while undergoing treatment. One notable exception to this rule is "grandfathered" plans.7 These are plans that were bought individually before the PPACA went into effect, and they are not required to change their policies concerning pre-existing conditions.
YOUNG ADULTS CAN STAY ON THEIR PARENTS' PLANS UNTIL AGE 26 Young adults are key players in healthcare reform. However, their lack of participation in the health insurance marketplace presents a dilemma. Possessing a more limited budget than established adults, as well as an invincibility complex, many young adults do not see the need for health insurance. They see it as a luxury, given the usual health of a young adult. The double-edged sword of the invincibility complex is that, while most young adults are in good health and might not need insurance, their good health is important to insurance risk pools to keep rates low.8 Insurance would simply not be viable if the only people covered were older individuals who needed significant medical services. This fact necessitates having a pool of healthy people who pay premiums. That way, any given individual in the insurance pool is expected to incur fewer medical costs on average, which saves money for the insurance companies, who, in turn, can lower rates for everyone. The ACA enables young adults to remain on their parents' plans until age 26, keeping them in the insurance risk pools while not burdening them with high premiums.
BY ALEX WESS AND DAN COHEN

WASHINGTON UNIVERSITY IN ST. LOUIS

10 ESSENTIAL BENEFITS
The ACA makes sweeping changes to the basic requirements of all plans that will be offered on the newly established insurance exchanges and in certain marketplaces. These baseline requirements are established with “10 Essential Benefits” that every new plan must cover,9 including ambulatory services, mental health and substance abuse services, rehabilitative services, and maternity and newborn care, among others. Notably, these Essential Health Benefits also include full coverage for federally approved contraceptive methods. Some groups are philosophically opposed to this so-called “contraceptive mandate” and are calling it a violation of the Religious Freedom Restoration Act of 1993 (RFRA), which states that the government cannot interfere in a person’s normal exercise of religion. This issue has recently been addressed on the national stage in the Supreme Court in the case of Hobby Lobby v. Sebelius. Hobby Lobby, a for-profit corporation, succeeded in arguing that its religious rights were violated by the requirements for contraceptives, and the company has been granted a waiver from providing insurance plans covering certain contraceptives.10
MEDICAL LOSS RATIO FIXED
One of the major goals of the ACA is to reduce the unnecessary spending of health insurance companies. A novel approach implemented by the ACA is to mandate a fixed Medical Loss Ratio for these companies. In a typical health insurance company, part of any money earned from an enrollee (someone on the health plan) goes toward running the insurance company itself.11 Some of these expenses include administrative costs, overhead, and the marketing budget. The rest of the revenue funds enrollee healthcare or healthcare quality improvement. From this division of spending, we get the Medical Loss Ratio (MLR).12 The MLR is the raw amount of money that the insurance company spends on healthcare services for its enrollees and their dependents, divided by the total revenue generated by health plan premiums. Under the ACA, the government sets minimum MLRs (calculated as ratios, but more easily understood when expressed as percentages) that health insurance companies must meet. Under the new law, plans sold to large groups (employers with 51 or more employees) must maintain an MLR of 85%, while plans sold to individuals and small groups (employers with 50 or fewer employees) must maintain an MLR of 80%. Why the difference based on size? Larger group plans have more people paying premiums every month, which gives insurers larger pools of money to draw from to pay for care. This means that, proportionally, more of every dollar earned should be spent on healthcare. In theory, this reduces the cost of insurance for the average person, which falls in line with the overarching goal of making health insurance more affordable.
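The MLR rule reduces to a simple ratio check, which can be sketched as follows (the function names and the example insurer's figures are illustrative; the 85% and 80% thresholds and the 51-employee line are the ones described above):

```python
def mlr(care_spending: float, premium_revenue: float) -> float:
    """Medical Loss Ratio: spending on enrollee healthcare and quality
    improvement, divided by total premium revenue."""
    return care_spending / premium_revenue

def meets_aca_minimum(ratio: float, group_size: int) -> bool:
    """ACA minimum MLR: 85% above the 51-employee line, 80% at or below it."""
    return ratio >= (0.85 if group_size >= 51 else 0.80)

# Illustrative large-group plan: $870 million of $1 billion in premium
# revenue goes to enrollee care, for an MLR of 87%.
ratio = mlr(870e6, 1e9)
print(meets_aca_minimum(ratio, group_size=200))  # prints True: 87% clears 85%
```

Note that the same 84% ratio would fail for a large group but pass for a small one, which is exactly the size-based difference the law draws.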
SUMMARY The ACA is one of the most complicated pieces of legislation passed in recent years. Some of this complication is due to the way the US healthcare system has developed: it has grown to fit our country's health care needs, but it sometimes meets them in very inefficient or convoluted ways, especially compared to other countries that have much more unified systems. In some ways, the ACA is our government attempting to consolidate these many disparate elements into a more cohesive whole.
REFERENCES
[1] McCarter, J. http://www.dailykos.com/story/2013/05/07/1207592/New-study-84million-uninsured-or-underinsured-innbsp-2012# (accessed Apr. 15, 2014). [2] Heavey, Susan. http://www.reuters.com/article/2009/09/17/us-usa-healthcare-deathsidUSTRE58G6W520090917 (accessed Apr. 11, 2014). [3] Health insurance caps leave patients stranded. http://www.nbcnews.com/id/25644309/ns/health-health_care/t/healthinsurance-caps-leave-patients-stranded/#.U6N2cI1dWd- (accessed Apr. 10, 2014). [4] Hauboldt, R. et al. http://publications.milliman.com/research/health-rr/pdfs/2008us-organ-tisse-RR4-1-08.pdf (accessed Apr. 10, 2014). [5] Arnst, Catherine. http://www.businessweek.com/bwdaily/dnflash/content/jun2009/db2009064_666715.htm (accessed Apr. 12, 2014). [6] Department of Health and Human Services. How does the health care law protect me? https://www.healthcare.gov/how-does-the-health-care-law-protect-me/#part=5 (accessed Apr. 18, 2014). [7] Department of Health and Human Services. Grandfathered health plan. https://www.healthcare.gov/glossary/grandfathered-health-plan/ (accessed Apr. 17, 2014). [8] American Academy of Actuaries. Critical issues in health reform: risk pooling. http://www.actuary.org/pdf/health/pool_july09.pdf (accessed Apr. 16, 2014). [9] Department of Health and Human Services. Essential health benefits. https://www.healthcare.gov/glossary/essential-health-benefits/ (accessed Apr. 17, 2014). [10] The Hobby Lobby hubbub: the Supreme Court ponders the contraceptive mandate. http://www.economist.com/news/united-states/21599789-supreme-court-ponders-contraceptive-mandate-hobby-lobby-hubbub (accessed Apr. 16, 2014). [11] California Department of Managed Health Care. Useful terms. http://www.dmhc.ca.gov/FileaComplaint/UsefulTerms.aspx#.U6N7CGRdU_4 (accessed Apr. 18, 2014). [12] Center for Medicare and Medicaid Services. Medicare program; medical loss ratio requirements for the Medicare advantage and the Medicare prescription drug benefit programs. https://www.federalregister.gov/articles/2013/05/23/2013-12156/medicare-program-medical-loss-ratio-requirements-for-the-medicare-advantage-and-the-medicare#h-23 (accessed Apr. 18, 2014).
THE IMAGE OF THE DOCTOR: TELEVISION & REALITY
BY KATELYN MAE PETRIN

DOCTOR HOUSE. DOCTOR COX. DOCTOR LECTER. These names, these titles, have weight. Some esteem stems from their pop culture clout: they are fun, they are snarky, they say it like it is (even if Lecter does eat people). But there is something more than that. Their title, "Doctor," forms a central part of their identity, such that we categorize them long before we encounter their stories. Stories about doctors are not novelties. Doctor Faustus gave way to Doctor Frankenstein, and so on. But like any tale that doesn't die, the doctor's story has evolved in reflection of the cultures that retell it. In the early days of doctor dramas, clinicians were beacons of hope, perfection, and excellence.1,2 However, these stories have evolved over the past decade to focus on astute and gifted doctors plagued by very real human failings. Indeed, many of the doctors portrayed in 21st century medical dramas fit a stone-cast archetype: conflicted, tortured, and talented. Beyond their work, they often have no meaning, no human connections, and no purpose beyond the application of impressive knowledge and skills honed by decades of dedication and training. They might have
a conflicted relationship or two, if it serves the plot. Despite these personal difficulties, they always heal the patient. In short, these characters are rarely good people, but they are always good doctors. Despite their supposed medical prowess, these TV doctors overlook a serious aspect of medical responsibility. When scholars cataloged the bioethical decisions made in 50 episodes of “House” and “Grey’s Anatomy”, they found that in 57% of cases, the doctors committed blatant bioethical violations—not just performing procedures without consent, but also lying outright to their patients to achieve consent.3 Here, we see a certain paternalistic trend: in every story, every week, doctors do the job. They have the knowledge. They are the best. No matter how nasty they are, they have something the patients need, and they hold life in their hands. But they break the rules, they disregard others, and they endanger lives and flout ethics. Thus, these television doctors become absolute powers, unchecked even by the law. Being a doctor becomes a power symbol more than it is a profession.
But what does it matter? Certainly, many speculate about how “doctors on TV” might change medical practice (for better or worse).4 Popular dialogue has suggested a few trends: maybe television doctors raise patients’ expectations too high;5 maybe they give viewers too many ideas of bizarre diseases they’re unlikely to have;6 maybe they give real doctors a bad name.7 Many of these arguments lack anything that resembles evidence. Despite this, these ideas have become assumptions made by many viewers. Some researchers have found evidence that prime-time doctors make people view their physician as cold, cruel, and unethical.8 Others have found that the most television can do is make you think your doctors are ugly and immoral but otherwise acceptable.9 Quantitative data remains inconclusive at best, outright misleading at worst. Individual perceptions are notoriously difficult to quantify. However, as pop culture scholar J. R. McLeod writes, “Television has the power to manipulate and to certify, to selectively inform, and to selectively manipulate emotion. All of these effects
operate at a level of cultural immersion.”10 So how might medical dramas reflect a culture of care and the doctors who offer it? In the United States’ medical system, answers to this question are tentative but emerging. A 2008 study conducted at Johns Hopkins University found that while more than 50% of its students watched doctor dramas, fewer than half discussed the shows’ bioethical dilemmas.11 Considering that the heroes of medical dramas so often stray horribly from ethical practice, and that the same study reports minimal classroom engagement with bioethics, this may suggest a worrying return to paternalism among young doctors. After all, the power that doctors wield in medical dramas is not entirely fictional. Individual experiences with doctors reflect similar issues of power and representation. “Doctors get away with shots in the dark,” said Livvy Bedford, a Yale undergraduate. “When they try something that doesn’t work, there’s no accountability. At one point, I had a test for my stomach condition. I watched the pH test dip down to the number that qualifies for diagnosis. And when I walked into my doctor’s office, she said, ‘Yeah, you don’t have acid reflux.’ No explanation, no test—one sentence. And I wanted to ask: what happened? Why did I see those numbers?”12 The problem for her, Livvy said, is not necessarily that misdiagnoses like this can happen; it’s that “you can’t question what the doctors are doing, especially when you’re younger. I think we have this lone wolf image of doctors who act on their own, like in ‘House.’ That’s more harmful than anything.” Livvy has since been diagnosed with and treated for dyspepsia by a different doctor. Situations like Livvy’s are not uncommon. The National Center for Policy Analysis reports that an estimated 10-20% of diagnoses in the United States are incorrect.
Beyond that, 28% of surveyed misdiagnosis cases were lethal.13 In instances of misdiagnosis and malpractice, the law overwhelmingly favors doctors: only four out of ten patients win their legal battles.14 Perhaps the doctors are in the right; perhaps the law is biased. Often, doctors are simply able to afford better legal representation. It is well known that the United States’ health care system has flaws. However, the structure of that system, the way it valorizes doctors as free agents and grants them the power to act independently of one another15 and sometimes even of their patients, prevents its own reform.16 Here lies the juncture between media
image and reality. Misdiagnosis? Expected. Imperfect systems? Everywhere. But the relationship between patient and doctor, and the resistance of doctors and the health industry to revising a relationship that so empowers them: that is McLeod’s cultural immersion, an image of “health care” created and perpetuated in culture, then reflected outward through television. Television says: the doctor has the power, the patient has none. And society doesn’t tell anyone otherwise.
REFERENCES
[1] Strauman, E. C.; Goodier, B. C. J. Med. Humanit. 2011, 32, 31-46.
[2] Tapper, E. B. Bayl. Univ. Med. Cent. 2010, 23, 393-399.
[3] Czarny, M. J. et al. J. Med. Ethics 2010, 36, 203-206.
[4] O’Callaghan, T. The House effect. Time, Apr. 9, 2010. http://content.time.com/time/health/article/0,8599,1978591-1,00.html (accessed March 10, 2014).
[5] Schindlholzer, B. Patients watching many medical TV series. Diametrics, Dec. 11, 2008. http://www.diametrics.io/patients-watching-many-medical-tv-series-are-less-satisfied-with-patient-experiences-in-hospital.html#comments (accessed March 9, 2014).
[6] New House, M.D. is affecting patients’ expectations of medical care. KevinMD, Oct. 6, 2009. http://www.kevinmd.com/blog/2009/10/house-md-affecting-patients-expectations-medical-care.html (accessed March 10, 2014).
[7] Pfau, M. et al. JBEM 1995, 39, 441-458.
[8] Chory-Assad, R. M.; Tamborini, R. JBEM 2003, 47, 197-215.
[9] Stinson, M. E.; Heischmid, K. Health Mark. Q. 2012, 29, 66-81.
[10] McLeod, J. R. JPC 1991, 25, 69-75.
[11] Czarny, M. J. et al. AJOB 2008, 8, 1-8.
[12] Bedford, L. Personal interview.
[13] Physicians misdiagnose at an alarming rate. National Center for Policy Analysis, May 8, 2013. http://www.ncpa.org/sub/dpd/index.php?Article_ID=23148 (accessed March 10, 2014).
[14] Public use data. http://www.npdb.hrsa.gov/resources/publicData.jsp (accessed July 24, 2014).
[15] Porter, M. E.; Lee, T. H. Why health care is stuck. Harvard Business Review, Sept. 17, 2013. http://blogs.hbr.org/2013/09/why-health-care-is-stuck-and-how-to-fix-it/ (accessed March 9, 2014).
[16] Berwick, D. M. JAMA 2013, 310, 1921-1922.
The International Collegiate Science Journal is a revolutionary publication with a mission to translate science from obscure to captivating. We want to write exciting articles explaining why our brains forget the names of people we just met, why it rains diamonds on Saturn, how you can travel to the Caribbean in a second through virtual reality, how we may be able to live for 200 years in the near future, and other topics you will want to read about. Through an online and print magazine, we hope to create an environment where anyone can uncover science and discover all that it can do. Our team consists of undergraduate students representing nine successful, established undergraduate science journals at some of the world’s top universities. ICSJ currently has members from Harvard University, Princeton University, Stanford University, the University of Oxford, the University of Cambridge, Duke University, Rice University, UC Berkeley, and Washington University in St. Louis. We come from diverse majors and backgrounds, and we all share a common passion for making science accessible to the public. This is not just a journal. This is ICSJ.