SPRING 2017 | Berkeley Scientific Journal
STAFF

Editor-in-Chief: Alexander Powers
Managing Editor: Harshika Chowdhary
Features Editors: Rachel Lew, Aarohi Bhargava-Shah
Interviews Editors: Georgia Kirn, Yana Petri
Research Editor: Akshara Challa
Blog Editor: Neel Jain
Layout Editors: Allison Chan, Katherine Liu
Features Writers: Nini Liu, Mina Nakatani, Michelle Verghese, Fariha Rahman, Yizhen Zhang
Interview Team: Nikhil Chari, Kara Jia, Isabel Craig, Heliya Izadpanah, Elena Slobodyanyuk, Catrin Bailey, Tiffany Nguyen, Ismael Ostolaza
Research Team: Kore Lum, Gabby Shvartsman, Alaa Abdelmageed
EDITOR'S NOTE

The Berkeley Scientific Journal proudly presents its Spring issue: the Science of Power and Control. On this semester's cover, we present a stylized rendering of the CRISPR Cas9 gene editing tool. CRISPR embodies all aspects of power and control: control over the human genetic code, the need to control technology to ensure its ethical use, and lately conflicts over control of intellectual property. And as many of you know, the CRISPR Cas9 revolution owes much to UC Berkeley's very own Jennifer Doudna, Professor of Chemistry and Molecular Biology. Beyond just CRISPR, control systems in biology are some of the most complex and highly regulated processes on the planet, and humans are just beginning to possess the ability to mimic these in artificial systems. In this issue, we examine advances in bionic limbs and talk with new EECS Professor Anca Dragan about robot decision making processes. In the age of instant communication and connection, one would hope that scientific information would be more easily accessed and absorbed. However, the opposite seems to hold true. With many of us on the Berkeley campus still reeling from the 2016 election, it can be disheartening to see just how easy it is to spread misinformation and for our country's leaders to reject scientific theories and methods. Moving forward, scientists must not only fight with facts but with stories and pictures, with more ethos and pathos. It is clear that BSJ's mission is more critical than ever: to educate young scientists and engineers in written and graphical communication and lead those specializing in the humanities to apply their skills in elucidating scientific concepts. We are succeeding grandly in this mission. In the 2016 – 2017 school year, the BSJ had over 70 participating undergraduate students. We hope you enjoy this visually stunning issue!
Harshika Chowdhary, Managing Editor
Alexander Powers, Editor-in-Chief
TABLE OF CONTENTS

Features
9. The Future of Bionics (Mina Nakatani)
17. Psychological Mechanisms of Self-Control (Fariha Rahman)
21. Hijacking the Brain with Optogenetics (Yizhen Zhang)
23. Brain Initiative: Power in Networks (Nini Liu)
32. Drawing the Line in Genomics with CRISPR Technology (Michelle Verghese)

Interviews (by the Interviews Team)
4. Chemical Engineering Professor Markita Landry: Nanotechnology, Detection, and Plants
12. EECS Professor Anca Dragan: Human and Robot Interactions
27. Chemistry Professor Ronald Cohen: Monitoring Atmospheric Chemistry

Research
35. Quality and Value Monitoring of Acute-Care Hospitals (Grace Deng)
NEUROTRANSMITTER IMAGING AND PLANT NANOBIONICS
Interview with Professor Markita Landry
BY CATRIN BAILEY, ISABEL CRAIG, NIKHIL CHARI, YANA PETRI, ELENA SLOBODYANYUK
Dr. Markita Landry is an Assistant Professor of Chemical and Biomolecular Engineering at the University of California, Berkeley. Professor Landry's laboratory focuses on understanding and exploiting optical nanomaterials to access information about biological systems. In this interview, we discuss semiconducting single-walled carbon nanotubes (SWNTs) and their applications in the detection of dopamine in the brain and biological cargo delivery to plant systems.

Professor Markita Landry [Source: UC Berkeley College of Chemistry]

BSJ: How did you first get involved in the field of Chemical and Biomolecular Engineering?

ML: I trained in Physics for my undergraduate degree and Ph.D. The focus of my Ph.D. work was to study molecular interactions. To do so, our lab developed high spatial and temporal resolution instruments, which were well-suited for the systems that we were studying. When I graduated, I felt that these instruments were more broadly applicable and wanted to translate their use into nanotechnology. For my postdoc, I planned to come back to physics and then apply nanotechnology tools, but biophysics tools ended up being really useful for nanotechnology. That's how I was introduced to Chemical and Biomolecular Engineering: by building biophysics tools in engineering space. That's how I ended up here as well.

BSJ: What has made you so interested in optical nanomaterials and nano-sensor design?

ML: There is a lot of opportunity in developing nanosensors, especially for molecules that are otherwise very difficult to access information from. For example, when we diagnose something like cancer, we use quantitative methods: typically, a blood screen for biomarkers and then an assay that shows how many cytokines are in the blood. For behavioral disorders like psychosis and depression, we have only very qualitative methods. That's where my interests are: in the more challenging areas to develop sensors for. I'm trying to make diagnosis more quantitative by developing sensors for modulatory neurotransmitters, which govern behavior and disease.

Figure 1. Polymers with hydrophobic and hydrophilic segments are pinned to the surface of a SWNT. The polymer-SWNT conjugate is able to detect a molecular analyte such as dopamine by selectively enabling the analyte to access the SWNT.1
BSJ: Semiconducting single-walled carbon nanotubes (SWNTs) have been used in your laboratory for a variety of applications. Neurotransmitter detection1,2, recognition of riboflavin2, and sensing of nitric oxide3 are only a few examples. What challenges are associated with traditional methods of single-molecule detection?

ML: One of the main challenges is in the photostability of traditional probes. If we consider organic fluorophores, green fluorescent proteins (GFPs), or even quantum dots, the fluorescence of these materials can deteriorate over time. For single fluorophores, it can be as short as a few seconds. For quantum dots, we can get out to tens of minutes. SWNTs don't photobleach. If we are looking to study something like behavior, we want an experimental time window that's much more than a few seconds. What we aim to do is study behavior over the course of multiple days. The physics behind why SWNTs don't photobleach goes back to the unique way they produce the infrared (IR) fluorescence that we use for imaging. It's really unique to SWNTs, and that's why we chose them for these sensors.
“Non-photobleaching fluorescence [of SWNTs] can be modulated selectively by the presence of molecular analytes”
BSJ: What exactly are SWNTs, and which properties make them so suitable for selective recognition of a broad range of molecules?

ML: They are, conceptually, sheets of graphene that are rolled up. They are very high aspect-ratio nanomaterials, which means that they are about 1 nm wide and several hundred nanometers long. They are very non-biological in their structure and in their shape. That makes them easy to interface with biological systems because they are relatively small and can be inserted into the extracellular space of the brain or into the extracellular space of plant tissues fairly noninvasively. What makes them well-suited for biological imaging and molecular recognition is that the non-photobleaching fluorescence emission can be modulated selectively by the presence of molecular analytes. By performing some chemistry on the surface of the carbon nanotube, we can make it selective for molecular analytes that will change the fluorescence intensity only when that analyte is present.
BSJ: Why is detecting a fluorescent signal in the IR region particularly advantageous?

ML: Photons that are emitted in the visible wavelength range are scattered by biological tissues; it's the reason that we can't see through hands, skin, and bone. And when we try to do microscopy, especially high-resolution single molecule or single cell microscopy, any photons that are emitted by fluorophores or probes in the visible wavelengths will be scattered by tissues, blood, and bone. And on the opposite side of the spectrum, water starts absorbing photons past 1800 to 1900 nanometers. So between these two scattering and absorption regimes, we have this really nice dip, at around 1,000 nm, where photons can go through water without being
absorbed and through bones without being scattered. SWNTs emit in this nice wavelength range that we can use to minimize interference with biological samples so that we can insert these probes deep into tissues and perform imaging studies without, for example, having to open the skull.
BSJ: What guides your selection of nanoparticle-adsorbed organic phases for SWNT libraries?

ML: We started with a fundamental proof-of-principle assay. We wanted to see if we could replicate the mechanism by which proteins have evolved to recognize antibodies. A protein is just a chain of amino acids, and it's really not functional until it folds and adopts a nice globular 3D form that can then do biocatalysis or molecular recognition. Much in the same way, these polymers, in their 1D sequence or the way that they're synthesized, don't have any affinity for any analyte, but it's only once they fold onto the nanotube structure that they adopt a globular conformation to recognize an analyte. That was the design principle behind our assay. Proteins have had several billions of years to evolve this structure-function relationship, and we were hoping that we could at least somewhat replicate it synthetically.
Figure 2. a) Near-infrared photo indicating rapid penetration of ss(AT)15SWNTs through chloroplast lipid bilayer b) SWNT transport through chloroplast double membrane envelope via kinetic trapping by lipid exchange c) Chloroplast TEM after incubation in SWNT-NC suspension.
Initially, we just designed polymers that would partially adsorb to the tube and partially remain desorbed, where the adsorbed phase would be something that would tether the molecule to the tube, and the desorbed phase would be the molecular recognition phase. We made a library of these polymers with slight chemical variations and then started screening to show that we could achieve a good level of molecular recognition with just these synthetic polymers.

BSJ: How are the polymer-wrapped carbon nanotubes synthesized?

ML: The nanotubes have now become a popular starting material for many applications beyond biological sensing. Given their popularity, they can now be commercially procured. We typically purchase the tubes, purify them, and do post-processing to ensure consistency amongst batches. We also have an ongoing collaboration with Ron Zuckermann's lab at the Molecular Foundry at LBNL (Lawrence Berkeley National Laboratory). Zuckermann's lab has a robot that synthesizes polymer sequences.
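The screening workflow Landry describes above - wrap a library of polymer variants around SWNTs, expose each construct to a panel of analytes, and keep the construct whose fluorescence responds most selectively to the target - could be organized roughly as in the sketch below. The polymer names besides (GT)15, the response values, and the dF/F-style figure of merit are illustrative assumptions, not the lab's actual data or pipeline.

```python
# Toy illustration of screening polymer-SWNT constructs against a panel of analytes.
# All fluorescence-response numbers below are hypothetical.

# Measured (hypothetical) fractional fluorescence change, dF/F, per construct and analyte.
RESPONSES = {
    "(GT)15-SWNT": {"dopamine": 0.90, "serotonin": 0.30, "ascorbic_acid": 0.10},
    "(AT)15-SWNT": {"dopamine": 0.15, "serotonin": 0.20, "ascorbic_acid": 0.05},
    "(GC)15-SWNT": {"dopamine": 0.40, "serotonin": 0.45, "ascorbic_acid": 0.35},
}

def best_sensor_for(analyte, responses):
    """Rank constructs by response to the target minus the largest off-target response."""
    def selectivity(construct):
        r = responses[construct]
        off_target = max(v for a, v in r.items() if a != analyte)
        return r[analyte] - off_target
    return max(responses, key=selectivity)

print(best_sensor_for("dopamine", RESPONSES))  # -> "(GT)15-SWNT" in this made-up dataset
```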
BSJ: How is analyte recognition achieved on a molecular level? What role does corona phase molecular recognition [CoPhMoRe] play in this process?

ML: We would love to know the answer to that question. We're working towards it. Initially, we just started by building an almost random library of polymers and an almost random library of analytes to show that this could work. As we move towards different spaces - neuroscience, protein detection - we are getting a little bit smarter about our approaches. The way that we develop a corona phase for proteins is different than what we use for neurotransmitters. For neurotransmitters, now we know that the important part is to have polymers that will wrap in loops as opposed to helices. That's just one of the discoveries that we made by trial and error. For proteins, what's important is either using protein-like molecules such as peptoids that have loops for protein recognition, or using phospholipid coatings so that the corona phase resembles the membrane of a cell.

“We would eventually like to accomplish neurotransmitter detection in awake and behaving animals”

BSJ: In one of your articles, SWNTs were used for neurotransmitter detection. Why did you
specifically focus on dopamine for further sensor optimization?
ML: It was a bit of luck. That was the best sensor we found within our screen. But we were
lucky because dopamine is one of the three primary modulatory neurotransmitters that govern behavior. Dopamine is a molecular target that’s been used by the pharmaceutical industry for over sixty years to treat depression, psychosis, and ADHD.
BSJ: What polymers were the most effective sensors for dopamine?

ML: For dopamine, nucleic acid polymers worked very well. We tried many different sequences,
and, counterintuitively, as soon as we started changing the bases within a sequence we got very different response profiles to dopamine and other molecules. One of the key findings we made recently is that the original (GT)15 DNA polymer on the tube creates about a 90% dopamine response. If we cut that roughly in half and make a (GT)6, then instead of making helices the polymer makes rings. That does
funny things to the nanotube excitons, which provide light output and increase signal by over an order of magnitude. So these (GT)6 rings end up being probably what we’ll be pursuing now for in vivo studies of dopamine.
BSJ: Another exciting area of research in your laboratory is plant nanobionics.3 What has motivated you to attempt to engineer plant function with SWNTs?

ML: The plant nanobionics area of the group, which also looks at delivering biological cargo
into plants, was motivated by some frustrations we had in the neuroscience space. We were having issues with sensors going inside cells, which is not where we wanted to measure dopamine. But we found that there's a lot of very easy internalization of these nanotubes through biological membranes. Although we can now fix this penetration issue with chemistry, we wanted to exploit this phenomenon known as "barrier crossing" to deliver useful biological cargo to systems. And one of the more difficult systems to deliver biological cargo to is plants. In addition to a cell membrane, they also have a cell wall, which evolved to be very stiff to provide the turgor pressure that the plants need to stay upright. We're motivated by the introduction of foreign genes, for example, into mature plants. We can develop nanomaterials in which a gene vector for a certain transgene is introduced passively into the plant. Then we observe that a test vector that codes for GFP expression, for example, will lead cells to produce the protein at the injection site. So that's a proof of principle that, not only is the gene vector getting into the plant, but that protein expression is also happening after the delivery.
BSJ: How do SWNTs have the ability to modify the photosynthetic activity of plants?

ML: We don't know that yet. SWNTs have these unique photonic properties in the way excitons travel through them, so that they can absorb light not just in the visible range but also in the infrared range. Photosynthetic pigments can only absorb visible light. So what we're thinking is that there might be some ability of the carbon nanotube to absorb photons of light within a very broad range. The sun emits in the IR as well, and that energy is somehow transferred to the pigment of the plant that can then increase photosynthetic efficiency.
BSJ: Do you think that SWNT-mediated photosynthesis could serve as an ex vivo source of renewable energy?

ML: We are currently working on recomposing the plant from its species. One of the things
we are doing is extracting chloroplasts (the main photosynthetic element of the plant) and looking at their interactions with the carbon nanotube. One challenge there is that the chloroplast is a plastid, not an independently living organism. Keeping it viable when extracted is a bit of a challenge. We are exploring a few synthetic chemistry approaches that mimic the native environment of the chloroplast in a tissue culture.

BSJ: What are the future directions of your research, and how will the Zuckerberg award allow you to expand into more high-risk directions?

ML: In addition to neurotransmitter imaging, we would eventually like to accomplish neurotransmitter detection in awake and behaving animals.
We would like to start probing how different social environments affect neurotransmission in the brain. We would also like to start validating some clinical therapies. If we dose a mouse with an antidepressant and employ our technology, that can be a quantitative measure of how dopamine is actually changing in the brain. For plants, we would also like to move forward with mature plant transformation. Currently, if you want a transgenic plant, you need to start with a seedling, wait 4-5 weeks until it grows, and see whether the resistance element that was introduced is actually working. A method for direct modification of just a subset of a plant tissue would allow us more spatial control over what parts of the plant are transgenic. That can be very interesting if, for example, you wanted to grow a non-GMO fruit, but still wanted to confer disease resistance to the roots and the leaves, thus creating locally transformed plant tissue. These are the types of projects that we are mainly pursuing under the Zuckerberg initiative. The ability to change directions if we find something more exciting than originally expected is part of what makes the Zuckerberg award so powerful.
REFERENCES:
1. Beyene, A. G., Demirer, G. S., & Landry, M. P. Curr. Protoc. Chem. Biol. 2016, 8, 197-223.
2. Zhang, J., et al. Nat. Nanotechnol. 2013, 8(12), 959-68.
3. Giraldo, J. P., et al. Nat. Mater. 2014, 13(4), 400-8.
THE FUTURE OF BIONICS BY MINA NAKATANI
NEW OPPORTUNITIES TO REGAIN CONTROL FOR INDIVIDUALS WITH UPPER LIMB LOSS
Back in 2005, over half a million Americans were living with some form of upper limb loss. For reference, that is just under the population of Albuquerque, New Mexico, as of the 2010 census. By the year 2050, just over three decades from now, that number is predicted to double - over one million Americans will be suffering from some level of amputation to their upper limbs. In other words, that would be approximately the population of Dallas, Texas in 2010, all handicapped by the loss of some part of their arms.2 Of course, the use of prosthetics is hardly a new idea, familiar to almost anyone. However, current science strives to improve upon bionic technology (essentially roboticized prosthetics) in particular, advancing the systems made to control a bionic as well as the flexibility of the bionic itself in order to most accurately mimic the natural movement of a human limb. That challenge becomes particularly daunting when it comes to the human hand. It is easy to forget or overlook the dexterity and ability of the hand; the precision it is capable of in everyday tasks - even in typing this article - is really pretty remarkable. With twenty-one degrees of freedom - a measure of the number of joints and the direction in which those joints can move, namely forward and backward, or side to side - in the hand alone, as
well as six more in the wrist and a versatile thumb, the dexterity comes as no surprise. Unfortunately, that same dexterity is what poses such a problem to researching and developing bionic hands. For people in need of an upper limb prosthetic, whether due to traumatic injury (such as nerve damage and amputation) or a congenital condition, the problems with current devices frequently outweigh the benefits. In fact, approximately one third of all prosthetic users will choose to use a passive prosthetic - a cosmetic hand without technological function - instead of a bionic, or active, prosthetic. Even then, among those who do choose an active prosthetic, another 88% will eventually cease to use it, in short, because such devices are "too tiring and difficult to use."5 Given the current systems utilized in most bionic technologies, many people simply find the controls to be too difficult, the mental strain not worth the possible functionality restored, the execution of the bionic still leaving something to be desired. When faced with the desire for technology that can restore natural, reliable control over a number of degrees of freedom, a long training time coupled with inconsistent and unintuitive function for control over one - or, at best, two - degrees of freedom does seem like legitimate grounds to decide against its use entirely. In general, the complaints regarding
the performance of such systems are not unfounded. Currently, most systems for bionics use surface electromyography (sEMG) in order to control the hand, essentially taking electric signals from muscle groups depending on whether they are active or at rest. In other words, they look to see whether muscle groups are "on" or "off," like the signals in a computer or a light switch. Combinations of those signals can map to certain controls over the hand, forming pre-programmed hand positions, or specifically controlling one degree of freedom in one finger, a process generally known as pattern recognition. In theory, the process should be simple and reliable, memorizing patterns for straightforward outputs.
“IT IS EASY TO FORGET OR OVERLOOK THE DEXTERITY AND ABILITY OF THE HAND; THE PRECISION IT IS CAPABLE OF IN EVERYDAY TASKS ...IS REALLY PRETTY REMARKABLE.”
Even better, it should work with a proposed 97% accuracy - accuracy being the "ratio of correct predictions over the total number of classified instances"5 - over individual degrees of freedom. By mapping specific sEMG readings to predetermined hand positions, a process called Binary Control Schemes, that accuracy is even greater.
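As a rough illustration of the pattern-recognition step just described, the sketch below maps a vector of sEMG channel activations to a pre-programmed hand position by comparing it against stored templates. The channel count, template values, and position names are hypothetical; real controllers use trained classifiers on much richer signal features.

```python
# Toy sketch of sEMG pattern recognition (hypothetical values, not a real controller).
# Each stored "pattern" is a template of per-channel muscle activation (0 = rest, 1 = active);
# an incoming reading is assigned to the pre-programmed hand position whose template it matches best.

import numpy as np

# Hypothetical templates: 4 sEMG channels -> pre-programmed hand positions
TEMPLATES = {
    "rest":        np.array([0, 0, 0, 0]),
    "open_hand":   np.array([1, 0, 1, 0]),
    "power_grasp": np.array([1, 1, 0, 1]),
    "pinch":       np.array([0, 1, 1, 0]),
}

def classify(semg_reading):
    """Return the hand position whose activation template is closest to the reading."""
    distances = {name: np.linalg.norm(semg_reading - t) for name, t in TEMPLATES.items()}
    return min(distances, key=distances.get)

# Example: a noisy reading that should map to "power_grasp"
reading = np.array([0.9, 0.8, 0.1, 0.7])
print(classify(reading))  # -> "power_grasp"
```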
The reality is not quite as simple, however. Limitations are inherent in the system, restricting the types of movement allowed. For one, given that each movement needs to be pre-programmed into the system, only those movements may be performed, a problem particularly relevant to Binary Control Schemes. Any new movements would need to be programmed into the system anew - not exactly an easy process. As for controlling individual degrees of freedom via Pattern Recognition, while there is more independence of movement, ease of coordination becomes a major problem. The complex movement required of a human hand takes more control than can be feasibly performed by manually controlling each degree of freedom on its own. The mental strain required for an accuracy rate closer to 82.8% is just not reasonable.5

As such, other methods have been developed to improve on the sensors used to control the bionic hand, making them more adaptable to different situations rather than simply existing in either an "on" or "off" state. One of those sensors proposed to increase the ability of bionic hands is called an IMU, or Inertial Measurement Unit, made to measure more than simply whether a muscle group is activated, also taking data on the rotational movement and acceleration of the limb itself. With such a system, more combinations can be made, equating to more possible hand positions which the bionic may assume; thus, the need to pick and choose a smaller set of possible functions would be eliminated. Essentially, more flexibility of use is possible without necessarily adding an exorbitant number of sensors, keeping the design simple. Beyond the benefits of the design itself and its enhanced capability, bionics with a combination of IMU and sEMG sensors outperformed their counterparts with only sEMG sensors.5 Specifically, a combination of 12 sEMG and IMU sensors in total yielded the best results overall, participants having a 12% higher completion rate on a given set of tasks - grasping differently shaped and sized objects - compared to a model with an equal number of solely sEMG sensors. Moreover, the considerable increase in accuracy was achieved with only a marginal increase in the time taken to complete those tasks. In fact, even in a test with only four to six sensors combining IMU and sEMG, participants still managed to outperform the test with only sEMG sensors and still maintain a comparable time.5
Furthermore, IMU sensors are not the only advancement in bionic hand control; visual servoing is another fairly new concept introduced alongside the standard set of sEMG signals. Though the system is not yet optimized for commercial use - it has been tested with a simplified hand with only three degrees of freedom - it adds to the adaptability of the bionic hand by utilizing a camera. Rather than relying on pre-programmed, generic grasp shapes which do not take into account the different sizes and shapes of objects, the system takes in visual data to give a proportional amount of control to each degree of freedom. sEMG sensors are still used, but simply for the most generic idea of the task being performed - whether the hand should be at rest, extended, or grasping. In the case of grabbing an object, for example, the sEMG sensors would read that general command; then the cameras would take a picture of the object, marked with colored dots which can be analyzed in terms of differences in color, giving a computer the shape of the object in 3D space. As such, the hand is allowed to adjust each finger to the appropriate angle, and the thumb to the optimal angular position. Overall, recent tests have confirmed that, in theory, the system allows the hand to adapt to any proposed situation, much like the human hand.3

The bebionic hand, the most advanced prosthetic hand commercially available.

Beyond that, sEMG sensors themselves can be used in more advanced and adaptable ways. Postural Control methods take those same sEMG sensors and Pattern Recognition methods, but rather than simply treating each signal as "on" or "off," they measure the strength of those signals, transforming those strengths into proportional angles at which the fingers should be moved, allowing a simple grasping motion to be adapted to an object of any size.
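The proportional mapping behind Postural Control can be pictured with the sketch below: a normalized sEMG amplitude, rather than a binary on/off state, is scaled into finger joint angles, so a stronger contraction closes the hand further. The calibration bounds and angle ranges are invented for illustration and are not taken from the cited studies.

```python
# Toy sketch of proportional ("Postural Control"-style) mapping from sEMG amplitude to
# finger joint angles. All calibration values are hypothetical.

FINGER_RANGES_DEG = {          # (open, fully flexed) angles per finger
    "thumb":  (0.0, 60.0),
    "index":  (0.0, 90.0),
    "middle": (0.0, 90.0),
    "ring":   (0.0, 90.0),
    "little": (0.0, 90.0),
}

def normalize(amplitude, rest_level=0.05, max_level=1.0):
    """Scale a raw sEMG amplitude into [0, 1] between resting and maximal contraction."""
    x = (amplitude - rest_level) / (max_level - rest_level)
    return min(max(x, 0.0), 1.0)

def finger_angles(semg_amplitude):
    """Map one grasp-channel amplitude to proportional flexion angles for every finger."""
    strength = normalize(semg_amplitude)
    return {finger: open_a + strength * (flexed_a - open_a)
            for finger, (open_a, flexed_a) in FINGER_RANGES_DEG.items()}

print(finger_angles(0.3))  # a light contraction closes the hand only partway
```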
“Tests...yielded promising results, restoring 55% of normal hand function to participants with traumatic or congenital limb loss.”

Postural Control allows a bionic hand to perform more complex motions, such as picking up a coin.

Tests conducted with a modified Bebionic hand, a commercially available bionic hand, yielded promising results, restoring 55% of normal hand function to participants with traumatic or congenital limb loss.7 That relatively high number is largely due to the fact that Postural Control systems actually allow users to perform movements that their traditional Pattern Recognition counterparts may not, such as picking up a coin or turning a door handle - motions which require the hand to remain in position while the limb itself is moved through space.7 On the whole, bionic hands are making considerable strides in terms of regaining physical control over the environment for individuals with limb loss, compared to the options which had existed in the past. Even for individuals who have suffered major, traumatic limb loss, extensive tissue damage, and botched biological reconstruction of those limbs, the adoption of a bionic hand has still managed to greatly improve quality of life, restoring function and even reducing pain. Better yet, those possibilities continue to improve. New systems and sensors are being created and tested to further increase the utility of bionic hands and simplify their usage, coming closer to fully mimicking the ability of the real human hand.

REFERENCES
1. Aszmann, O. C., Vujaklija, I., Roche, A. D., Salminger, S., Herceg, M., Sturma, A., . . . Farina, D. (2016). Scientific Reports, 6, 34960. doi:10.1038/srep34960
2. Cordella, F., Ciancio, A. L., Sacchetti, R., Davalli, A., Cutti, A. G., Guglielmelli, E., & Zollo, L. (2016). Frontiers in Neuroscience, 10. doi:10.3389/fnins.2016.00209
3. Hu, Y., Lin, G., Yang, C., Li, Z., & Su, C. (2015). Manipulation and grasping control for a hand-eye robot system using sensory-motor fusion. 2015 IEEE International Conference on Robotics and Biomimetics (ROBIO). doi:10.1109/robio.2015.7418792
4. Khushaba, R. N., Kodagoda, S., Takruri, M., & Dissanayake, G. (2012). Toward improved control of prosthetic fingers using surface electromyogram (EMG) signals. Expert Systems with Applications, 39(12), 10731-10738. doi:10.1016/j.eswa.2012.02.192
5. Kyranou, I., Krasoulis, A., Erden, M. S., Nazarpour, K., & Vijayakumar, S. (2016). 2016 6th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob). doi:10.1109/biorob.2016.7523681
6. Maat, B., Smit, G., Plettenburg, D., & Breedveld, P. (2017). Passive prosthetic hands and tools: A literature review. Prosthetics and Orthotics International, 030936461769162. doi:10.1177/0309364617691622
7. Segil, J. L., Huddle, S., & Weir, R. F. (2016). IEEE Transactions on Neural Systems and Rehabilitation Engineering, 1-1. doi:10.1109/tnsre.2016.2586846
PHOTO CREDITS
1. http://www.likecool.com/Terminatorstyle_cyborg_arm_based_on_Deus_Ex_video--Tech--Gear.html
2. http://www.oandp.com/articles/NEWS_2015-04-01_02.asp
3. http://www.cnn.com/2014/05/12/tech/innovation/deka-bionic-arm-kamen/
DYNAMICAL SYSTEMS: ASSISTIVE ROBOTS AND AUTONOMOUS CARS
Interview with Professor Anca Dragan
BY ISABEL CRAIG, NIKHIL CHARI, YANA PETRI, ELENA SLOBODYANYUK
Dr. Anca Dragan is an Assistant Professor in the Department of Electrical Engineering and Computer Science at the University of California, Berkeley. Her lab focuses on developing human-robot interaction algorithms, which not only account for robot function, but also for robot interaction and collaboration with end-users. In this interview, we discuss human-robot collaboration in the context of autonomous cars and other dynamical systems.

Professor Anca Dragan [Source: https://people.eecs.berkeley.edu]
BSJ: How did you get involved in the field of Electrical Engineering and Computer Science?

AD: In middle school, I started really liking math and thought about its applications. I loved proving things, but also wanted to see the tangible effects of math on the world. Computer Science combined all of these things for me. Initially, I thought I would do Programming or Software Engineering, but then in high school I came across a book on Artificial Intelligence by Stuart Russell. It was very interesting because you could follow
certain solvable problems and immediately see follow-ups. The notion of creating new algorithms that solve new problems instead of implementing existing algorithms seemed very exciting. I thought I should do research in that field, and I loved the idea of agents that make intelligent decisions on their own, and that's how I got into AI. And then, at the time, I didn't even imagine you could work on AI in industry: it's about problems we have not yet figured out the answer to – that's research! So, here I am.

BSJ: How did you get involved specifically in Robotics?
AD: I knew that I liked Artificial Intelligence, and Robotics is just the physical manifestation of Artificial Intelligence. With robots, the outcome of your algorithms is right there in front of you, moving! That's what makes them so cool!

Figure 1. a) Example of a collaborative table setting scenario, in which the robot needs to adapt to what people plan to do. b) Different timings of motion convey different information about the robot, including the perceived weight of the object being manipulated and how confident the robot is in what it's doing.
BSJ: What can a robot infer from a human's ongoing actions, and how?

AD: Many things. At first, we started looking at how the robot can figure out what a person is reaching for. But since then, we've taken a step back and found that there are many more things that we infer by watching another person. Imagine watching someone perform a normal day-to-day task, like cooking. You can figure out if they're excited about it or bored. You can figure out if they're angry about something, because they probably set things down in a much more decisive way. You can figure out if they're an expert or if they're a little hesitant. There's a lot of information (what we call internal state) that is communicated implicitly via actions. How do we infer internal state? Typically, by methods that fall under the category of Bayesian inference. The idea is that there's an underlying state that you can't observe, but you can observe the actions. You can treat those actions as evidence about that underlying state - all you need is what we call the observation model. If this were the correct version of the internal state - if the person were confident - how would they act? That's much easier than the other way around: if I see an action, what's the probability that the person is confident? Luckily, Bayes' rule gives us the way to go from one model to the other. It's this neat little trick that we've been using in Robotics for a long time now, but with the right observation model it becomes applicable not just to localizing a robot in a map, but also to "localizing" a person's internal state.

“How do we infer internal state? Typically, by methods that fall under the category of Bayesian inference”
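Dragan's description of treating actions as evidence about a hidden internal state can be written out directly with Bayes' rule. The sketch below assumes a toy observation model P(action | state) over two hypothetical states ("confident" and "hesitant"); all of the probabilities are made up purely for illustration.

```python
# Toy Bayesian inference of a hidden internal state from observed actions.
# The observation model P(action | state) and all probabilities are hypothetical.

OBSERVATION_MODEL = {
    "confident": {"fast_reach": 0.7, "slow_reach": 0.2, "pause": 0.1},
    "hesitant":  {"fast_reach": 0.1, "slow_reach": 0.4, "pause": 0.5},
}

def update_belief(prior, action):
    """One Bayes update: P(state | action) is proportional to P(action | state) * P(state)."""
    unnormalized = {s: OBSERVATION_MODEL[s][action] * p for s, p in prior.items()}
    total = sum(unnormalized.values())
    return {s: v / total for s, v in unnormalized.items()}

belief = {"confident": 0.5, "hesitant": 0.5}   # uniform prior over the internal state
for observed_action in ["pause", "slow_reach", "pause"]:
    belief = update_belief(belief, observed_action)
print(belief)  # the belief shifts toward "hesitant" after repeated pauses
```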
BSJ: How do you take advantage of timing to make robot motion more natural?

AD: Well, not only can a robot observe a person and try to make inferences about the person's internal
state, but people do the same thing when they observe robots. We perceive robots as agents. Timing plays a big role in the inferences that people make. You can have the same geometric motions (the same path), but if you time it differently - for instance, if you go fast and steady versus pausing repeatedly - people will interpret that in different ways. The same geometric path with different timings leads to different information that you would read into the motion. With my students Allan and Dylan, we've been exploring how to make robots not necessarily more natural, but rather more expressive. How do different timings communicate different information about the robot? It turns out that different timings communicate a robot's capability and even the mass of the object that the robot is carrying. If I just carry an object slow and steady, you think that it might be light, but if I all of a sudden slow down and then speed back up, you might think that the object is heavier than it looks.
"confidence" in a robot Another area of your research is self-driving BSJandHowhowdo doyouyoudefine assign a quantifiable value to BSJcars, or autonomous cars. Why is important :
what is typically considered an emotional state?
the hard part. This is why goals (what ADtheThat’sperson is reaching for) are easy – it’s :
trivial to write down an equation for what a goal is, and it’s relative easy to learn how people reach for different goals. Confidence is more tricky, because what does it mean mathematically in the first place? What we’ve done so far is a very simple model. “Confidence” for a robot is just its estimation of the probability of success. But another way to think about it is as a measure of uncertainty. The robot might start moving, and maybe it’s uncertain about its location or the location of the object that it’s trying to manipulate, so it keeps on gathering observations. Confidence is about the initial precision or uncertainty. Timing is involved in this because to gain the necessary precision at the end, when you’re at the goal, the robot needs to slow down to gather more observations if it starts with low precision; whereas if it already has high precision, it can just go for it. That’s a way to take something fuzzy like this notion of confidence and try to break it down to some mathematical, tangible model. But it’s just a start.
:
to monitor the interactions between human drivers and autonomous cars as a dynamical system?
Autonomous cars work by trying to reach ADtheir destination and trying to avoid colli:
sions. Implicitly, other human drivers on the road are simply obstacles that need to be avoided. So the car tries to predict what they might do and get out of the way so that they don’t collide. As a result, these robots tend to be very defensive. For example, they would never merge in traffic if there’s not a big enough gap. Or at an intersection they might get stuck, because people keep on coming and the autonomous car never gets to go. These cars are physically safe, but they’re not necessarily very effective. Thus, to make them a little more effective, we’ve integrated a model of human response to the robot’s actions. This is a dynamical system that incorporates human state as part of its state definition, and for which the dynamics model for how state changes as a function of the robot’s actions now incorporates that the person will respond to the robot, which will change their state, which changes the overall system state. Not that it wasn’t a dynamical system before: it just was a simpler one, where we were assuming the human state would evolve unaffected by what the robot does. Now, we’re trying to add in that coupling, to say, “Well, wait a minute, a person’s actions do depend on the robot’s decisions.” We use inverse reinforcement learning to create a model for how humans drive in response to the robot, and then we use that in our planners to come up with something for the robot to do. Dorsa, the graduate student doing this work, found some interesting behavior being produced by planning in this system. For instance, autonomous cars know that they can sometimes merge in front of someone because the person can slow down to let them in. Or, they decide to inch forward at an intersection to probe the person and figure out whether they’re going to let them go. My favorite part is that if the person lets the robot through, it just goes, but if Figure 2. When asked to compare their own style (without knowing it’s theirs), a more defensive style, and a more aggressive style, users typically prefer a more defensive style than their own.
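One way to picture the coupled system described above: instead of treating the human driver as an independently moving obstacle, the robot's planner simulates how the human would respond to each candidate robot action and scores the joint outcome. The sketch below is a heavily simplified, one-step version of that idea on a toy merging scenario; the reward functions and dynamics are invented for illustration and are not the lab's learned (inverse reinforcement learning) models.

```python
# Toy one-step planner that accounts for human response (hypothetical dynamics and rewards).
# State: longitudinal speeds of the robot car and a human driver behind it in the target lane.

ROBOT_ACTIONS = [-1.0, 0.0, 1.0]   # brake, keep speed, accelerate (toy units)
HUMAN_ACTIONS = [-1.0, 0.0, 1.0]

def human_reward(human_speed, robot_speed, human_action):
    """Hypothetical human objective: keep a desired speed, but slow down if closing on the robot."""
    next_speed = human_speed + human_action
    return -(next_speed - 10.0) ** 2 - 5.0 * max(0.0, next_speed - robot_speed)

def human_response(human_speed, robot_speed):
    """Assume the human picks the action that maximizes their own (toy) reward."""
    return max(HUMAN_ACTIONS, key=lambda a: human_reward(human_speed, robot_speed, a))

def robot_reward(robot_speed, human_action):
    """Hypothetical robot objective: make progress, penalize forcing the human to brake hard."""
    return robot_speed - 2.0 * abs(min(0.0, human_action))

def plan(robot_speed, human_speed):
    """Pick the robot action whose anticipated human response yields the best outcome."""
    def value(robot_action):
        next_robot = robot_speed + robot_action
        reaction = human_response(human_speed, next_robot)
        return robot_reward(next_robot, reaction)
    return max(ROBOT_ACTIONS, key=value)

print(plan(robot_speed=8.0, human_speed=10.0))  # with these toy numbers the robot accelerates, expecting the human to yield
```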
“If the person lets the autonomous car through, it goes, but if the person doesn't let it through, then the robot gently backs up.”

BSJ: What kind of environment were you testing your robots in?

AD: So far, for a lot of the collaborative communication type of work we use our JACO robot, which is actually the first seven-degree-of-freedom robot that the Kinova company made. We use our JACO robot for a lot of the physical, collaborative interaction tasks. But for driving we've been using a simulator. We put the person in front of a steering wheel, and they have pedals, but they're not driving a car in the real world (that would be a little dangerous at this point); they're looking at a monitor where they see their car move, and they're reacting to the environment that way. We simulate both highway as well as city intersection scenarios.
Figure 2. When asked to compare their own style (without knowing it's theirs), a more defensive style, and a more aggressive style, users typically prefer a more defensive style than their own.

BSJ: Based on the results of your research, what type of driving style do people prefer? Is it similar to their own driving style, or is it different?

AD: We started with the hypothesis that people would want an autonomous car to drive in the same style as them. So we thought that aggressive drivers would want a more aggressive car, defensive drivers would want a defensive car, and so on. It turns out that's not the case. Chandrayee, who ran this study, found that people tend to prefer a driving style that's more defensive than their own. But, interestingly enough, they think that they prefer the car to drive like them. So if you ask them to choose, they will choose not their own driving style but something more defensive, yet they think they are choosing their own style. They thus have a misperception of how they drive.
BSJ: In the future when you buy a car, you'll be able to select among different options?

AD: Even better, an option will be made tailored just to you! It's more like, in the future when you buy a car, you're at the dealership, and you sit down in a nice simulator, and there's this virtual agent that says, "Hey, if I'm your car, and this is the environment, do you want me to do this," and it plays out a trajectory, "or do you want me to do this other thing?" Maybe one of them is more aggressive and one of them is more defensive. So you say, "I want this one," and then you repeat this a few times, and the algorithm that we're looking at is one that will actually enable this agent to converge on the right style as quickly as possible. This is an active learning type of approach, where you search for the most informative queries and comparisons that you can ask the person. And after a few such queries, hopefully the car will have converged to the driving style that you want.
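The dealership scenario is essentially active preference learning: keep a belief over candidate driving styles, repeatedly show the comparison whose answer is currently least predictable, and update the belief on each choice. The sketch below uses a hypothetical one-dimensional "aggressiveness" parameter and a simple logistic choice model; it illustrates the general idea, not the group's actual algorithm.

```python
# Toy active preference learning over a 1-D "aggressiveness" style parameter.
# Candidate styles, the choice model, and the query rule are all simplified for illustration.

import math
import itertools

STYLES = [0.0, 0.25, 0.5, 0.75, 1.0]             # candidate aggressiveness levels
belief = {s: 1.0 / len(STYLES) for s in STYLES}  # uniform prior over the user's preferred style

def prob_prefers_a(style, a, b, temperature=0.2):
    """Logistic choice model: styles closer to option a make choosing a more likely."""
    score = (abs(style - b) - abs(style - a)) / temperature
    return 1.0 / (1.0 + math.exp(-score))

def most_informative_query(belief):
    """Pick the pair of options whose answer is most uncertain under the current belief."""
    def uncertainty(pair):
        a, b = pair
        p = sum(belief[s] * prob_prefers_a(s, a, b) for s in STYLES)
        return -abs(p - 0.5)                     # closest to 50/50 = most informative
    return max(itertools.combinations(STYLES, 2), key=uncertainty)

def update(belief, a, b, chose_a):
    """Bayes update of the belief given the user's answer to one comparison."""
    posterior = {s: p * (prob_prefers_a(s, a, b) if chose_a else 1 - prob_prefers_a(s, a, b))
                 for s, p in belief.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# Simulate a user whose true (hidden) preference is fairly defensive (0.25).
for _ in range(4):
    a, b = most_informative_query(belief)
    chose_a = abs(0.25 - a) < abs(0.25 - b)      # the simulated user picks the closer option
    belief = update(belief, a, b, chose_a)
print(max(belief, key=belief.get))               # the belief concentrates near the hidden preference, 0.25
```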
BSJ: Are such dynamical systems specific to driving?

AD: No, they're not - that's a very good question. We've applied this to driving, but, if you think about it, robot actions influence human actions in all sorts of tasks. In fact, we've looked at a problem that's a collaboration. So it's no longer "I drive and I have my own objective, you drive and you have your own objective." It's an actual collaboration where a human and a robot work together to do a task, but the person is actually responding to what the robot is doing. And in particular, if the person is not perfect at optimizing and responding to the robot, meaning they can't think many steps ahead, then a smart robot can guide the person to a better overall plan and compensate for the person's myopia. To make this concrete, we were looking at a handover example where a robot gives the person an object, and the way the robot decides to hold an object influences how the person grabs the object. If I give you a bottle upright, you can maybe grab it from the top, but if I tilt it you have a whole different set of choices for grasping it. So what's interesting is that people grab the bottle in the most natural way. However, if they have to do something with it, like put it in a cupboard, they're not very good at thinking ahead and grabbing it so that they can quickly put it upside down. So they end up having to re-grasp the object or twist their arms. But if the robot actually accounts for the fact that the person is a bit myopic and doesn't think many steps ahead, then the robot essentially influences the person's choices. For example, a robot can give a person a bottle in a way that would be most convenient for them to grasp in a
natural way and then put it in a cupboard. I think it's a beautiful example of robots using the influence they have on people's actions to gently guide the person to perform better at this collaborative task.

BSJ: What has inspired you to become a co-PI for the Center for Human Compatible Artificial Intelligence (AI)?

AD: The Center for Human Compatible AI is trying to develop an AI that is not just functional,
but that can be beneficial to and compatible with humans. What got me most interested in the problem of human compatibility is that it’s difficult to specify objective functions for robots. We specify some objective function, but inevitably, we don’t think about corner cases or new situations. It’s very common for AI systems, as they become better at optimizing the objectives that we give them, to end up not doing what we actually want and produce unintended consequences. In the center, we’re working on the value alignment problem: how a robot or AI agent would be able to, over time, arrive at the correct objective function. One of the key ideas here is for the agent to not look at the objective function that’s been given initially as set in stone but to be able to cooperate with the person and figure out the true objective function. This is cooperative inverse reinforcement learning: a collaboration with Stuart, Pieter, and our student Dylan.
BSJ: Would you say that your research has a lot of intersections with psychological research?

AD: Yes, in that our mission is to formally incorporate human state and human action into robotics, and account for the fact that human state is different and much more interesting than physical state. Figuring out a model for human state, including human decisions and beliefs, and how these change based on the action of the robot - that's where psychology comes in. In particular, a branch of psychology called computational cognitive science is all about developing computational models for how people reason and make decisions. AI agents can then use these mental models for how a person works so that they know how to interact with them.
BSJ: What are the future applications of your work?

AD: I think the applications are fairly broad. We recently focused on autonomous driving.
That’s probably going to continue because driving is a present and exciting problem. But we’ve also focused on manipulation: on interacting with robot arms and
more humanoid-like robots, which are harder problems because those robots don’t work well right now. We do research on things that are applicable across multiple situations because we’re trying to study the fundamental theory and algorithms that enable robots to correctly reason about people and their actions. This applies to anything from manufacturing, where you have robots moving out of the cages and working side by side with human workers, to robot-wheelchairs that can help people with disabilities live more independently, or to robots in the home helping you clean up the dining room table. There too, we want interaction, because we don’t want all the people on the couch watching TV and the robots in the kitchen doing everything. Even in that domain, interaction is important.
REFERENCES:
1. Basu, C., et al. Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction, 2017.
2. Liu, C., et al. Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 2016.
3. Sadigh, D., et al. Proceedings of the Robotics: Science and Systems Conference (RSS), 2016.
PSYCHOLOGICAL MECHANISMS OF SELF-CONTROL BY FARIHA RAHMAN
THE ROLE OF EXECUTIVE FUNCTION IN NEUROPSYCHOLOGICAL HEALTH AND DISEASE
Consider the constant cognitive work you, and most of the people you know, undertake every day: scheduling every hour, changing behavioral modes and goals between various involvements, and maneuvering through many different social worlds. All of these tasks require a few key skills. One must be able to redirect one's focus, track useful information, and avoid distractions. In other words, much of our everyday functioning, as well as the occasional extraordinary success, depends on our capacity to correctly analyze new situations and respond in a controlled manner. These abilities are encompassed by the broad term "executive function" (EF), which refers to working memory, planning, and cognitive flexibility.1 Although these are rather complex cognitive skills, they are fundamental to living with purpose, and their dysfunction is nothing short of disastrous for the average cognitively healthy adult.
Consequently, executive function is often tracked in cases of neuropsychological disorders. Common tests of executive function challenge the subject to change focus, adjust habitual physical motor responses, correctly respond to conflicting experiences or directions (interference tasks), and focus on long term goals. One interference task, which takes several forms, is called “set-shifting." Set-shifting requires the subject to switch a goal, and the corresponding set of behavioral responses, in the middle of a task. As this goes against the subject’s previous habits, successful completion of the task demands flexible thinking. More specifically, set-shifting tasks force the subject to choose one correct response when presented with cues that would elicit different responses if they appeared separately, thus producing an interference condition. Set-shifting tasks exist for a surprising number of different animals, including humans, rats, and monkeys.
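To make the structure of a set-shifting task concrete, the toy simulation below scores a "subject" that sorts stimuli by a single feature while the correct rule silently switches partway through; responses that follow the old rule after the switch count as errors. The stimuli, rules, and scoring are hypothetical simplifications of the tasks described in this article, not an implementation of any specific test.

```python
# Toy set-shifting task: stimuli carry two features (color, shape); the sorting rule
# switches mid-task, and a "subject" that keeps using the old rule makes errors.
# Stimuli and rules are hypothetical, for illustration only.

import random

def make_stimulus():
    return {"color": random.choice(["red", "blue"]), "shape": random.choice(["circle", "square"])}

def run_task(subject_rule, trials_per_block=10):
    """Count post-shift errors for a subject that always sorts by `subject_rule`,
    against a task whose correct rule is 'color' in block 1 and 'shape' in block 2."""
    errors_after_shift = 0
    for block, correct_rule in enumerate(["color", "shape"]):
        for _ in range(trials_per_block):
            stim = make_stimulus()
            response = stim[subject_rule]          # the subject sorts by its own rule
            correct = stim[correct_rule]
            if block == 1 and response != correct:
                errors_after_shift += 1            # perseverative-style error after the rule shift
    return errors_after_shift

random.seed(0)
print(run_task("color"))   # perseverating on the old rule: errors on every post-shift trial
print(run_task("shape"))   # following the new rule: no errors after the shift
```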
THE ROLE OF THE PREFRONTAL CORTEX

How is it that monkeys can ever successfully complete tests of executive function? The answer lies in brain structure. In primate, and especially human, evolution, the development of one area of the brain particularly stands out: the prefrontal cortex (PFC). The PFC refers to the furthest forward, or "anterior," part of the frontal cortex, located below our foreheads. As we look at various members of the animal kingdom, the complexity of an animal's PFC is inversely related to its evolutionary distance from Homo sapiens. That is, the human PFC is more complex than that of other primates - denser with neurons and their interconnections - while the primate PFC is, in turn, more complex than that of other non-human mammals. Meanwhile, mammals are the largest category of animals that have any PFC whatsoever - although some other animals have analogous, but simpler, brain structures. Therefore, it is reasonable
that the PFC is responsible for many of the neurological functions which form the “crucial aspect[s] of what we think of as “human” cognition.”1,4 These include direction of spoken and written language, conscious response to emotions, and various social behaviors, in addition to executive function.14 Notably, the PFC is one of the latest brain areas to develop in humans, and the dorsolateral area of the PFC (the DLPFC) is especially delayed in development. The DLPFC is also the most directly implicated anatomical structure in executive function. This correlates to the very slow timeline for cognitive development in humans, where more complex cognitive skills are picked up later on in development. With regards to executive function, a recent study demonstrated that normal adults naturally respond more to tasks which engage their executive functions (EF) than children, while older children respond more than younger ones, indicating that people pay increasing amounts of attention towards tasks that require usage of EF as they grow into adulthood.13 Some researchers suggest the PFC has a notable role in psychological disorders through its late development and role in executive function.14 We know the PFC is involved in executive function partially because its activation is observed, in both humans
and monkeys, during set-shifting tasks.10 In monkeys and humans, loss of function in the DLPFC keeps subjects from correctly adapting to "conflict" in behavioral goals, as in the set-shifting task.10 Other parts of the PFC - rostral (RPFC), ventrolateral (VLPFC), and medial (MPFC) - are also consequential in EF. The VLPFC is related to working memory functions, although not directly responsible. It aids in short-term memory storage before information is called into use to adapt to some situation, and thus moved into working memory.4 The RPFC seems important to even higher-level processes that may be more specific to humans than non-human primates, such as multitasking or imagining "what other people are thinking," otherwise known as theory of mind.4 The MPFC contains the anterior cingulate cortex (ACC), which, while still controversial in terms of function,10 is highly connected to the DLPFC and appears to be related to proper EF, perhaps by helping alert the DLPFC to potential behavioral conflicts.4 However, the ACC is not as essential to EF as the DLPFC, since human patients with ACC lesions, or damage, still perform adequately on tests like the Wisconsin Card Sorting Test (WCST).4,10 Therefore, researchers tend to look at the PFC, especially the DLPFC, and the other brain areas it communicates with most, to better understand disorders marked by deficits in executive function. These include many of the most common and devastating neuropsychological illnesses, from the dementia seen in Alzheimer's patients, disorientation and loss of function in some patients who suffer physical trauma to the head (traumatic brain injury, abbreviated as TBI), and schizophrenia, to various mood disorders, such as bipolar disorder and major depression.

AGING AND DEMENTIA
Figure 1. Anatomy of the prefrontal cortex (PFC) from a lateral (side) view. Dorsolateral PFC (DLPFC), rostral PFC (RPFC), and ventrolateral PFC (VLPFC) are highlighted.
It seems natural to assume that executive function declines along with the rest of human functioning as people age. However, it appears to be so critical that while various contributing neurological functions may decline, one study suggests that the brain adjusts accordingly to maintain at least some healthy executive function. A 2016 study by Dulas and Duarte argues that while older adult humans show a level of dysfunction
in their DLPFC, they can recruit the VLPFC in order to perform similarly to younger adults on tests of executive function, in this case testing their working memory.3

“Researchers look at the PFC to better understand disorders marked by deficits in executive function.”

Dementia is therefore a distinctive phenomenon, rather than purely age-related. As the researchers Manning and Ducharme describe in their 2010 review, dementia is defined as a "decline in memory and a decline in at least one additional area of cognition including aphasia [inability to cognitively produce or comprehend speech], apraxia [inability to physically produce speech], agnosia [inability to recognize sensory experiences], or a decline in executive functioning."9 They note that deficits in EF have important consequences for everyday functioning, from planning to self-care.9 They also argue that more severe cases of Alzheimer's often involve executive dysfunction, meaning it is much harder for patients to take care of themselves when executive dysfunction is present.9 While it is tempting to think of dementia as a natural part of aging, it is in fact no less a disruption in health and function than other life-threatening diseases, because of the central role of executive function in our lives. The most common form of dementia is the one that accompanies many cases of Alzheimer's disease. Though the specific mechanisms are not well understood, this dementia somehow relates to the distinctive neurological characteristics of Alzheimer's. These are primarily various protein aggregates.9 Tellingly, one of these diagnostic markers is also called senile plaque.12
Figure 2. Wisconsin Card Sorting Test. The card can be matched by shape (1), color (2), or number (4). The subject is told to sort by one rule in the first section, then another in the second. Results from the test are generally used as a proxy for executive function in humans. Variants are also used to test EF in humans and monkeys.
Another common manifestation of dementia is vascular dementia (VD), which is associated with health events that affect blood flow to parts of the brain, such as stroke. When executive dysfunction appears in VD, it tends to affect patients' capacities for focus and problem-solving.9

TRAUMATIC BRAIN INJURY

Traumatic brain injury (TBI) - injury to the brain resulting from physical trauma - is also associated with executive dysfunction in some cases. Recently, a study showed that TBI patients with at least moderate levels of injury - severe enough to necessitate a caregiver at the time of the study - are worse at comprehending subtleties in spoken language.1 Specifically, these patients found it hard to distinguish between direct, indirect, deceitful, and ironic verbal communication. Meanwhile, they do not show a notable difference compared to controls on comprehension of direct verbal communication, where there is little need for cognitive flexibility or for applying previous experiences of extralinguistic cues (e.g. sarcasm). The researchers developed the distinction between extralinguistic and direct communication after running a battery of tests on the TBI patients and matched controls, then using a statistical model to evaluate which explanatory variables correlate with the various communicative deficits observed. The model only showed statistically significant correlations between executive function and extralinguistic deficits.1 They concluded that the difference between TBI patients and controls on extralinguistic comprehension is best explained by the state of executive functions in the subject (working memory, planning, and cognitive flexibility), as opposed to lower-level, less conscious forms of cognition, like attention,
long-term memory, and theory of mind.1 One notable scientist, Muriel Lezak, argues that deficits in executive function form the basis for “the most crippling and often the most intractable disorders associated with severe TBI.”8 This is because they tend to affect our self-awareness, which can significantly reduce our ability to direct and control ourselves and to properly understand and interact with others.8 Executive dysfunction also affects our ability to properly access and apply perceptions and memories to new contexts, which is the basis for correctly responding to new situations. This is demonstrated by the fact that among patients with physical trauma to the nervous system, those with worse executive function after their injuries tended to be less independent in their daily lives a few months after injury.8

SCHIZOPHRENIA

Executive function deficits are also characteristic of schizophrenia, which has a strong genetic component and typically takes hold relatively early in life. Some researchers argue that the degree of patients’ various deficits in everyday functioning - being productive at work, socializing effectively, and so on - directly correlates with executive dysfunction.7 EF deficits strongly correlate with the “disordered” aspects of schizophrenia.7 Notably, schizophrenia also tends to be accompanied by altered functioning of the DLPFC and ACC. Some people are at a higher genetic risk for schizophrenia, meaning they carry some version of at least one gene (an allele) that is correlated with higher risk, and are therefore considered “risk allele” carriers. Risk allele carriers, in addition to people with schizophrenia themselves, show unusual patterns of activation in the ACC and DLPFC, among other areas in the PFC.14 Interestingly, risk allele carriers also show increased connectivity of the ACC and DLPFC, which the authors of a review (Sutcliffe et al.) suggest to be the result of “context-inappropriate hyperfunction” which disrupts
regular executive function - for example, paying excessive attention to a situation that should be handled automatically impairs one’s ability to respond appropriately to a newer one.14 First-degree, non-affected relatives of patients with schizophrenia, whether or not they carry risk alleles, are statistically more likely to develop schizophrenia than the general population; they have also been found to have similar, though much less severe, cognitive deficits relative to members of the general population.6 While executive dysfunction makes it nearly impossible for patients with schizophrenia to lead normal lives, EF may soon be targeted by drugs that would magnify the effect of two very common and important neurotransmitters, glutamate and dopamine.2 A study by Desai et al. showed that set-shifting ability was impaired when glutamate and dopamine were inhibited from binding to the NMDA and D1 receptors, respectively, using drugs that compete with the neurotransmitters.2 Neurotransmitters are small chemicals, produced by neurons, which elicit some response in target neurons through binding. Glutamate and dopamine are both excitatory at the receptors targeted by these drugs, which means they amplify the activity of their target neurons. The researchers believe that defects in both receptors’ functions led to the mice being unable to adapt to behavioral conflict, although sub-effective doses of drugs targeting either receptor alone still allowed for adaptation.2 Without EF, leading an independent modern life is nearly impossible. Precisely for this reason, EF is a good target for treatment in many psychological diseases. In humans, a large review of existing research suggests that psychological treatments to restore EF, such as cognitive rehabilitation, can treat some symptoms of schizophrenia.7 Recent studies in animal models suggest possible pharmaceutical treatment directions as well, especially in
schizophrenia. Treatment of executive dysfunction is currently a promising and important area of growth in psychiatry.

"Set-shifting ability was reduced by the inhibition of glutamate and dopamine binding to the NMDA and D1 receptors."

Figure 3. A rendition of dendritic spines, which are points of informational input for neurons. Each protrusion represents one possible synapse, or connection, with another neuron.

REFERENCES

1. Bosco, F.M., Parola, A., Sacco, K., Zettin, M., Angeleri, R. (2017). Communicative-pragmatic disorders in traumatic brain injury: The role of theory of mind and executive functions. Brain and Language, 168, 73-83. doi:10.1016/j.bandl.2017.01.007.
2. Desai, S.J., Allman, B.L., Rajakumar, N. (2017). Combination of behaviorally sub-effective doses of glutamate NMDA and dopamine D1 receptor antagonists impairs executive function. Behavioural Brain Research, 323, 24-31. doi:10.1016/j.bbr.2017.01.030.
3. Dulas, M.R., Duarte, A. (2016). Age-related changes in overcoming proactive interference in associative memory: The role of PFC-mediated executive control processes at retrieval. NeuroImage, 132, 116-128. doi:10.1016/j.neuroimage.2016.02.017.
4. Gilbert, S.J., Burgess, P.W. (2008). Primer: Executive Function. Current Biology, 18(3), R110-R114. doi:10.1016/j.cub.2007.12.014.
5. Hendry, A., Jones, E.J.H., Charman, T. (2016). Executive function in the first three years of life: precursors, predictors, and patterns. Developmental Review, 42, 1-33. doi:10.1016/j.dr.2016.06.005.
6. Jameson, K.G., Nasrallah, H.A., Northern, T.G., Welge, J.A. (2011). Executive function impairment in first-degree relatives of persons with schizophrenia: A meta-analysis of controlled studies. Asian Journal of Psychiatry, 4(2), 96-99. doi:10.1016/j.ajp.2011.04.001.
7. Kluwe-Schiavon, B., Sanvicente-Vieira, B., Kristensen, C.H., Grassi-Oliveira, R. (2013). Executive functions rehabilitation for schizophrenia: A critical systematic review. Journal of Psychiatric Research, 47(1), 91-104. doi:10.1016/j.jpsychires.2012.10.001.
8. Lezak, M.D. (2004). Neuropsychological Assessment.
9. Manning, C.A., Ducharme, J.K. (2010). Dementia Syndromes in the Older Adult. Handbook of Assessment in Clinical Gerontology, 155-178. doi:10.1016/B978-0-12-374961-1.10006-5.
10. Mansouri, F.A., Egner, T., Buckley, M.J. (2017). Monitoring Demands for Executive Control: Shared Functions between Human and Nonhuman Primates. Trends in Neurosciences, 40(1), 15-27. doi:10.1016/j.tins.2016.11.001.
11. Nakahara, K., Hayashi, T., Konishi, S., Miyashita, Y. (2002). Functional MRI of Macaque Monkeys Performing a Cognitive Set-Shifting Task. Science, 295(5559), 1532-1536. doi:10.1126/science.1067653.
12. Nelson, P.T., Alafuzoff, I., Bigio, E.H., et al. (2012). Correlation of Alzheimer disease neuropathologic changes with cognitive status: a review of the literature. Journal of Neuropathology and Experimental Neurology, 71(5), 362-381. doi:10.1097/NEN.0b013e31825018f7.
13. Ohyama, T., Kaga, Y., Goto, Y., Aoyagi, K., Ishii, S., Kanemura, H., Sugita, K., Aihara, M. (2017). Developmental changes in autonomic emotional response during an executive functional task: A pupillometric study during Wisconsin card sorting test. Brain and Development, 39(3), 187-195. doi:10.1016/j.braindev.2016.10.002.
14. Sutcliffe, G., Harneit, A., Tost, H., Meyer-Lindenberg, A. (2016). Neuroimaging Intermediate Phenotypes of Executive Control Dysfunction in Schizophrenia. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 1(3), 218-229. doi:10.1016/j.bpsc.2016.03.002.
15. Teffer, K., Semendeferi, K. (2012). Chapter 9 - Human prefrontal cortex: Evolution, development, and pathology. Progress in Brain Research: Evolution of the Primate Brain, 195, 191-218. doi:10.1016/B978-0-444-53860-4.00009-X.

IMAGES

Cover. https://i0.wp.com/neurosciencenews.com/files/2016/06/prefrontal-cortex-mood-neurosciencenews.jpg
Fig. 1. Gilbert, S.J., Burgess, P.W. (2008). Primer: Executive Function. Current Biology, 18(3), R110-R114. doi:10.1016/j.cub.2007.12.014.
Fig. 2. https://upload.wikimedia.org/wikipedia/en/0/0f/Wisconsin_Card_Sorting_Test.jpg
Fig. 3. https://nihdirectorsblog.files.wordpress.com/2017/02/dendrites1.jpg
HIJACKING THE BRAIN BY YIZHEN ZHANG
EXPLORING THE BACKGROUND AND IMPACTS OF OPTOGENETICS
You’re using light at the moment to read this. I am using light to write this. Although we use light on an everyday basis, we often take it for granted and do not fully appreciate its power. As a matter of fact, light can control the brain. Optogenetics is a new biotechnology technique that uses light to control cells in living tissue, usually neurons. Neurons are cells that process and transmit chemical or electrical signals. As its name implies, optogenetics is a combination of optics and genetics. The technique is composed of three major steps. The first step is genetically modifying neurons to become responsive to light. A light source is then implanted to turn the neurons “on and off,” so that their activity can be correlated with behavioral or physiological changes. The last part, which is occasionally skipped, is recording the brain’s resulting electrical activity. These intricate yet invasive procedures have so far limited optogenetics to animal models. Despite the current limitation on recipients, there seems to be no limit on potential applications. Optogenetics has been highlighted as a “Breakthrough of the Decade” and “Method of the Year” by Science and Nature Methods, respectively. Thus far, optogenetics has been used both to explore treatments for disease and to better understand basic neuroscience.

The concept behind optogenetics was first conceived in the 1970s, when Francis Crick, co-discoverer of the structure of DNA, noted that existing ways of stimulating cells could not distinguish between cell types.1 A useful stimulus would therefore have to be precise enough to control the activity of one cell type, and Crick later speculated, almost casually, that light could be such a tool. A few years before this, the basis for the tool had been identified in an initially unrelated discovery: bacteriorhodopsin, a light-sensitive microbial protein that captures light energy and converts it into chemical energy that stimulates the cell. This discovery soon led to the identification of channelrhodopsin, the protein now commonly used for optogenetics. However, it took decades for neuroscientists to link these two concepts; only in the summer of 2005 was the insertion of the channelrhodopsin gene into mammalian neurons reported.

The full potential of optogenetics is still being discovered. The applications seem limitless, bounded only by ethics. Most applications so far have aimed at treating diseases, including Parkinson’s disease, addiction, depression, and many more. These approaches rely on ‘hijacking’ specific neuronal circuits that have been genetically modified to carry a light-sensitive protein like channelrhodopsin. The neurons are then selectively controlled by a laser to activate specific regions of the brain or body. It is almost like a light switch but
“..the idea of controlling the brain with light moves away from fantasy.�
This technique seems like it came straight out of a science fiction story, and the possibilities are truly endless. The current advancements in neuroscience and genetics are promoting the development of optogenetics at an increasingly faster rate. The full realm of possibilities is becoming clearer as the idea of controlling the brain with light moves away from fantasy. 1. Crick, F.H. (1979). Thinking about the brain. Sci. Am. 241, 219-232
reversed. Light can turn the switch on or off which leads to a behavioral or physiological change. Beyond application, the use of optogenetics also sheds significant light on basic research and understanding the functions of neurons in our body. One particular research at New York University has been able to help clarify how the hippocampus, part of the brain responsible for long term memory, works.2 Prior, the function of the hippocampus had been identified for a long time but linking it to the structure of circuits has been weak. Optogenetics allowed the researchers to better understand which other brain regions responded to an activation or inactivation of the hippocampus. Parallel research is performed to better understand other portions of the brain, muscles, and stem cells.
2. Suzuki, W., & Naya, Y. (2011). Two Routes for Remembering the Past. Cell,147(3), 493-495. doi:10.1016/j. cell.2011.10.005 3. Ferenczi, E., & Deisseroth, K. (2016). Illuminating next-generation brain therapies. Nature Neuroscience, 19(3), 414-416. doi:10.1038/nn.4232 4. Yizhar O, Fenno LE, Davidson TJ, Mogri M, Deisseroth K. Optogenetics in neural systems. Neuron. 2011 Jul 14;71(1):9-34. doi: 10.1016/j. neuron.2011.06.004. PubMed PMID: 21745635. 5. Cho, Y. K., & Li, D. (2016). Optogenetics: Basic Concepts and Their Development. Methods in Molecular Biology Optogenetics, 1-17. doi:10.1007/978-14939-3512-3_1
The primary drawback of optogenetics is the genetic mutation part. It has been found that not all desired cells may express the light sensitive protein gene, thus not representing the full functions. At the same time, these genetic mutations have also been known to accidentally alter undesired cells. This technique, nevertheless, is still relatively young and more research is required.
BRAIN INITIATIVE: POWER IN NETWORKS
BY NINI LIU
EXPLORING THE BRAIN INITIATIVE’S TRAJECTORY AND DOMESTIC EFFECTS
Though the brain has been deeply analyzed and dissected in the past century, much of it remains a vast, unexplored terrain. On April 2, 2013, the White House launched the BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) with the goal of creating innovative technologies to address, treat, and prevent brain disorders such as Alzheimer’s, epilepsy, and Parkinson’s disease.6 In the years since, the BRAIN Initiative has launched an explosion of neurotechnology in the United States, driving brain research and opening windows for innovative treatments. The key to this golden age in neuroscience lies in new private-public partnerships and domestic collaboration across scientific fields. The BRAIN Initiative’s goal of modeling the active brain may seem remote to everyday citizens; however, not understanding how our brain is wired imposes massive costs on the US economy.4 In a
2016 report by the Information Technology and Innovation Foundation (ITIF), mental health-related conditions were estimated to cost the U.S. economy $1.34 trillion.18 This is enough to buy approximately 1.3 trillion giant Hershey’s Milk Chocolate Bars, or, at a price of about $22,000 per car, roughly 30% of the cars in the U.S. This amount only accounts for financial costs related to direct treatments, care for related physical illnesses, increased crime rates, and lost productivity in the workforce.18 It does not factor in the indirect economic costs inflicted on caregivers or adults with
mental illness that can result from lower education, wages, productivity, and homelessness. As such, it is important to note that the initiative is not just focused on elevating neuroscience research or treating brain-related disorders, but also on alleviating far-reaching economic and social burdens that directly or indirectly affect everyday U.S. citizens.
“...mental health-related conditions were estimated to cost the U.S. economy $1.34 trillion.”
The crucial nature of the BRAIN Initiative has promoted public-private partnerships, which lie at the core of revolutionizing basic research and technology in neuroscience. Essentially, the BRAIN Initiative exists only due to continued investments from federal research agencies and private research institutes, companies, and foundations (Figure 1).1 Thus,
the importance of continuing brain research was highlighted in the 21st Century Cures Act, a bill passed with support from both Republicans and Democrats in the House of Representatives on July 10, 2015, and later in the Senate. Essentially, this bill gave the National Institutes of Health (NIH), a key player in the BRAIN Initiative, authority over initiating private-public collaborations. Consequently, this allowed the Obama administration to propose, with greater ease before Congress, $300 million to be allocated to the BRAIN Initiative in the fiscal year 2016 budget.1 What this means is that if NIH cannot meet the $300 million budget, it can recruit private investors to meet this target. In summary, public-private partnerships help sustain the BRAIN Initiative over a longer period of time by pooling financial resources from the federal level to the private level.

Another way public-private partnerships accelerate progress for the BRAIN Initiative is by facilitating partnerships between researchers and manufacturers. One of the biggest problems recognized by the initiative was that researchers lacked powerful tools and technologies to understand active neural circuitry. As part of its Public-Private Partnership Program, NIH helps create partnerships between clinical researchers and manufacturers by acting as the middleman. According to NIH, basic research can progress and accelerate because clinical investigators have “early access to neuromodulation and recording devices for human clinical studies.”17 Some examples of NIH’s partnerships include NeuroNexus, Blackrock Device, and Second Sight, which provide researchers with tools like multi-array electrodes to record large brain regions, spinal cord stimulators, and a bionic eye that uses electrical stimulation in the retina to “induce visual perception in blind individuals.”12,13 With technologies and facilitated partnerships like these, researchers and manufacturers in the BRAIN Initiative can focus on progressing brain research with novel technologies rather than on legal and administrative technicalities.

A key challenge that the BRAIN Initiative recognizes and faces is data sharing. One of the problems in the neuroscience field is that there is no unified format for reporting findings and data, making replication and comparison of experiments difficult.18 And now, with the BRAIN Initiative’s nationwide projects, a standardized data system is more necessary than ever to ensure progress through collaboration between laboratories. Moreover, the 21st Century Cures Act also requires federal agency-supported research to be shared publicly, to pool data and advance rapid biomedical research.5

Therefore, in 2014 foundations and research institutes such as the Howard Hughes Medical Institute, UC Berkeley, and the Allen Institute for Brain Science collaborated and launched “Neurodata Without Borders: Neurophysiology” (NWB). This project aims to create a uniform data format for cellular-based neurophysiology data and optical physiology data (Figure 2).21

Figure 1. Public-private partnerships involving key federal agencies, foundations, and private companies.

Figure 2. Through Neurodata Without Borders, laboratories must overcome technological and cultural hurdles.
“The importance of ... brain research was highlighted in the 21st Century Cures Act, a bill passed with support from both Republicans and Democrats in the House of Representatives and the Senate...”
The challenge with this is that individual laboratories have different standards and conditions for obtaining recordings. Software developers and vendors have to translate multiple neurophysiology datasets and unify them into a new common language. Then, they have to make sure that the data software can be easily understood and used by researchers.18 Finally, as a key developer of the BRAIN Initiative, Terrence Sejnowski, states, “Investigators will have to be willing to part with their data,” which “will require a culture change because it’s not how most labs work today.”19 Essentially, creating a standardized data set like NWB is only one small piece of ensuring continued federal funding and large-scale collaborations for the BRAIN Initiative. For the BRAIN Initiative to be successful, laboratories across the U.S. must also overcome cultural barriers.

Challenges aside, since the start of the BRAIN Initiative and its new partnerships, there have been momentous breakthroughs. In 2015, Jeff Lichtman and his team at Harvard University finished a comprehensive 3-D reconstruction of 1,500 cubic microns of a mouse neocortex (Figure 3). While this is only about the width of a human hair, it is the “largest portion of the mammalian brain rendered in full detail,” containing 1,600 neurons and 1,700 synapses.14 Fundamentally, Lichtman cracked not only how synapses are formed, but also the beginning of the brain’s wiring diagram, providing insights into what happens in Alzheimer’s or Parkinson’s when brain circuits malfunction.

NIH funding to the Roth Lab at the University of North Carolina also yielded a unique tool: DREADD, or Designer Receptors Exclusively Activated by Designer Drugs. Basically, with DREADD researchers can turn neurons on and off in vivo by injecting DREADD non-invasively via a virus into specific neurons and activating it only by using a designer drug, Clozapine-N-oxide.13 This tool allows many labs around the world to control neuronal activity in living mammals. In the Susan M. Ferguson and John F. Neumaier lab at the University of Washington, DREADD technology has been used to discover how disruption of the striatonigral circuit in the brain can cause uncommon reward and learning behavior, which manifests in neuropsychiatric disorders.7

Another NIH breakthrough includes the blueprint for AMPET, a mobile brain-scanning helmet developed by Dr. Julie Brefczynski-Lewis and Dr. Kuang Gong at West Virginia University (Figure 4).8 As opposed to previous immobile, low-resolution positron emission tomography (PET) scanners, AMPET images a moving person in their natural environment, allowing physicians to study biochemical changes as a diagnostic tool for diseases like Alzheimer’s and depression, as well as for seizure activity.2 Overall, major developments like Dr. Lichtman’s reconstruction, DREADD, and AMPET were made possible through collaborations between researchers and manufacturers via public-private partnerships.

But are these collaborations and partnerships strong enough to sustain the BRAIN Initiative under the proposed fiscal year 2017 budget? In March 2017, the Trump administration proposed an 18% cut to NIH’s $31.7 billion budget. The key reason is to eliminate overhead payments and indirect expenses to universities and research institutes. This is mainly money that does not go directly towards research, but towards costs like running facilities, or even paying for staff needed to ensure experiments comply with ethical standards.9 Here is where public-private partnerships act as a double-edged sword. If the budget proposal is approved in May, the BRAIN Initiative will face major setbacks due to its large dependence on the NIH. However, if it can garner greater financial support from private sectors, the initiative has a chance of weathering the budget cuts, but its expected scientific breakthroughs would be greatly delayed. To at least maintain the BRAIN Initiative’s current trajectory, research institutes, universities, private foundations, and everyday citizens need to unite and challenge the budget cuts.

Over the past few years, the BRAIN Initiative has yielded revolutionary discoveries in neuroscience, generating greater understanding towards curing neuropsychiatric and neurological disorders. It has also shown the benefits of cross-cultural collaborations between the scientific community and public-private partnerships. Most importantly, it has revealed the necessity and the power of creating unified networks to support biomedical and scientific progress. While networks do carry risks and challenges in shifting political environments, efforts to bolster these connections can overcome obstacles and continue the BRAIN Initiative’s trajectory.

Figure 3: A 3-D rendering of a mouse neocortex by the Lichtman Lab.

Figure 4: Three AMPET models with varying mobilities and usages.
References
1. About the Brain Activity Map Project. (n.d.). Retrieved March 06, 2017.
2. AMPET Helmet. (n.d.). Retrieved March 06, 2017.
3. Argus® II Retinal Prosthesis System. (n.d.). Retrieved April 03, 2017, from http://www.secondsight.com/g-the-argsii-prosthesis-system-pf-en.html
4. Burrus, D. (2016, May 12). Why The Human BRAIN Initiative Is So Important To All Of Us! Burrus Research Associates. Retrieved April 03, 2017.
5. The 21st Century Cures Act - A View from the NIH. NEJM. (n.d.). Retrieved March 20, 2017, from http://www.nejm.org/doi/full/10.1056/NEJMp1615745#t=article
6. Fact Sheet: BRAIN Initiative. (n.d.). Retrieved March 13, 2017, from https://obamawhitehouse.archives.gov/the-press-office/2013/04/02/fact-sheet-brain-initiative
7. Ferguson, S. M., & Neumaier, J. F. (2012, January 01). Grateful DREADDs: Engineered Receptors Reveal How Neural Circuits Regulate Behavior.
8. Gong, K., Majewski, S., Kinahan, P. E., Harrison, R. L., Elston, B. F., Manjeshwar, R., . . . Qi, J. (2016, April 19). Designing a compact high performance brain PET scanner—simulation study.
9. Kaiser, J. (2017, March 20). Trump’s NIH budget may include reducing overhead payments to universities. Science. Retrieved April 04, 2017, from http://www.sciencemag.org/news/2017/03/trump-s-nih-budget-may-include-reducing-overhead-payments-universities
10. Jorgenson, L. A., Newsome, W. T., Anderson, D. J., Bargmann, C. I., Brown, E. N., Deisseroth, K., . . . Wingfield, J. C. (2015, May 19). The BRAIN Initiative: developing technology to catalyse neuroscience discovery.
11. Keshavan, S. M. (2017, April 01). Should taxpayers cover the light bills at university labs? Trump kicks off a tense debate. Retrieved April 04, 2017, from http://www.pbs.org/newshour/rundown/university-funding-trump-debate/
12. Kipke, D. (2015, September). NeuroNexus Materials. Retrieved April 3, 2017, from https://www.braininitiative.nih.gov/pdf/NeuroNexus_device_exhibits_92015_508C.pdf
13. Krook-Magnuson, E., & Soltesz, I. (2015). Beyond the hammer and the scalpel: selective circuit control for the epilepsies. Nature Neuroscience, 18(3), 331-338. doi:10.1038/nn.3943
14. Lichtman, J., Morgan, J., Conchello, J., Berger, D., & Schalek, R. (2015, July 30). Reconstructing the Brain’s Wiring Diagram.
15. Milestones. (n.d.). Retrieved March 06, 2017, from http://www.braininitiative.org/milestones/
16. Nager, A. B., & Atkinson, R. D. (2016, July 1). A Trillion-Dollar Opportunity: How Brain Research Can Drive Health and Prosperity (Rep.). Information Technology and Innovation Foundation. Retrieved April 2, 2017, from http://www2.itif.org/2016-trillion-dollar-opportunity.pdf?_ga=1.206979667.1104540823.1491188881
17. BRAIN Initiative Public-Private Partnership Program: Industry Partnerships to Facilitate Early Access to Neuromodulation and Recording Devices for Human Clinical Studies. (n.d.). Retrieved April 03, 2017, from https://braininitiative.nih.gov/resources/brain_ppp/index.htm
18. Prominent U.S. Research Institutions Announce Collaboration Toward Sharing and Standardizing Neuroscience Data. (n.d.). Retrieved April 03, 2017, from http://vcresearch.berkeley.edu/news/prominent-us-research-institutions-announce-collaboration-toward-sharing-and-standardizing
19. Three Years at the Frontiers of Neuroscience. (n.d.). Retrieved April 03, 2017, from http://www.braininitiative.org/achievements/brain-initiative-three-years-frontiers-neuroscience/
20. Upton, F. (2015, July 13). H.R.6 - 114th Congress (2015-2016): 21st Century Cures Act.
21. About NWB. (n.d.). Retrieved March 06, 2017, from http://www.nwb.org/aboutnwb/
Image References
http://www.kavlifoundation.org/ about-brain-initiative http://www.kavlifoundation.org/sites/default/files/image/resources/2014_SL_Neuro_Cartoon.jpg https://www.pethelmet.org/about
Behind the chemistry of human activity affecting the earth’s atmosphere
An interview with Professor Ronald C. Cohen BY: HELIYA IZADPANAH, KARA JIA, GEORGIA KIRN, TIFFANY NGUYEN, ISMAEL OSTOLAZA
Professor Ronald Cohen is a professor in the Department of Chemistry and of Earth and Planetary Sciences at University of California, Berkeley. Professor Cohen has been interested in atmospheric chemistry and planetary sciences. In this interview, we focus on one of his specialties, gas emission monitoring.
BSJ: How did you initially get involved and interested in the field of atmospheric chemistry and planetary science? RC: As a grad student, I was a fundamental physical chemist. I worked on measuring the absorption spectrum of clusters of small molecules, and I had a bunch of skills at the end of that time that had to do with lasers and how to think about interesting problems. I was looking for something where I didn’t know anything. I wanted to change as much as I could while finding some place where I could use the skills that I had. So, I went to do my post-doc in atmospheric science. I did my PhD here [at UC Berkeley] and my postdoc at Harvard. BSJ: In one of your papers you called for higher accuracy when interpreting space-based remote sensing of NO2 levels, specifically higher spatial and temporal resolution. Generally, how would these improvements affect the broader scheme of monitoring greenhouse gases? RC: That question tangles up a bunch of different things. So let me try to untangle them. There are some measurements from space right now, especially measurements of NO2 (nitrogen dioxide), where the measurement of the spectrum itself is
incredibly precise and accurate, and our ability to interpret that spectrum - what it means physically in the atmosphere - is not as good as the fundamental measurement... In principle, the information available to us is much better than our current ability to interpret it. That’s part one - we’d like to figure out a way to get all the information out of this expensive and beautiful measurement. Part two is the natural length scales in the atmosphere… The meteorological time scale for those chemicals to be moved upwind and diluted and diffused is the same as the chemical time scale… That’s part of the reason why we need to have higher resolution. We’re trying to measure something that’s changing on a 75 km length scale with an instrument in space where the best has only 24 km pixel resolution, which isn’t in perfect registration for a time-equals-zero experiment. The reason we need higher resolution, the problem we’re trying to solve, has variation on a
length scale that we can’t observe right now.

“What we do has direct connection to other people’s lives in ways that other fundamental scientists don’t”

BSJ: How would you go about choosing which city to study? We saw that you used Atlanta, Georgia as a model city for observing effects of daily NO2 levels.

RC: In general, we’re thinking about two different lines of research. In one line, we are trying to develop methods to do things better; when we do that, we tend to be quasi-random... The ultimate goal is to apply the methods to the whole earth; there is a separate, parallel effort where we’re not picking any individual city.

BSJ: What are VOC emissions, and could this spatial and temporal resolution improvement be applied to VOC emission monitoring?

RC: There are two different kinds of problems my research reports on broadly. One is understanding climate and the chemicals, primarily CO2, methane, nitrous oxide, ozone, and particles in the atmosphere, that are responsible for climate change; the other is understanding the chemical constituents of the atmosphere from the point of view of public health. In that sense, much of my work on NO2 and organic molecules - which is what VOCs are - is related to the public health questions. We also have an interest in how the world works, in the same way you might want to understand why there are electrons in atoms. Most of the time, I am approaching things that way, and I remember that what we do has direct connection to other people’s lives in ways that other fundamental scientists don’t, and we try to engage on that. What we can see from space is based on things that are both high enough in atmospheric concentration and have strong enough absorption; that limits you to a small subset of molecules that are important. NO2 is an example: it absorbs in the visible light spectrum, and it’s a brown gas, giving it very strong overlap with the solar spectrum and thus strong enough absorption to be seen.
BSJ: You’ve worked on designing a new method for monitoring atmospheric composition, Tropospheric Emissions: Monitoring of Pollution (TEMPO). What kind of data does TEMPO collect?

RC: TEMPO, a new satellite instrument, is an improved version of OMI, the instrument used in the project in Atlanta. Both instruments are standard UV spectrometers with half-nanometer resolution. What’s different about TEMPO is that it has an imaging camera behind it... We’re getting a map of the spectrum of the reflected sunlight from the Earth. With both of these instruments, we’re doing a standard kind of Beer’s Law absorption experiment… This is, in concept, an incredibly simple experiment; the challenge is that we’re going to put it several hundred miles above the Earth and never be able to touch it again after it’s built. What’s new about TEMPO is that it will have a bigger telescope than the current generation of instruments, so its footprint on the ground will be smaller. The instrument we use now has 13 x 24 km pixels; TEMPO is going to have 3 x 3 km pixels... It’s going to be launched on a communication satellite, and it will be sitting on the same platform due south of somewhere like Oklahoma, near the equator... Between 3 x 3 km pixels and 13 x 24 km pixels, we effectively get area resolution that’s almost 100 times better. The science question that we’re trying to address requires understanding the spatial dimension on which the chemicals are changing in the atmosphere. We’ll be able to completely resolve the behavior of chemicals as they change in the atmosphere on the spatial scale of that change.

BSJ: Which methods helped to inform your design of TEMPO?

RC: TEMPO is a big team, and the core designers are not at Berkeley. In concept the spectrometer is simple: the light comes in, it hits the grating, the grating spreads the light out on a detector… But there are all kinds of important details for getting a really precise measurement, and that’s being handled by a team at the Smithsonian Astrophysical Observatory... The part of the project that’s delegated to us at Berkeley, or that we’re taking the lead on at least, is thinking about how, once we have a measurement, to get that measurement into a measurable, sortable amount of NO2.
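The conversion Cohen describes - going from a measured spectrum of reflected sunlight to an amount of NO2 - rests on the Beer’s Law idea he mentions above. The toy calculation below illustrates only that principle and is not the TEMPO retrieval algorithm; the cross-section and intensity numbers are made up for the example.

# Toy Beer's Law illustration (not the TEMPO retrieval code): comparing the
# intensity expected without absorption to the intensity actually observed
# gives the amount of absorber along the light path.
import math

sigma_no2 = 5.0e-19     # hypothetical absorption cross-section, cm^2 per molecule
i_reference = 1.00      # intensity expected with no NO2 (arbitrary units)
i_measured = 0.97       # intensity actually observed

# Beer-Lambert: I = I0 * exp(-sigma * column), so column = ln(I0 / I) / sigma
column_density = math.log(i_reference / i_measured) / sigma_no2
print(f"slant column density ~ {column_density:.2e} molecules/cm^2")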
“This is, in concept, an incredibly simple experiment; the challenge is that we’re going to put it several hundred miles above the Earth and never be able to touch it again after it’s built.”
Optical ray trace for the TEMPO instrument, including telescope and spectrometer
BSJ: TEMPO is part of an international satellite constellation. In what ways will TEMPO’s international collaboration benefit current and future studies on air quality?

RC: So the atmosphere sits on one planet, and air travels around the whole northern hemisphere in about two weeks. So air that was in China last week, or Korea, is going to be here in seven days, give or take. Understanding those links is really important, so the advantage of cooperating with our international partners in Korea and in Europe, and trying to do something at the same time, is being able to raise the level of everyone’s information. We each know about this field that we’re looking at, but we also want to know what’s coming from upwind of us, and what we’re sending down. There’s also a really valuable science-culture side. A year ago, we did an experiment in Korea with nearly 150 scientists supported by NASA and 150 Korean scientists… Building that sort of cooperation - the best things that happen in science are often because two people both sit down and say, I want to try something together.

BSJ: How can TEMPO’s scientific studies impact policy on a regional level?

RC: The idea is that we’re going to understand the distribution of NO2, ozone, and formaldehyde. We’ll understand that distribution on a spatial scale we’ve never been able to map before. So I can give you a model today of the predicted distribution of NO2 at any spatial resolution you want… We’ll have a complete map every hour, and that will change our ability to ask good questions.

BSJ: What implications does BErkeley Atmospheric CO2 Observation Network (BEACO2N) monitoring -- which is a new, affordable, and more precise method of measuring CO2 in urban areas -- have on policy, legislation, and science?

RC: …About five years ago I was watching some news piece about CO2 treaties, and there was a buzz in the
scientific community about, if we did sign a treaty that was going to reduce CO2 emissions, how would we know we were doing it? As you know from reading the news, if we say we’re going to reduce the emissions from cars and we don’t test, we’re not going to get the emissions reductions we expect - the diesel engine example being high in my mind there, but it’s not the only one. The heavy-duty diesel truck manufacturers did the same thing 20 years ago. They had some strategy for complying with regulations when trucks weren’t moving, and on the road they did something totally different. So if you also think about the CO2 treaty, the most important thing is to give good feedback to all the people who actually have good will… So we wanted to think about how we would help those people. But what information did they need? How could observing the atmosphere really tell us that the things we are doing are having an effect? [One piece of our thinking is] that - “I think my CO2 emissions went down by 20%; what does the world say? Are they saying it only went down by 10? By 40? And how are we going to figure that out?” The other piece was that something like half the people in the world live in cities right now, and 25 years from now, it’s going to be three-quarters - if we’re going to solve the climate problem, it’s going to be in cities. So, we really wanted a better way to think about cities. Then the third line of thinking was that there was a tremendous amount of thought about networks of sensors and networks of all kinds of things. One way you see it is when you get on the road and you pull out Google Maps and it tells you where the traffic is. You know that because hundreds of dozens of people let Apple or Google know where their phone is, and it shows when you’re not moving out on the road. That’s a network in action… The initial vision of BEACON was that we would put all the sensors on the roofs of middle schools and high schools. And we would have a curriculum for science teachers to use the data directly from the roof and talk about what it means to make a measurement, how you think about the statistics, and what’s different about what
you’re measuring in a real-world setting versus a controlled laboratory setting. The wonderful thing about the lab is you can have something that depends on dozens of parameters and you can change them one at a time. When you study the atmosphere, you don’t have that luxury; you get what you get…

BSJ: What technological advancements and economic markets allowed more affordable and accurate monitoring of greenhouse gas emissions (such as carbon dioxide and nitrous oxides)?

RC: What really makes this possible is mostly better communication. Free public Wi-Fi, those kinds of things - that’s a key. Much, much less expensive computers. Another part is really small and inexpensive chemical sensors - we’re using sensors that were originally targeting the market for industrial alarms. They were originally just meant to be threshold sensors, to alarm you that certain concentrations of chemicals are higher than is safe in your house or business. They got to be good enough that they are now used for more continuous measurements, not just as digital alarms. A household carbon monoxide alarm is one of the classic things that we are talking about.
The other thing was that the normal way people go about measuring CO2 is to buy a $75,000 instrument which is incredibly precise and accurate. Carbon dioxide is the hardest thing that we measure, in the sense that the concentration of carbon dioxide in the atmosphere is 400 ppm mole fraction, and the interesting variations are at about 1. So, if you don’t make a measurement that is good to about one part in 400, it’s boring. Whereas if we make a measurement of NO2 to 10%, we’re good. So, CO2 was much, much more challenging because the interesting variation is so much smaller; you have to make a measurement with super high precision and accuracy or it’s not useful. We made a compromise in our network by buying an intermediate-quality instrument that’s good enough for our purpose, banking on the idea that… if we have 20 instruments for the same price, then we get 20 times the square root of N advantage… We are not putting 20 instruments in the same location and getting a direct square root advantage by measuring exactly the same thing - we have the square root of N advantage distributed over space, where each one is measuring something different. That sort of understanding is one of the challenges that made us excited to do this.

BSJ: Relating to these 20 intermediate-quality devices, what impact does more accurate and localized monitoring have on public health and public health policy?

RC: We believe that we are going to be able to make maps of emissions and exposure that are better than anyone has ever had before, but we’re not there yet. We have a new project using the BEACON network; it’s called the CRAT institute for personal prevention. It’s a collaboration with colleagues in public health where we’re going to think about asthma and exposure in Richmond and the surroundings. We go back and forth a little bit in the project - sometimes this is a CO2 greenhouse gas project and sometimes it is an air pollution public health project. The emissions that cause both those problems all have the same sources. So, in many ways the fundamental science we are trying to address is identical; it’s the applications that are different.

BSJ: Where do you see your research going in the future - with your lab and with the field?

RC: I’m pretty excited about the things we talked about today. They have an intersection, thinking about ways to get space and time resolution measurements of the atmosphere at the scale of the true variability, then using those to address some of the fundamental questions about emissions and chemistry in the atmosphere - that’s one theme.
Figure 3. A sample high-resolution bottom-up emissions inventory for the Bay Area adapted from Turner et al. (2016).
“Building that sort of cooperation-the best things that happen in science are often because two people both sit down and say, I want to try something together.”

We have some other long standing questions we are trying to think about. One is climate-related in the sense that we are trying to think about the role of temperature in changing the chemistry of the atmosphere. How to think about response to changing temperature on different space and time scales. We have some other projects, where we’re thinking about the role of interaction between the atmosphere and forest in the biosphere.
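As a numerical aside on the square-root-of-N advantage Cohen describes for the BEACO2N network, the short simulation below shows how averaging N independent noisy readings shrinks the error of the mean by roughly the square root of N. It is a toy illustration with made-up noise values, not BEACO2N’s calibration procedure; as Cohen notes, the real network distributes this advantage over space rather than averaging identical measurements.

# Toy illustration of the square-root-of-N advantage, with invented numbers.
import math
import random
import statistics

true_co2_ppm = 400.0
sensor_noise_ppm = 2.0   # hypothetical single-sensor precision
n_sensors = 20

def network_mean():
    """Average one noisy reading from each of the N sensors."""
    readings = [random.gauss(true_co2_ppm, sensor_noise_ppm) for _ in range(n_sensors)]
    return statistics.mean(readings)

errors = [network_mean() - true_co2_ppm for _ in range(10000)]
print(f"single-sensor noise: {sensor_noise_ppm:.2f} ppm")
print(f"empirical error of the {n_sensors}-sensor mean: {statistics.stdev(errors):.2f} ppm")
print(f"theoretical sigma / sqrt(N): {sensor_noise_ppm / math.sqrt(n_sensors):.2f} ppm")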
DRAWING THE LINE IN GENOMICS WITH
CRISPR TECHNOLOGY BY MICHELLE VERGHESE
Imagine a world where every mother selects the eye color of the child she is expecting; where every patient diagnosed with terminal cancer no longer receives a death sentence; or where every mushroom has been engineered not to brown. This world, though seemingly dystopian, is inching closer to reality with recent advancements in genomic editing. CRISPR-Cas9, a newly developed genome editing technology, gives scientists the ability to induce certain traits and cure genetic disease by directly editing DNA. Because of its unprecedented precision and simplicity, it is a revolutionary discovery that impacts scientists and everyday people alike.

The field of genome editing, though constantly advancing, is relatively new. Broadly speaking, genome editing involves the insertion, deletion, or replacement of DNA within the genome of a living organism. Most editing techniques utilize engineered nucleases, which are nicknamed “molecular scissors.” These nucleases create double-stranded breaks in the genome, cutting directly through both strands of DNA, which are then rejoined in order to edit the sequence. As researchers continue to improve their understanding of how DNA functions and how it can be manipulated, editing techniques become correspondingly more refined and specific.

For quite a long time, the dominant editing technique was RNA interference (RNAi). RNAi is so named because the process is carried out by two types of small RNA: siRNA and microRNA. These RNAs bind to proteins and form a complex that is targeted to an mRNA sequence, the sequence of nucleotides that will be translated into protein. The proteins then degrade the mRNA, preventing it from being translated.2 This process has been harnessed by scientists; by engineering sequences of siRNA, they can suppress or express a desired phenotype. However, scientists quickly realized that RNAi has plenty of limitations in terms of useful applications. Namely, RNAi’s off-target effects are numerous and challenging to eliminate entirely, because the engineered siRNA can act on non-target transcripts. Additionally, RNAi can only silence genes - when considering gene therapy applications, RNAi cannot induce activation of genes, nor stably introduce gene segments. Other prominent editing techniques include zinc-finger nucleases (ZFN) and transcription activator-like effector nucleases (TALEN). Both, though able to directly edit DNA, lack efficiency and ease of target-sequence design.6
THE TECHNOLOGICAL BREAKTHROUGH
Though editing technologies continue to improve, on the whole they have lacked the potential for broad and useful applications - until now. Named CRISPR-Cas9, this new technique allows researchers to edit DNA at precise locations, modify genes in living cells, and eventually correct mutations that cause genetic disease. CRISPR was initially discovered in archaea and bacteria as part of their immune system. Unlike RNAi, CRISPR directly edits DNA
rather than working as a post-transcriptional modifier - it targets DNA as opposed to RNA or proteins. The process by which CRISPR performs edits centers on guide RNA sequences (gRNA), short nucleotide sequences that are complementary to the target DNA sequence. The gRNA binds to the target sequence and the Cas9 protein binds to the DNA, forming a complex. Cas9 then cuts both strands, and a new sequence can be inserted. Enzymes are used to repair the cuts so that the sequence can reform.1 Another unique feature of CRISPR is the protospacer adjacent motif (PAM), a two to six base pair sequence that CRISPR requires in order to recognize a target DNA site. CRISPR can essentially be directed to any PAM-adjacent sequence, making editing versatile and flexible.3 CRISPR is efficient and specific - it is also simple, in the sense that gRNAs are designed readily and modifications to the Cas system are easily introduced. Additionally, minimizing CRISPR’s off-target effects has been more successful because of the nature of the complex it forms.
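To make the PAM rule concrete, the short sketch below scans a DNA string for sites next to an “NGG” motif, the PAM most commonly used with the Cas9 protein from Streptococcus pyogenes, and reports the 20-nucleotide protospacer a guide RNA would need to match. The toy sequence, the 20-nucleotide guide length, and the single-strand scan are illustrative assumptions; real guide design also weighs the opposite strand, off-target matches, and delivery considerations.

# Minimal sketch of PAM-adjacent target finding; not a lab-grade guide design tool.
# Assumes the common SpCas9 convention: a 20-nt protospacer immediately
# followed by an "NGG" PAM, scanning one strand only.

def find_candidate_sites(dna, guide_length=20):
    """Return (position, protospacer, pam) tuples for NGG PAMs on one strand."""
    dna = dna.upper()
    sites = []
    for i in range(guide_length, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":                        # the "N" can be any base
            protospacer = dna[i - guide_length:i]  # sequence a gRNA would match
            sites.append((i - guide_length, protospacer, pam))
    return sites

if __name__ == "__main__":
    toy_sequence = "ATGCGTACCGGATTACGATCGTTACGGAAGCTTGCATGCCCGGGTACCGAGCTCGAATTC"
    for pos, spacer, pam in find_candidate_sites(toy_sequence):
        print(f"candidate at {pos}: {spacer} | PAM {pam}")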
ENVIRONMENTAL AND MEDICAL APPLICATIONS
CRISPR remains in the preliminary stages of research across the globe, with studies testing its usage in a variety of applications. However, based on the efficiency of the technology, there is certainly much foresight into the potential CRISPR has to modify living organisms. For example, CRISPR could eliminate an invasive species from the planet. Scientists could develop a laboratory strain of the species with some problematic trait such as reduced fertility, and release the strain into the wild in order to slowly eliminate the population.4 The hazards of such an operation include the possibility that off-target mutations could result in the adverse trait manifesting in nontarget organisms, which risks the unintentional global loss of a harmless species. At the same time, if the Aedes mosquito were wiped out in this manner, diseases like Zika and dengue, which are carried by this mosquito and continue to plague many underdeveloped communities, would be greatly suppressed.
Figure 1: The pathway by which CRISPR edits the target sequence
But the conceivable applications of CRISPR don’t stop there. CRISPR could also be very useful in the treatment of genetic disease; Down Syndrome (DS), in particular, has been heavily discussed in this regard. Children with DS have impaired language skills, learning difficulties, and both short- and long-term memory deficits. CRISPR could well be the specific tool required to alter the expression of DS, via either silencing the extra copy of chromosome 21 or editing specific genes associated with DS. Notably, though, this treatment would involve gene editing within the embryo, because the main window to prevent cognitive impairment occurs before birth.10 In addition to DS, CRISPR may also become an efficient form of cancer therapy. After further development, CRISPR could be used to edit immune cells to make them better at fighting cancer, and these cells could be injected into cancer patients. On a similar note, more research on CRISPR could lead to the ability to edit a cell and completely delete the viral DNA that HIV integrates into its genome - thereby curing a patient of a chronic disease.

Figure 2: CRISPR-Cas9 protein 3D structure
THE ETHICAL CONTROVERSY
Regardless of its capacity to treat disease, the usage of CRISPR still raises a lot of ethical questions and brings scientists, lawmakers, and the general public together to discuss how, if at all, CRISPR should be integrated into our lives. The primary ethical question centers on the juxtaposition of healing and enhancement: specifically, if scientists have the
technology to address muscle-related illnesses, for example, they could also improve the strength of a healthy person. Similarly, if researchers can edit cancer genes, they can also, say, edit genes for red hair.7 This generalized ability to edit human phenotypes is overwhelming - as would be expected, the excitement for scientific discovery that accompanies the development of CRISPR is coupled with fear of the massive potential CRISPR holds. The National Institutes of Health released a statement in 2015 indicating that its position is primarily restrictive of the use of CRISPR, especially in embryos. It describes the alteration of the human germline in embryos as “a line that should not be crossed.” It also cites legislation against this process, such as the Dickey-Wicker amendment, which prohibits the use of federal funds for research in which human embryos are created or destroyed. The NIH points to what it sees as a current lack of compelling applications to justify the use of CRISPR in embryos, bringing up issues such as “unquantifiable safety issues” and “affect[ing] the next generation without their consent.”9 Whether this is an astute judgment call is practically impossible to say. As of now, what we can say is that we simply don’t know enough about either the potential benefits of CRISPR or its dangers to make a legitimate case either way. It would seem wise, then, to pay close attention as this scientific
breakthrough continues to grow within the constraints it has been given, and hope that ultimately there will be a way to introduce it into society to cure the sick and save lives, without irreversibly altering humanity in the process.
REFERENCES
1. QUESTIONS AND ANSWERS ABOUT CRISPR. (n.d.). Retrieved February 27, 2017, from https://www. broadinstitute.org/what-broad/areas-focus/project-spotlight/questionsand-answers-about-crispr 2. Unniyampurath, U., Pilankatta, R., & Krishnan, M. N. (2016, March). RNA Interference in the Age of CRISPR: Will CRISPR Interfere with RNAi? Retrieved February 27, 2017, from https://www.ncbi.nlm.nih.gov/pmc/ articles/PMC4813155/ 3. Peters, J. M., Silvis, M. R., Zhao, D., Hawkins, J. S., Gross, C. A., & Qi, L. S. (2015). Bacterial CRISPR: accomplishments and prospects. Current Opinion in Microbiology,27, 121-126. doi:10.1016/j.mib.2015.08.007 4. Webber, B. L., Raghu, S., & Edwards, O. R. (2015). Opinion: Is CRISPR-based gene drive a biocontrol silver bullet or global conservation threat?: Fig. 1. Proceedings of the National Academy of Sciences,112(34), 10565-10567. doi:10.1073/ pnas.1514258112
5. Bosley, K. S., Botchan, M., Bredenoord, A. L., Carroll, D., Charo, R. A., Charpentier, E., . . . Zhou, Q. (2015). CRISPR germline engineering—the community speaks. Nature Biotechnology,33(5), 478-486. doi:10.1038/ nbt.3227 6. Sander, J. D., & Joung, J. K. (2014). CRISPR-Cas systems for editing, regulating and targeting genomes. Nature Biotechnology,32(4), 347-355. doi:10.1038/nbt.2842 7. Park, A. (2016). The CRISPR Pioneers. Time, 188(25-26), 116. 8. Collins, F. S. (n.d.). Statement on NIH funding of research using gene-editing technologies in human embryos. Retrieved from https://www.nih.gov/ about-nih/who-we-are/nih-director/ statements/statement-nih-funding-research-using-gene-editing-technologies-human-embryos 9. Mentis, A. (2016). Epigenomic engineering for Down syndrome. Neuroscience & Biobehavioral Reviews,71, 323-327. doi:10.1016/j.neubiorev.2016.09.012 10. https://www.newscientist.com/ article/2123973-first-results-of-crispr-gene-editing-of-normal-embryos-released/
IMAGE SOURCES
1. http://proteopedia.org/wiki/index.php/DNA
2. http://www.yourgenome.org/facts/what-is-crispr-cas9
3. http://www.cell.com/abstract/S0092-8674(14)00156-1
4. http://www.cell.com/cell/pdf/S0092-8674(15)01705-5.pdf
Figure 3: A map summarizing the international history and development of CRISPR
QUALITY AND VALUE MONITORING OF ACUTE-CARE HOSPITALS
Abstract: Rising medical costs are a real and imminent problem; as a major health care services provider, Medicare covers nearly 16% of the U.S. population. This paper aims to investigate factors that affect the cost and quality of acute-care hospitals reimbursed by Medicare. Multiple datasets from the Centers for Medicare and Medicaid Services (CMS) and the IRS were merged to evaluate the effect of hospital ownership and local socioeconomic levels on hospital performance, based on three main quality metrics: average cost per beneficiary, 30-day readmission rate, and hospital-acquired conditions (HAC) score. Both variables were found to have statistically significant effects. In addition, the results showed that CMS policy may have unfairly penalized hospitals serving more vulnerable populations by using a metric that does not consider hospital location in lower-income neighborhoods.
BY GRACE DENG
LAYOUT BY KORE LUM, GABBY SHVARTSMAN, ALAA ABDELMAGEED
INTRODUCTION
The economist Paul Krugman once said that “the U.S. government is […] best thought of as a giant insurance company with a standing army.” In terms of federal spending, this claim has some merit, since the bulk of government expenditure goes to Medicare, Medicaid, Social Security, and defense. In 2011, the total U.S. population was approximately 311.7 million, 48.7 million of whom were covered by Medicare. Furthermore, the Centers for Medicare and Medicaid Services reported a 5.3% increase in national health expenditure in 2014, which amounts to $9,523 per person and a total of $3.0 trillion, or 17.5% of Gross Domestic Product (CMS, 2016). The problem of rising healthcare costs is exacerbated by parallel growth in out-of-pocket spending, hospital expenditures, physician and clinical services expenditures, and prescription drug spending. The extra costs not covered by Medicare or Medicaid eventually fall on, and burden, patients. It has therefore become very important to use quantitative metrics to evaluate the quality of care provided by hospitals and to maximize the return per dollar spent. For example, encouraging lower readmission or complication rates could reduce the number of claims per patient paid for by Medicare and perhaps keep Medicare solvent for a longer period of time. Meanwhile, peer benchmarking of medical procedure costs could encourage competition between local hospitals to keep prices low.
Traditional rankings or reviews of hospitals and physicians are often subjective; for example, reviews on Yelp are written by previous or current patients. These reviews tend to consist of comments on how friendly the staff was or how patient the doctor was, but they do not include objective measures such as how effective the prescribed treatment was, how accurate the diagnosis was, or how reasonable the cost was. In addition, when patients search for a doctor or hospital through Google or a website like WebMD, they can filter by criteria such as location or specialty (pediatrician, neurologist, or cardiac surgeon), but they are not presented with quality metrics such as 30-day readmission rates or Hospital Acquired Condition (HAC) scores. There has been a joint effort between Medicare and the Hospital Quality Alliance (HQA), the first national public reporting system that provides performance data on each hospital, to promote reporting on hospital quality of care (Jha et al., 2005). As a result, many datasets on quality metrics and hospital comparison are available from Medicare.gov, and we can use descriptive statistics and models to objectively analyze value-of-care: measurable health outcomes per dollar spent (Porter, 2010). These statistics could be further weighted by clinic demographics to avoid bias, since a clinic that accepts riskier patients may have a lower overall recovery rate. Furthermore, we can use statistical tests to evaluate whether policies enacted through the Centers for Medicare and Medicaid Services (CMS)
and the Affordable Care Act (Obamacare) are justified. The goal of this research project is to explore the following questions:
1. Comparing cost per beneficiary and quality of care across different types of hospitals. In 2003, the largest proportion of nonfederal acute-care hospitals was voluntary non-profit, at about 62% (Wikipedia, 2016). The next largest groups were government hospitals (20%) and for-profit hospitals (18%). Identifying which subgroups of hospitals provide better care (measured through variables such as 30-day readmission rates and HAC scores) could directly help two groups of people: policy makers who are deciding which types of hospitals to subsidize or penalize, and patients who are choosing a local hospital for short-term treatments.
2. Effect of socioeconomic status on average hospital cost. Although it is important to identify the “best” hospitals in terms of low cost and high-quality care, these measurements are often affected by patient demographic variables, and simple comparisons between hospital groups can lead to biased conclusions if the socioeconomic status of patients is not taken into account. A model for predicting average hospital cost could prove useful for encouraging peer benchmarking, where more competition between similar hospitals can lead to better services at a lower price. By including the Income Per Capita variable in this model, matched to corresponding hospitals through zip code, we can observe whether there is a statistically significant relationship between the two variables. If so, then perhaps hospitals can be evaluated against “peers” that serve the same patient demographics, which would be more holistic and accurate than comparison against a national average alone.
3. CMS payment reduction against low-performing hospitals. Effective October 1, 2012, the CMS established the Hospital Readmission Reduction Program to record and monitor 30-day readmission rates for hospitals and to impose a penalty on payments to hospitals with excessive readmissions. The metric used to evaluate which hospitals should be fined is the Excess Readmission Ratio, which measures the ratio of predicted readmissions to expected readmissions. When the ratio exceeds 0.97, the CMS imposes a 3% reduction in payments to the hospital (CMS, 2016); a short sketch of this rule follows this list. However, there have also been arguments against the policy, which some claim will target “stand-alone hospitals that treat vulnerable patient populations” and force them to “join systems to help absorb the impact of financial cuts” (McKinney, 2012). Furthermore, the evaluation criterion does not take into account the socioeconomic status of the patients served by each hospital. Small, rural hospitals that are penalized and required to reduce readmissions often serve low-income and elderly patients who lack “community support mechanisms,” and readmissions data collected from these hospitals may be biased by adverse selection.
Hence, it could be worthwhile to split the dataset into two groups, Reduced Payment hospitals affected by the penalty and Full Payment hospitals not affected, and then test whether the differences in average cost, income per capita, and HAC scores between the groups are statistically significant.
4. CMS restrictions on physician-operated hospitals. As part of the Affordable Care Act, the CMS “imposed additional requirements for physician-owned hospitals to qualify for the whole hospital and rural provider exceptions,” which effectively banned the expansion of physician-operated hospitals with very few exceptions (CMS, 2016). The motivation behind these restrictions stems mainly from research investigating the effectiveness of physician-owned hospitals. For example, Congress’s independent Medicare Payment Advisory Commission (MedPAC) found evidence that 48 physician-operated hospitals would take easier cases with healthier patients and higher-paying medical procedures (Rau, 2015). However, it has been over a decade since that report, so it is worth examining whether the differences in cost and quality measures between physician-operated hospitals and all other hospitals are still statistically significant.
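As a brief illustration of the penalty rule described in question 3, the following minimal sketch shows how hospitals could be split into the two payment groups used later in the analysis. The column names and toy values are illustrative assumptions, not the actual CMS field names.

```python
# Hypothetical sketch of the HRRP penalty rule: column names and data are
# illustrative placeholders, not the actual CMS field names.
import pandas as pd

def flag_payment_group(df, ratio_col="excess_readmission_ratio", threshold=0.97):
    """Label hospitals 'Reduced Payment' if the ratio exceeds the threshold."""
    out = df.copy()
    out["payment_group"] = out[ratio_col].gt(threshold).map(
        {True: "Reduced Payment", False: "Full Payment"}
    )
    return out

hospitals = pd.DataFrame({
    "hospital_id": [1, 2, 3],
    "excess_readmission_ratio": [0.95, 0.99, 1.02],
})
print(flag_payment_group(hospitals))
```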
DATA
Four datasets were downloaded from Data.Medicare.gov, titled “Medicare Hospital Spending by Claim,” “Hospital General Information,” “Hospital-Acquired Condition Reduction Program,” and “Hospital Readmissions Reduction Program.” The datasets were then merged into a final dataset by Hospital ID, with a total of 2541 acute-care hospitals and 17 variables, including quality-of-care measurements, hospital costs, and demographic variables for each hospital. It is important to note, however, that there may be a reporting bias in the dataset, since it includes only hospitals that accept Medicare. This means the data are not necessarily representative of all hospitals in the United States. Nevertheless, since most of the hospitals listed are acute-care hospitals, the results could still be extrapolated to acute-care and emergency centers. In addition, we can assume that patients who filed claims from these hospitals are from a certain demographic group, since they are eligible for and covered by Medicare; this means the collected metrics are comparable across hospitals and geographic locations. Due to the federal Emergency Medical Treatment and Active Labor Act (EMTALA), which mandates that all emergency room patients must be offered care regardless of ability to pay, there may also be adverse selection in the hospital data. Patients who are more ill than the average person could be included in the data, since people who ordinarily cannot afford health care or physician visits may be brought into the emergency room. Inclusion of these data could inflate the readmission rate, HAC score, or cost of medical procedures.
Because none of the datasets provided by Medicare.gov scale individual hospital data by the demographics of the neighborhoods they serve, another variable, Income by Zip Code, was included in the analysis. The data were downloaded from the IRS (Internal Revenue Service) website and consist of 114 tax-related variables for 27,790 zip code areas for the year 2013. The variable chosen to represent income per capita by zip code was Adjusted Gross Income (AGI). We can then match the zip code of each hospital’s location with the corresponding AGI and test whether the socioeconomic status of a hospital’s surrounding neighborhood affects the price and quality of care.
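The assembly step described above can be sketched with pandas. The toy frames and column names below are illustrative assumptions rather than the exact headers of the published datasets.

```python
# Illustrative data-assembly sketch: merge hospital-level tables on a shared
# hospital ID, then attach IRS adjusted gross income by zip code.
import pandas as pd

spending = pd.DataFrame({"hospital_id": [1, 2], "avg_cost": [21000, 18500]})
general = pd.DataFrame({"hospital_id": [1, 2],
                        "ownership": ["Non-profit", "Government"],
                        "zip_code": ["94704", "94110"]})
readmissions = pd.DataFrame({"hospital_id": [1, 2],
                             "excess_readmission_ratio": [0.99, 0.95]})
irs = pd.DataFrame({"zip_code": ["94704", "94110"],
                    "agi_per_capita": [52000, 61000]})

merged = (
    spending
    .merge(general, on="hospital_id", how="inner")
    .merge(readmissions, on="hospital_id", how="inner")
    .merge(irs, on="zip_code", how="left")  # match each hospital to zip-level AGI
)
print(merged)
```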
QUESTION 1 -- COST AND QUALITY ACROSS HOSPITAL TYPES
Figure 1: Correlation Heat Map
1. Correlation Heatmap: The figure above shows the heatmap of the correlation matrix between Readmission Rate, HAC Score, and Hospital Avg (the average cost per beneficiary per hospital). If we assume that higher hospital costs result in better-quality care, we would expect a negative correlation between average cost per beneficiary and HAC score or readmission rate. However, the heatmap shows only a very low positive correlation (0.167) between Hospital Acquired Conditions (HAC) score and average cost per beneficiary, suggesting that hospitals that charge more do not necessarily have fewer hospital complications (a low HAC score). This could potentially indicate that when the CMS wishes to penalize hospitals with low-quality care, the HAC score is not necessarily a good reference measure.
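A minimal, self-contained sketch of a heatmap like Figure 1 is shown below; the dataframe holds synthetic stand-in data, whereas the actual analysis would use the merged CMS dataset.

```python
# Synthetic stand-in data; the real analysis would use the merged CMS dataset.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "readmission_rate": rng.normal(15.5, 1.0, 500),
    "hac_score": rng.normal(5.0, 2.0, 500),
    "avg_cost_per_beneficiary": rng.normal(20000, 3000, 500),
})

corr = df.corr()  # pairwise Pearson correlations between the three metrics
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation between cost and quality metrics")
plt.tight_layout()
plt.show()
```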
2. Bubble Plots: The bubble plot on the next page (Figure 2) compares hospitals in each state across three factors: readmission rates, HAC scores, and average total cost per beneficiary claim (proportional to the area of each state’s bubble). The intersecting red lines represent the average readmission rate and HAC score across all states, which conveniently places each state into a quadrant: the upper-right quadrant represents high readmission rate and high HAC score; the upper-left, high readmission rate and low HAC score; the lower-right, low readmission rate and high HAC score; and the lower-left, low readmission rate and low HAC score. Lower readmission rates and HAC scores are preferable, and a smaller circle represents a lower average cost relative to the national average. Based on these quadrants, indicators can be created for hospitals in states that are more preferable. There are some interesting outliers, such as Washington, D.C., in the upper-right quadrant, which suggests that hospitals in D.C. on average have higher readmission rates and a higher number of hospital-acquired conditions, as well as higher-than-average costs. Therefore, assuming that transportation is not a problem, patients in D.C. may find it more desirable to go to a hospital in Maryland or Virginia. (On the plot, we can see that Virginia scores nearly 2 points lower on
Figure 2: Bubble Plot by State
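A plot in the style of Figures 2 and 3 can be produced with matplotlib, as in the self-contained sketch below. The state-level averages here are synthetic placeholders; bubble area is scaled by average cost, and the red reference lines mark the national averages that define the quadrants discussed above.

```python
# Synthetic state-level averages; bubble area is proportional to average cost.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n_states = 51
hac = rng.normal(5.0, 1.0, n_states)          # average HAC score per state
readmit = rng.normal(15.5, 0.7, n_states)     # average readmission rate (%)
cost = rng.normal(20000, 2500, n_states)      # average cost per beneficiary

plt.scatter(hac, readmit, s=cost / cost.mean() * 200, alpha=0.5)
plt.axvline(hac.mean(), color="red")          # national-average reference lines
plt.axhline(readmit.mean(), color="red")      # define the four quadrants
plt.xlabel("Average HAC score")
plt.ylabel("Average 30-day readmission rate (%)")
plt.show()
```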
Figure 3: Bubble Plot by Hospital Type
The bubble plot in Figure 3 compares hospitals of each ownership type across the same three factors: readmission rates, HAC scores, and average total cost per beneficiary claim, which is proportional to the area of each bubble. We can see that, at roughly the same average readmission rate, government-operated hospitals charge less on average than proprietary or even most voluntary non-profit hospitals, based on the size of the corresponding bubbles. Although it seems reasonable that proprietary hospitals would charge more on average due to profit-seeking behavior, it is surprising that non-profit hospitals’ costs are similar in amount. To formally test whether the cost differences between hospital types are statistically significant, a two-sample t-test would need to be performed. An interesting outlier is the Tribal hospitals, which have a very low readmission rate but a high HAC score; however, there was only one data point for this type, so it is not very informative.
3. One-way ANOVA: The Hospital Ownership variable in the dataset lists 10 different types of acute-care hospitals, which can be simplified and merged into four main categories: Government Operated, Non-profit, Proprietary, and Physician Operated. Tribal hospitals were eliminated because there was only one data point. In this case, a one-way ANOVA can be performed to check whether the different types of hospitals charge the same amount per beneficiary on average. The one-way ANOVA is appropriate here because it generalizes the two-sample t-test to multiple groups and, by avoiding many pairwise comparisons, reduces Type I error (Wikipedia, 2016). To check whether the normality and constant-variance assumptions of the one-way ANOVA have been met, a QQ plot and a scatter plot of the residuals were created (Figure 4). The residuals appear normally distributed, with a few outliers at the right tail. The residual plot shows mostly uniform random scatter centered about 0, with no discernible nonlinear shape or funnel shape suggesting heteroskedasticity, so we can also assume constant error variance.
Figure 4: QQ and Scatter Plot of ANOVA Residuals
H0: The mean cost per beneficiary is equal across ownership types of acute-care hospitals.
HA: The mean cost per beneficiary is not equal across all ownership types of acute-care hospitals.
α = 0.05
Table 1: One-way ANOVA Hospital Ownership vs. Cost
Given a p-value of 7.818e-14, we reject the null hypothesis and conclude that different types of acute-care hospitals do not charge equally per beneficiary. Observing the group means for each type of hospital, Government Operated hospitals charge the least on average per beneficiary. In addition, Non-profit hospitals cost more on average than both Government Operated and For-Profit hospitals, which is surprising, since a major argument for non-profit hospitals is that they have responsibilities as charities and serve the community rather than focusing only on financial stability. Intuitively, we would expect privately owned, profit-seeking hospitals to cost more per beneficiary, since previous research has shown that these hospitals tend to specialize in more profitable services, such as open-heart surgery, which are reimbursed at higher rates by insurance companies (Horwitz, 2005). Whether these pairwise cost differences are statistically significant would require more specific comparisons (such as two-sample t-tests). Further investigation into how non-profit hospitals are operated financially may be needed to determine whether this major subgroup of acute-care hospitals (64.3% of the dataset) is the best method of delivering high value-of-care.
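The one-way ANOVA and the residual diagnostics described above could be run as in the following sketch; the ownership groups, group sizes, and cost values are synthetic placeholders rather than the actual merged dataset.

```python
# Synthetic placeholder data; the real analysis would use the merged dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
groups = ["Government", "Non-profit", "Proprietary", "Physician"]
sizes = [500, 1600, 430, 10]
means = [18000, 21000, 20500, 20000]
df = pd.DataFrame({
    "ownership": np.repeat(groups, sizes),
    "avg_cost": np.concatenate(
        [rng.normal(mu, 3000, n) for mu, n in zip(means, sizes)]
    ),
})

model = smf.ols("avg_cost ~ C(ownership)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # F statistic and p-value for ownership

# Assumption checks (cf. Figure 4): QQ plot of the residuals
fig = sm.qqplot(model.resid, line="45", fit=True)
```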
QUESTION 2 -- EFFECT OF INCOME PER CAPITA ON AVERAGE HOSPITAL COST
To observe how average hospital cost is affected by Income Per Capita by Zip Code, a log-log OLS regression was performed. Quality metrics such as readmission rates and HAC scores were also included, as well as indicators to control for factors such as hospital ownership and whether the hospital offers emergency services. From the regression output (Figure 5), the coefficient for Income Per Capita is positive and statistically significant (p-value < 2e-16), meaning that as the average income in a neighborhood increases, so does the average cost charged by hospitals in that neighborhood. From an economic perspective, hospitals can afford to raise prices on a high-earning population with more disposable income without losing too much demand. Although the model explains only 17.8% of the variation in average cost per beneficiary (adjusted R² = 0.178), it is still useful in showing the importance of including socioeconomic status when analyzing hospital performance.
Figure 5: Regression Output
Due to the Emergency Medical Treatment and Active Labor Act, hospitals that provide emergency services are often accused of charging other patients high costs to balance out the patients who cannot afford to pay but must be provided care under the law. However, it is interesting to note that the Emergency Services indicator coefficient is not statistically significant in Figure 5, suggesting that hospitals with an ER do not overcharge after controlling for other variables.
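Under the same caveats, a log-log specification like the one summarized in Figure 5 could be fit with statsmodels as sketched below; the variable names and synthetic data are assumptions, not a reproduction of the actual regression output.

```python
# Synthetic data and illustrative variable names for a log-log OLS regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
income = rng.lognormal(mean=10.8, sigma=0.4, size=n)
df = pd.DataFrame({
    "income_per_capita": income,
    "avg_cost": np.exp(0.2 * np.log(income) + rng.normal(7.5, 0.3, n)),
    "readmission_rate": rng.normal(15.5, 1.0, n),
    "hac_score": rng.normal(5.0, 2.0, n),
    "emergency_services": rng.integers(0, 2, n),
    "ownership": rng.choice(["Government", "Non-profit", "Proprietary"], n),
})

fit = smf.ols(
    "np.log(avg_cost) ~ np.log(income_per_capita) + readmission_rate"
    " + hac_score + emergency_services + C(ownership)",
    data=df,
).fit()
print(fit.summary())  # the coefficient on log income is the cost elasticity
```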
QUESTION 3 -- CMS HOSPITAL READMISSION REDUCTION PROGRAM PENALTY
As part of the Hospital Readmission Reduction Program, the CMS imposes a 3% reduction in payments to hospitals that have excessive readmissions following conditions such as heart failure and acute myocardial infarction. Because the metric used, the Excess Readmission Ratio, does not weigh factors such as socioeconomic status, we want to know whether the penalty disproportionately affects hospitals that suffer from adverse selection and serve poorer neighborhoods. Given that a hospital is penalized if its Excess Readmission Ratio exceeds 0.97, there are 1713 hospitals that receive reduced payments and 818 hospitals that receive full payments from Medicare in the dataset. We can start by comparing the average Income by Zip Code for the neighborhoods of the two hospital subgroups, Reduced Payment and Full Payment. In this case, a two-sample t-test is not appropriate because Income by Zip Code is not normally distributed, as seen in the histograms in Figure 6. As an alternative, we can perform the Mann-Whitney U test, which does not require the normality assumption and is almost as efficient as a two-sample t-test when the data are in fact normally distributed. One of the assumptions of the Mann-Whitney test is that the observations in the two hospital groups are independent. It seems reasonable to claim that this assumption holds in our scenario, since the Income by Zip Code for each hospital and the Excess Readmission Ratio, the measurement used to split the dataset, should be independent; the Reduced Payment and Full Payment groups should therefore not have been pre-sorted into higher- and lower-income groups. To check this formally, the Pearson product-moment correlation coefficient between the two variables was calculated to be 0.00449, with a t-value of 0.2258 and a p-value of 0.821. At a significance level of 5%, we fail to reject the null hypothesis of zero correlation, so there is no evidence of statistical dependence between the two variables.
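The independence check above amounts to a Pearson correlation test, which could be run as in the following sketch (with synthetic stand-in data):

```python
# Synthetic stand-in data: income and Excess Readmission Ratio, unrelated by
# construction, mirroring the independence check described in the text.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
income = rng.lognormal(10.8, 0.4, 2531)
excess_ratio = rng.normal(1.0, 0.05, 2531)

r, p_value = stats.pearsonr(income, excess_ratio)
print(f"r = {r:.5f}, p = {p_value:.3f}")  # large p-value: no evidence of dependence
```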
Figure 6: Histograms of Income Per Capita by Zip Code
Mann-Whitney U test: Reduced Payment vs. Full Payment Hospitals
H0: The neighborhood income distributions for Reduced Payment and Full Payment hospitals are equal.
HA: The neighborhood income distribution for Reduced Payment hospitals has lower mean ranks (the location shift is negative) than that of Full Payment hospitals.
α = 0.05
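A test of these hypotheses could be carried out with SciPy as sketched below; the two income samples are synthetic placeholders sized like the Reduced Payment and Full Payment groups.

```python
# One-sided Mann-Whitney U test on synthetic income samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
reduced_income = rng.lognormal(10.7, 0.4, 1713)  # penalized hospitals
full_income = rng.lognormal(10.9, 0.4, 818)      # non-penalized hospitals

# alternative="less": Reduced Payment incomes are shifted lower (HA above)
u_stat, p_value = stats.mannwhitneyu(reduced_income, full_income,
                                     alternative="less")
print(f"U = {u_stat:.0f}, p = {p_value:.4g}")
```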
Table 2: Mann-Whitney U Test
With a p-value less than 0.05, we reject the null hypothesis and find evidence that hospitals penalized by the CMS serve neighborhoods with lower average income per capita than those not penalized. This means that the payment reduction disproportionately affects hospitals located in poorer neighborhoods. Since people with higher incomes tend to be able to afford health care before reaching age 65 (when people start qualifying for Medicare), patients from richer neighborhoods tend to be healthier. A healthier patient demographic could artificially lower the number of readmissions and complications in hospitals serving high-income areas, granting these hospitals an advantage when the CMS penalty is evaluated by Excess Readmission Ratios.
Two-sample t-tests: Reduced Payment vs. Full Payment Hospitals
H0: The true difference in means between Reduced Payment and Full Payment hospitals is 0.
HA: The true difference in means between Reduced Payment and Full Payment hospitals is less than 0.
α = 0.05
In the first t-test, we reject the null hypothesis with a p-value of 0.004552 and conclude that the average cost per beneficiary is lower in Reduced Payment hospitals than in Full Payment hospitals. In the second t-test, we fail to reject the null hypothesis with a p-value of 0.7991 and find no evidence of a significant difference between the HAC scores of Reduced Payment and Full Payment hospitals. These two results, combined with that of the Mann-Whitney test on neighborhood income, shed some interesting light on the CMS Hospital Readmission Reduction Program.
First, hospitals affected by payment reductions are at a disadvantage in the evaluation process because they serve lower-income neighborhoods with a patient demographic that potentially has less access to health care over their lifetimes and is less healthy overall than its counterparts in higher-earning neighborhoods. At the same time, these Reduced Payment hospitals charge less per beneficiary on average than Full Payment hospitals, which, combined with payment reductions from Medicare, results in even less overall revenue. Reduced income could prevent these hospitals from hiring more physicians and medical staff, upgrading medical equipment, and expanding facilities to better serve more patients. Finally, because the Excess Readmission Ratio is only one of many hospital quality metrics, it is worth considering whether the penalty can be justified by comparing another metric, such as the HAC score, between the two hospital groups. However, the difference in hospital-acquired conditions between the penalized and non-penalized hospitals is not statistically significant. Overall, the CMS should consider reweighting the Excess Readmission Ratio by demographic and socioeconomic factors, or using a more holistic evaluation process to identify low-performing hospitals and encourage reductions in readmission rates.
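The two-sample comparisons in this question (and the analogous ones in the next) could be sketched as below with a Welch t-test, which does not assume equal variances; whether the original analysis assumed equal variances is not stated, so that choice is an assumption here, and the cost samples are synthetic placeholders.

```python
# One-sided two-sample (Welch) t-test on synthetic average-cost samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
reduced_cost = rng.normal(19500, 3000, 1713)  # Reduced Payment hospitals
full_cost = rng.normal(20200, 3000, 818)      # Full Payment hospitals

# alternative="less" matches HA: Reduced Payment mean cost is lower
t_stat, p_value = stats.ttest_ind(reduced_cost, full_cost,
                                  equal_var=False, alternative="less")
print(f"t = {t_stat:.3f}, p = {p_value:.4g}")
```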
Table 3: Two-Sample t-tests for Reduced Payment vs. Full Payment Hospitals
QUESTION 4 -- CMS RESTRICTIONS ON EXPANSION OF PHYSICIAN-OPERATED HOSPITALS
Two-sample t-tests: Physician Hospitals vs. All Other Acute-Care Hospitals
H0: The true difference in means between Physician Operated hospitals and all other hospitals is 0.
HA: The true difference in means between Physician Operated hospitals and all other hospitals is not equal to 0.
α = 0.05
Table 4: Two-Sample t-tests for Physician Hospitals vs. Other Hospitals
From the table above, we can compare whether there is a significant difference in average cost and quality between physician-operated hospitals and other types of hospitals. It is important to note that the data were not sampled randomly, since they were collected as a survey; however, the results can still be useful in analyzing whether the CMS was justified in banning physician hospitals from expanding facilities and serving more patients. Observing the column of p-values, we fail to reject the null hypothesis for all three t-tests, which compare average cost per beneficiary per claim, readmission rate, and HAC score between the two groups of hospitals. This suggests that physician-operated hospitals are not overcharging their patients on average, and that the quality of care they provide is not significantly different from that provided by other hospitals, such as non-profit or government-operated ones. Of course, it is also important to note that there were only 10 physician-operated hospitals in the dataset, possibly a consequence of the restrictions enacted through the Affordable Care Act in 2012. More research should be conducted by the CMS to fully evaluate the effectiveness of physician hospitals and to enact appropriate policies that encourage higher value-of-care.
CONCLUSION
This paper explores how cost per beneficiary and quality of care vary between different types of acute-care hospitals, as well as the effect of exogenous variables such as the socioeconomic status of the neighborhoods and the income per capita of the patients served by the hospitals. The results of these evaluations were then used to identify drawbacks and potential areas of improvement in two CMS policies involving payment reductions and hospital expansion restrictions. Due to limited data availability, the analysis of performance was not extended to the medical specialty or individual physician level. Further research in this area could prove useful to all parties in the healthcare industry: individual patients would be able to pinpoint the physician or specialty group that best suits their medical needs, the CMS could implement reward systems for high-quality care
independent of overall hospital performance, and insurance companies could prioritize reimbursements for procedures with a record of positive measurable health outcomes per dollar spent.
REFERENCES
“Analysis of Variance.” Wikipedia. Wikimedia Foundation, n.d. Web. 04 Aug. 2016.
Horwitz, J. R. “Making Profits And Providing Care: Comparing Nonprofit, For-Profit, And Government Hospitals.” Health Affairs 24.3 (2005): 790-801. Web. 1 Aug. 2016.
Jha, Ashish K., Zhonghe Li, E. John Orav, and Arnold M. Epstein. “Care in U.S. Hospitals — The Hospital Quality Alliance Program.” New England Journal of Medicine 353.3 (2005): 265-74. Web.
McKinney, Maureen. “Hospitals Prepare for Readmissions Rate Penalties.” Modern Healthcare. N.p., 29 Sept. 2012. Web. 02 Aug. 2016.
“NHE Fact Sheet.” Centers for Medicare & Medicaid Services. N.p., n.d. Web. 05 Aug. 2016.
“Non-profit Hospital.” Wikipedia. Wikimedia Foundation, n.d. Web. 04 Aug. 2016.
“Physician-Owned Hospitals.” Centers for Medicare & Medicaid Services. N.p., n.d. Web. 05 Aug. 2016.
Porter, Michael E. “What Is Value in Health Care?” New England Journal of Medicine. 23 Dec. 2010. Web. 20 May 2016.
Rau, Jordan. “Doctor-Owned Hospitals Are Not Cherry-Picking Patients, Study Finds.” Kaiser Health News. N.p., 03 Sept. 2015. Web. 01 Aug. 2016.
“Readmissions Reduction Program (HRRP).” Centers for Medicare & Medicaid Services. N.p., n.d. Web. 01 Aug. 2016.