Note from the Editorial Board

Dear Reader,

As scientific and technological progress marches steadily onwards, it is an increasingly common mistake to compartmentalize the world into two overly simplified camps: science and everyday life. Self-professed members of both groups tend to forget that the barriers constructed between academic disciplines are mere constructs to better organize the collection and dissemination of information. Nature does not see walls between Wilder, Steele, or Gilman, nor does it promise the knowledge gathered within those buildings exclusively to white-coated researchers. The science of the day-to-day is often forgotten, but in this increasingly technologically dominated world, ignorance can mean being left behind. Thus, it is important to acknowledge the essential science that is all around us, even as Dartmouth students.

Seemingly simple processes such as the bells of Baker-Berry, described by Andy Zureick ‘13, or the snow and ice covering the Green, described by Yuan Shangguan ‘13, are actually governed by complex physical and chemical properties that took centuries to elucidate. This daily communion with science is our societal inheritance from scientific forebears, but also our way of coping with stress and demands as students. This is further evidenced by articles regarding both alcohol, by Jay Dalton ’12, and caffeine, by Will Heppenstall ’13. Even the most overlooked and ostensibly personal occupations, such as daydreaming, written about by Emily Stronski ’13, and our mood in relation to the food we consume, written about by Sarah-Marie Hopf ’13, have delicate, and often elegant, scientific explanations.

It is our sincere hope that you enjoy reading this issue of the DUJS, and that, as always, you take part in our mission to bring science out of the acetone-washed laboratories and into your minds as readers.

Sincerely,
The DUJS Editorial Board
The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
President: Hannah Payne ‘11
Editor-in-Chief: Jay Dalton ‘12
Managing Editors: Jingna Zhao ‘12, Sarah-Marie Hopf ‘13, Daniel K. Lee ‘13
Design and Layout Editor: Diana Lim ‘11
Online Content Editor: Runi Goswami ‘13
Publicity Chair: Victoria Yu ‘12
Secretary: Aravind Viswanathan ‘12
Event Coordinator: Jaya Batra ‘13

DESIGN STAFF
Shaun Akhtar ‘12, Matthew Curtin ‘14, Brenna Gibbons ‘12, Clinton F. Grable ‘14, Yoo Jung Kim ‘14, Aaron Koenig ‘14, Diana Lim ‘11, Victoria Madigan ‘14, Bradley Nelson ‘13, Diana Pechter ‘12, Ian Stewart ‘14, Andy Zureick ‘13

STAFF WRITERS
Prashasti Agrawal ‘13, Jaya Batra ‘13, Runi Goswami ‘13, Thomas Hauch, Jr. ‘13, William Heppenstall ‘13, Sarah-Marie Hopf ‘13, Daniel Lee ‘13, Diana Lim ‘11, Shu Pang ‘12, Krupa Patel ‘13, Diana Pechter ‘12, Medha Raj ‘13, Michael Randall ‘12, June Yuan Shangguan ‘13, Emily Stronski ‘13, Aravind Viswanathan ‘12, Victoria Yu ‘12, Jingna Zhao ‘12, Andrew Zureick ‘13

FACULTY ADVISORS
Alex Barnett (Mathematics), William Lotko (Engineering), Marcelo Gleiser (Physics/Astronomy), Gordon Gribble (Chemistry), Carey Heckman (Philosophy), Richard Kremer (History), Roger Sloboda (Biology), Leslie Sonder (Earth Sciences), Megan Steven (Psychology)

SPECIAL THANKS
Dean of Faculty, Associate Dean of Sciences, Thayer School of Engineering, Provost’s Office, Whitman Publications, Private Donations, The Hewlett Presidential Venture Fund, Women in Science Project
Fall 2010
DUJS@Dartmouth.EDU
Dartmouth College
Hinman Box 6225
Hanover, NH 03755
(603) 646-9894
http://dujs.dartmouth.edu

Copyright © 2010 The Trustees of Dartmouth College
In this Issue...

DUJS Science News, Andrew Zureick ‘13 (p. 4)

Interview with Physicist Mary Hudson, Andrew Zureick ‘13 (p. 6)

Reflections of an Erstwhile Journal Editor and Writer, Christopher Dant, PhD (p. 9)

Caffeine and Naps: The Fight Against Sleep Deprivation, William Heppenstall ‘13 (p. 13)
Schedules packed with academic, extracurricular, and social obligations make sleep deprivation a fact of life for many Dartmouth students. Although college students require more sleep than most other age groups, few undergraduate students sleep for nearly the full nine and a quarter hours prescribed.

The Science Behind Social Networking, Medha Raj ‘13 (p. 16)

You Are What You Eat: How Food Affects Your Mood, Sarah-Marie Hopf ‘13 (p. 18)

The Physiology of Stress: Cortisol and the HPA Axis, Michael Randall ‘12 (p. 22)

“I’ll Blitz You Later”, Thomas Hauch, Jr. ‘13 (p. 25)

Deep Below the Snowy Surface, Yuan Shangguan ‘13 (p. 28)

Alcohol from Hydroxyl to Culture, Jay Dalton ‘12 (p. 29)

Vibrations Surround Us: The Science of Music, Andrew Zureick ‘13 (p. 32)

Science of Daydreaming, Emily Stronski ‘13 (p. 36)

Turning Waste into Food, Jingna Zhao ‘12 (p. 38)

Research

The Development of a Framework to Assess the Cost Benefit of Energy Conservation, Nozomi Hitomi ‘11 (p. 41)
A new framework has been developed to assess and validate energy conservation measures in residential homes. The framework provides a systematic way to identify the most cost-effective energy conservation measure for a specific building. This paper discusses the development of such a framework and presents a case study using this framework.

Visit us online at dujs.dartmouth.edu
DUJS Science News
Compiled by Andrew Zureick ‘13
See dujs.dartmouth.edu for more information.
Biology
Temperature extremes affect PGI and physiological performance
Santa Clara University biology professor Elizabeth Dahlhoff lectured at Dartmouth as part of the biology department’s Fall 2010 Cramer Seminar Series. Her research focuses on temperature adaptation and climate variation, and she has recently studied the herbivorous willow leaf beetle in the High Sierra region of California. Because the beetles are ectotherms, their internal body temperature, genetic diversity, and performance are all linked to their thermal environment. Dahlhoff looked at the “1” and “4” alleles of the polymorphic gene for the enzyme phosphoglucose isomerase (PGI), which converts glucose-6-phosphate to fructose-6-phosphate in glycolysis. The beetles’ performance in extreme climates is related to how well they cope, and the different forms of PGI are related to expression of heat shock proteins, which recover and refold proteins that unfold in extreme temperatures. The “1” allele is cold-adapted and the “4” allele is warm-adapted. Dahlhoff also studied how the gene itself changes physiological performance. When exposed to either -4°C (cold extreme), 20°C (normal), or 36°C (hot extreme), the 1-1 beetles were able to run faster than the 4-4 beetles, but upon a second extreme exposure (-4°C or 36°C), the 4-4 beetles were able to up-regulate heat shock proteins and increase running speed. In addition, 1-1 beetles laid more eggs in colder areas, while 4-4 beetles laid more eggs in warmer sites. Dahlhoff claimed, “The ultimate fate of organisms relies on their ability to adapt to extremes throughout the year.”
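For reference, the reaction PGI catalyzes is a standard early step of glycolysis (textbook biochemistry rather than a finding from the talk); written as an equation:

```latex
\text{glucose-6-phosphate} \;\rightleftharpoons\; \text{fructose-6-phosphate}
```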
Image retrieved from http://i271.photobucket.com/albums/jj127/Aegolius/Insects/Larose0262.jpg (Accessed 2 Nov 2010)
A willow leaf beetle.
Technology

Radiologists increasingly use computer-aided detection for mammography interpretation
A team of Dartmouth Medical School researchers, led by Tracy Onega, recently conducted a study on radiologists’ perception of computer-aided detection versus double reading for mammography interpretation. Their findings were published in Academic Radiology. The traditional method of double reading by a second clinician in screening mammography interpretation may soon be replaced by computer-aided detection (CAD) of breast cancer. Despite the diffusion of CAD in clinical practice, there is limited and contradictory information as to its relative harms and benefits. Thus, the different perceptions of CAD and double reading are likely to have implications for the variability and performance in mammography interpretation. The researchers mailed a survey to 257 community radiologists to assess their perceptions and practices regarding CAD and double reading. The data was analyzed by classifying the radiologists’ perception of both practices based on their relative agreement or disagreement with several statements specific to CAD and double reading. The results showed that 64% of radiologists used CAD for more than half the screening mammograms they interpreted while fewer than 5% said the same for double reading. However, more radiologists believed that double reading improved cancer detection rates as compared to CAD. At the same time, more radiologists thought that CAD decreased recall rates compared to double reading. Perhaps not surprisingly, it was also found that those radiologists with the most favorable perceptions of CAD had significantly higher uses of CAD, greater workload in screening mammography, academic affiliation, and fellowship training.
Image retrieved from www.medicexchange.com/news (Accessed 2 Nov 2010)
Mammography image.
Medicine

When vision is on the line: initial treatment of pigmentary glaucoma

David Campbell, an ophthalmologist at Dartmouth-Hitchcock Medical Center, addresses the challenges of treating the second most common cause of blindness worldwide in his publication, “Initial Treatment of Pigmentary Glaucoma.” Glaucoma, a condition involving damage to the optic nerve, is typically the result of fluid pressure build-up in the eye resulting from blockage of aqueous humor flow between the cornea, the iris, and the lens. The progression and causes of glaucoma vary between patients, as glaucoma encompasses many diseases. Campbell describes treatment options for a hypothetical patient experiencing glaucoma as a result of pigment dispersion syndrome, in which fluid circulation is inhibited by eye pigment flaking off the iris. According to Campbell, reducing intraocular pressure must be a priority in treating the glaucoma, to reduce the possibility of further damage to the patient’s field of vision. To bring about such a reduction, Campbell recommends initially prescribing a beta-blocker, a class of drugs that reduces intraocular pressure by lowering production of aqueous humor. Should symptoms persist after the beta-blocker treatment regimen, the patient would be placed on a miotic, which contracts the pupil, restores the flow of aqueous humor, and flattens the iris in order to prevent further pigmentary flaking. Finally, Campbell suggests surgery such as laser iridectomy as an alternative to medication.
Image retrieved from www.webmd.com (Accessed 2 Nov 2010).
Eye of an individual with glaucoma.
Brain Sciences

Image courtesy of Creative Commons (Attribution-ShareAlike)
Serotonin neurons are important in modulating respiratory activity and are highly sensitive to stress.
Consequences of gestational stress on respiratory control development
Laval University professor of pediatrics Richard Kinkead recently spoke at Dartmouth on his research in gestational stress (GS). Kinkead claimed that while the neuroendocrine response of releasing cortisol under stress prepares us for “fight-or-flight,” excessive cortisol levels can be harmful. In the case of a pregnant mother, high cortisol levels are linked with preterm birth, which can contribute to birth defects. These include respiratory instability, apnea, and bradycardia, a slow-beating heart. The fetus is very sensitive to its environment during its rapid growth, and because its environment is filled with hormones and other chemicals, it is prone to morbidity from maternal GS. Kinkead studied rat models, inducing stress periodically by exposing pregnant mothers to the odor of a predator. Pups were tested after they were born; Kinkead found that GS lowers birth weight and blunts the response to insufficient oxygen. This suggests GS may play a role in sudden infant death syndrome (SIDS). These results prompted further experimentation on 5-HT (serotonin-producing) neurons, which are important in modulating respiratory activity and are highly sensitive to stress. The results showed that GS significantly affects the development of these 5-HT neurons, perhaps through a reduction in serotonin. Kinkead plans to study the effects of GS into adulthood.
Chemistry

Wu develops streamlined synthesis of allylic thioethers

Image courtesy of Jimmy Wu
Proposed mechanism for allylic thioether formation.

Chemistry graduate student Forest Robertson and professor Jimmy Wu recently published a paper in Organic Letters. The paper focused on the synthesis of allylic thioethers from phosphorothioate esters and alcohols. Thioethers are important biological and pharmaceutical agents, and Wu and Robertson were able to create thioethers in a single step by adding an exogenous alkoxide. No malodorous sulfur-containing compounds were required in the thioether synthesis. The formation of carbon-sulfur bonds is an active area of research, and their finding helps fill a relevant gap. Interestingly, the presence of the C-S bond over C-C or C-O bonds can enhance functionality. For example, “a single sulfur atom bonded to at least one allyl side chain” found in diallyl sulfide inhibits the formation of cancerous cells. Additionally, the chemotherapy drug doxorubicin depends on thioether linkages between doxorubicin and a specific antibody for its antitumor properties. Wu and Robertson assert that the reaction proceeds via an SN2 mechanism, which begins with the alkoxide attacking the phosphorothioate ester to create both a phosphate and a thiolate. The thiolate then displaces the phosphate to yield the intended thioether. Wu and Robertson support the SN2 conclusion with a nearly perfect yield of a stereoselective reaction. This improved method provides a way to use readily available material to form molecules essential to the activity of many cancer-fighting drugs.
Interview

From Disney to Dartmouth: An Interview with Space Plasma Physicist Mary Hudson
Andrew Zureick ‘13
The DUJS talked to Mary Hudson, professor of physics and instructor of popular introductory physics courses, many of which are open to both undergraduate and graduate students, to gain insight into her life as a researcher, mentor, and teacher.
What was your path to becoming a professor at Dartmouth?
I knew just from when I was a little girl that I wanted to be a physicist, though I don’t think I really knew what that meant at the time. I had seen this wonderful program when I was a child called “Our Friend the Atom” by Walt Disney. Back before PBS and Nova, Walt Disney would have a science special once a month. I got the book that went with the program and got very excited about what I had perceived as physics and an understanding of the physical world. I went on to major in physics at UCLA, and right at the time I graduated—I graduated early, in March—I had six months before starting graduate school. I had already decided to go to graduate school in physics, but I hadn’t really decided what kind of physicist I wanted to be. I had been working with the experimental cyclotron group at UCLA as an undergraduate. Because I graduated early, I interviewed with some California aerospace companies to find a six-month job. I went to work for The Aerospace Corporation, which does the technical management of the Air Force space program. It is a non-profit corporation chartered by Congress, sort of like NASA for the Air Force. I was working with a research group there and I just had gotten really interested in space physics; I knew when I started graduate school that I wanted to do this. I grew up in the era of Sputnik and Apollo, so I was primed to be excited about space. I had decided to stay on at UCLA because they had a really good program in space physics and plasma physics, the study of ionized
Image retrieved from www.nasaimages.org (Accessed 2 Nov 2010)
Schematic of the Van Allen radiation belts.
gases, which is what most of the space near earth is. I pursued my PhD in space physics at UCLA; I actually stayed on for two years at the Aerospace Corporation, during my first two years of grad school, because they were giving me research projects, and I thought, “Well, this is pretty exciting stuff!” They were really at the cutting edge out there, so I stayed with them while I was taking classes. In the process of working on my research project for my PhD, I started interacting with the experimental group at UC Berkeley at the Space Sciences Laboratory. The group has now built its own satellites. Even back then the group was involved in a lot of NASA projects, so I decided that it was where I wanted to go after earning my PhD. I went up to Berkeley for a postdoc, and I wound up staying there for 10 years. I built my own research group and I got funding from NASA, NSF, and NOAA, the National Oceanic and Atmospheric Administration. I taught a couple of classes at UC Berkeley, and some classes at Mills College in Oakland. I knew I wanted an academic
position, so when the opportunity for the position at Dartmouth came along, I applied. Professor Sonnerup over at the engineering school was on a search committee for this department. Whenever Dartmouth has a “senior hire,” they tend to have people on the search committee not just in one department but somebody from another related department. At that time, professor Agnar Pytte was leaving this department to become Provost of the College; he was a plasma physicist. Then, the junior plasma physicist Chris Celata decided to go to Berkeley. I applied for the position here, and David Montgomery applied at the same time. We were hired as two new people in plasma physics; my focus really is on space plasma while David has worked with both space and laboratory plasmas. We both came in 1984, and it was pretty exciting to be starting up a new area within the department focused on space physics. Before I came here, no one was doing space physics in this department. Professor Sonnerup had been doing space physics over at the engineering
school, and that’s how he knew me and encouraged me to apply. It really makes a difference when someone encourages you to apply for a position. You take it more seriously. So, I came in 1984, and my husband Bill Lotko came at the same time. He is a professor at the engineering school now, so we were both able to work out appointments here at Dartmouth in two separate departments. That’s how I got to be here at Dartmouth!
What does your current research focus on?
I study the near-earth space environment and the effect of solar activity on the near-earth space environment. In particular, right now I’m focusing on the Van Allen radiation belts. I didn’t always work just in this one area. I started out working on disturbances of the equatorial ionosphere that affect communications near the equator, and then I migrated to high latitude and worked on auroral processes, and then lately—over the last decade or so—I’ve been working on the radiation belts and how they are affected by solar activity. They’re very dynamic, actually. Particle flux levels have been relatively quiet the last couple of years because we’ve been at solar minimum, but now we’re starting to come out of solar minimum. Sunspot numbers are increasing and solar activity is increasing. By 2012-2013, we’ll be at another solar maximum. I’m involved in a number of research projects that are aiming to do experiments at that time. Professor Robyn Millan, in this department, has a balloon program that’s going to launch balloons at that time. I have a meeting I’m going to tomorrow [May 26, 2010] in Minnesota that pertains to one of the instruments on a pair of satellites that will be up at that time.
Can you tell me a little about the upper level physics courses you teach?
I taught Physics 13 and 14—Introductory Physics I and II—last year, and I’ll be teaching P14 next spring. This year, I taught a sequence of three courses that I do every other year. Physics 68 in the fall is the Introductory Plasma Physics course, so ionized gases
in space and in the laboratory. In the winter, I taught a graduate level course that follows on to that, Kinetic Theory of Ionized Gases. Then this spring, I taught Physics 74, Introduction to Space Physics. Both the fall and the spring courses are open to undergraduates as well as graduate students, and typically I have some of both in those courses.
As a professor, how do you inspire students?
I like to think I get students excited about my research. Yesterday was particularly fun for me in the P74 class; it was my last regular lecture of the term. I gave the students a long version of a talk that I’m giving tomorrow [May 26, 2010] at the University of Minnesota, describing the dynamics of the radiation belts. I think I can be particularly inspiring when I’m talking about my own research, although I try to give them the broader picture of things. I’ve also engaged undergraduate students in research. I have a Presidential Scholar working for me and have had students work during off terms for me. Right now I have two graduate students for whom I’m the principal supervisor.

How broad a range of careers do physics majors pursue?

We usually have 12 to 22 physics majors I would say, usually in the high teens. About a third of them go directly to graduate school. Umair is going to the University of Wisconsin, we have one senior going to Berkeley, another going to Caltech, and that’s pretty typical. Then, others in the group are going to Penn, Yale... good places. A third of them will take a year off and work, think about going to graduate school, and try to get a better grip on what it is they want to do long term. Probably a third of them go to a professional school: law school, medical school, and so on. That’s kind of a broad cut through what our students do. Of those who work for a year, some go to graduate school in physics and some go to graduate school in other areas.

Can you describe the role of a faculty member in an undergraduate’s research experience?

Well, it depends on the project. My work is computational and compares results with data taken by satellites. I tend to have to encourage students to develop their computer skills. I encourage them to take a course like ENGS 20, for example, which is a good course at the engineering school—Introduction to Scientific Computing (from the point of view of engineering). Other people in the department who are doing experimental work have students actually involved in hardware building and designing. I was on a thesis committee yesterday; senior Umair Siddiqui worked with Professor Kristina Lynch, who has a large plasma tank in the building (Wilder). He designed an electron and ion gun that can inject a beam of particles into a preexisting ionized gas, which simulates what happens in the aurora when auroral electrons hit the atmosphere and the ionosphere. They are going much faster than the average particles in the atmosphere and ionosphere. They’re trying to simulate that in the laboratory, and they use it to calibrate instruments that are flown on rockets in Alaska and Norway and so on. He designed and built this amazing, amazing device.

What are some of the big recent discoveries in physics?
Oh wow, that’s a harder one to answer! I’m very familiar with the discoveries in my own field. In physics in general, there are going to be some amazing discoveries. Every time you build a new instrument, every time you put new instruments on satellites, you discover new things. New instruments are like opening up a new piece of the electromagnetic spectrum that had not been seen before. The same is true for particle physics; there are going to be some amazing discoveries with the Large Hadron Collider that’s just started in Switzerland. But in space physics—for example, with the earth, we have had a lot of spacecraft around the earth over the past 40 years, but where we’re making advances is with higher and higher resolution measurements and better spatial resolutions. Most of the early measurements were made just with single spacecraft where you don’t get any space-time
resolution because you just have one point measurement. Now, recently, there have been spacecraft including the THEMIS spacecraft that the Berkeley group built and flew in the last few years. There are five spacecraft that are designed to study relaxation oscillations, called substorms, of the magnetosphere, the region of space occupied by the earth’s magnetic field. The basic internal mechanisms of the magnetosphere where these relaxation oscillations are triggered were not well-known, because we only had single point measurements before the THEMIS satellites. A lot of progress has been made with that particular program. Of course I was excited by the measurements two solar cycles ago of how dynamic the earth’s radiation belts are that were made with a combined NASA-Air Force satellite called CRRES. It has been two solar cycles—20-some years—since we’ve had satellites in that particular orbit. We tend not to fly satellites through the radiation belts that we want to survive very long because it’s a very hazardous place for electronics and so forth. We have not had satellites exploring that region now since 1990-91, and that is the region we are going to explore again in two years with the Radiation Belt Storm Probe
satellites that I’m involved with. I think we’re going to learn a lot because there is a pair of satellites rather than just a single one, and so we’ll get more information from the pair where we can study spatial gradients and separate out something that is just a boundary in space from something that is changing in time. I can’t really speak for condensed matter—a whole other area of physics that people in the department are engaged in—but I’m pretty excited about the measurements that are going to be made in the next couple of years in my area.
What do you enjoy doing outside of academic work?
I really enjoy hiking. My older daughter, an ‘06, went to Nepal two years ago and did the Annapurna Circuit. She talked my husband and me into going to Ladakh, in the Indian Himalayas. We went trekking in Ladakh last August and went over three passes between 15,500 and 16,500 feet. It was just an amazing experience; it’s not just the mountains and the altitude and so on, but just seeing a completely different part of the world like Ladakh. We also got some appreciation of the tensions of Northern India, because this particular region is surrounded by Pakistan, the Xinjiang province in China, and Tibet on three sides. It’s a spectacularly beautiful region. In the wintertime, I like backcountry skiing and winter hiking.
Do you ever plan to or want to travel to outer space?
No, I think I will not be going along for the ride—that is a little too high altitude for me at this point, but I do think that your generation will. Commercial access to space is coming along, and people of your generation will go to space just like I went to India last year. The more adventurous people will be doing that in their lifetimes. I envy you!
Science Writing

“Your Thoughts Are Like a String of Pearls”
Reflections of an Erstwhile Journal Editor and Writer
Christopher Dant, PhD
“Your thoughts are like a string of pearls!” This was the reaction to the first draft of my research paper, provided by a Senior Editor of JAMA for whom I apprenticed some 30 years ago on my first job out of academia. I smiled and enthusiastically thanked him—after all, I worked hard on that paper. “You misunderstand! By that, I meant that your thoughts are like a string of loose unconnected sentences, much the same in form throughout, and, well, otherwise boring, confused, and difficult to understand.” The words cut into me, the young scientist-apprentice. After all, I was a trained scientist, and had written several journal articles and grants in academia. Surely, I had learned something about writing from this experience. I struggled for some comprehension—was it possible I didn’t know how to write? Perhaps my editor could be wrong and another opinion would be more favorable. But he wasn’t wrong and another opinion would have just confirmed the diagnosis—bad writing! The physician-editor sat me down and meticulously went through his
marks—there was a sea of red on every page (yes, this was before computers!). Through the painful lessons, I had obviously forgotten some basic principles from my freshman writing classes. “Omit needless words! Vigorous writing is concise. A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts!” This was the advice essayist and editor E.B. White took from his Cornell English professor William Strunk. “The Elements of Style,” from which this was taken, was one of the first books I had read in writing class. I had forgotten its advice. “Clarity, clarity,” was the constant exclamation from my freshman English professor. My thinking and subsequent prose had become muddled, riddled with redundancies, and confused. “No good editor will accept this paper!” he warned, looking at me sternly over the top of his reading glasses. “But with enough work, you’ll learn how to write well the first time...now, go back and start over with an outline!” I walked away with my proverbial tail between my legs. I thought, an outline, start over…why should I do that?
Image retrieved from halcyonhealth.ca (Accessed 31 Oct 2010).
To be a good writer, you have to be a good reader...of good writing. Fall 2010
Underneath, however, I felt somehow hopeful that he was willing to help me. I was bound not to repeat the failure. But I did. Many times. This could be a big problem for my career, I thought. Why can’t I express my thoughts clearly? Eventually and slowly, I learned how to write clearly through much laborious practice and making mistakes repeatedly—especially, I learned by how easily my readers could grasp what I had meant to communicate. But it was hard work. I tried reading books on writing; there were so many of them, but I quickly learned that they were mostly not helpful and you can’t learn to write by following the advice in a manual, no matter what the advice. In the many intervening years working with young (and experienced) students and faculty, I learned that many investigators never had a mentor that helped them with their writing—or, the advice given them was simply misleading and not helpful. In fact, I have been told by some investigators and students that their mentors believe that writing can’t be taught, and you either sink or swim on your own. This large misconception stems from an ignorance about writing that has been pervasive in academia since I can remember. Over the years that I have taught courses about scientific writing, it continues to amaze me how many scientists struggle with basic writing, many aspects of which can easily be taught. I have frequently pondered why scientists write poorly. One universal tendency is that scientists often equate long, complex sentences and paragraphs with deep thinking. But the simple fact is that ‘academic’ puffery—stilted, complex, and confused writing—is misunderstood by the reader and doesn’t serve the author. For example, consider this sentence, part of an abstract, sent to me by a prominent scientist (their final version): “The influence of age (younger vs older) has been reported recently
for multiple sclerosis disease in the context of a more rapid clinical response.” This sort of writing from a confused author also confused the reader, who tried to guess what she really meant. When I asked the author to clarify this sentence, she said, “Well, I think it was clear! We found that our younger multiple sclerosis patients—those under 55 years—had a more rapid clinical response compared to the older patients (over 55 years of age). That’s what it says!” Her stubbornness and emphatic stance confirmed for me what I already knew: many authors are insecure about their writing because it is a personal activity, and they copy the academic language that they see in many journals and other scientific forums—much of which is confusing and difficult to understand. Layer on that confused thinking in the first place. Unfortunately, this author ignored my advice and the abstract was rejected by the editors of the scientific congress. She realized her untenable position and we rewrote the abstract together and it was eventually published. And she learned something about writing, a much more valuable lesson. Unfortunately, many like-minded scientists end up with rejections from a journal or a funding agency because of confused and disorganized writing, which delays the dissemination of important scientific findings to their colleagues. This problem is entirely avoidable. Thus, to all you budding scientists and young investigators out there who wish to learn how to write well, I offer some of my advice from experience of working with students and investigators at all levels at many different institutions. It’s advice that most respected journal editors and good writers will also give you. At the risk of oversimplifying, here are several cardinal rules of writing that I hope you will take to heart.
1. To be a good writer, you first have to be a good reader
Especially of good writing. Unfortunately, much of the literature is poorly written. “There is no form of prose more difficult to understand and more tedious to read than the average
scientific paper,” wrote Francis Crick in 1994. The co-discoverer of the structure of DNA was acknowledging what everyone in science knows: research papers can be a nightmare to read. What should you read? I suggest something basic, not directly in your field of expertise, because you’ll separate your critical scientific thinking when you read the paper. Papers in Nature, Science, Cell, and The New England Journal of Medicine are known sources of good writing, among many others. And there’s a host of specialty journals—most fields have journals that are considered top in their field, some not so. Frankly, reading good literature in general is essential to re-training your brain to good English construction and style–The Great Gatsby is, in my opinion, a gold standard of outstanding writing. So, read well-written material often–at least once or twice a week.
2. Know your reader
Most writers write for themselves, ignoring their overarching goal–writing for the reader. Without considering who will read your article, you will not first consider how fully you should explain more difficult concepts, what figures or tables to include, what terms and concepts you need to define. In some very specialized journals, this may seem obvious. However, with readership worldwide and science becoming increasingly cross-disciplinary, writers must consider a wider audience than might seem obvious. Before you start typing from that blinking cursor on your screen, consider your audience– many journals publish this information in their instructions online. Talk to an experienced peer reviewer of the journal, if possible, to better understand your readership. It will serve you well.
3. Outline your work
You remember how to do this from your freshman English class. Without an outline, you’ll walk through a minefield of disorganized and wayward thoughts. Starting to write a paper with that blinking cursor is nothing short of trying to build a house without a blueprint of design and foundation. The end product ends up a mess of disorganized and illogical thoughts, redundancies,
and irrelevant material—all of which makes the work much harder for the writer, sometimes impossible. One of my mentors at Stanford, an outstanding writer, told me “The best way to edit a disorganized paragraph is to just swipe through it and hit DELETE.” Many of the papers and grants I receive for review often require substantial rewriting to untangle a mess of long, illogical paragraphs, redundancies, and confused concepts. Unfortunately, once the ideas are put down for consideration, untangling them and reorganizing the paper takes far longer, is tortuous, and often results in an inferior product compared to what an outline would have produced—just like a poorly designed house with disorganized spaces, layout, wiring, and plumbing. Some of the paragraphs just have to be torn apart and reordered, compounding the writer’s problems, creating a patchworked nightmare, and costing valuable time. A prominent journal editor once told me, “In my experience, no experienced researcher writes a grant or paper without a good outline.” Follow this recommendation!
4. Never write the research paper in the same order it’s presented
Starting with your abstract and moving to introduction, methods, etc., is a waste of time and will create more work for you. In writing a paper, especially in making the outline, authors discover new ideas and may take different directions. Abstracts are written last. Also, don’t fret over your title. Start with a working title if necessary, but you’ll refine it once you finish so it can be more sculpted to your paper’s purpose. My advice, and that of many journal editors I have worked with, is to start with your figures and tables. Consider your data, talk to your colleagues, think about what the data are telling you, and then create your results. I often print my figures and tables and spread them out in front of me—what are they telling me? In your outline, you start framing your findings carefully. I say carefully because so many writers take the lazy route and end up regurgitating what’s obvious from the figure or table, wasting valuable space but, more importantly, insulting the reader’s
intelligence. If you have carefully crafted your figures and tables—and this means going through many revisions—your reader will be able to immediately understand them. Point out for them what’s not obvious. Look through some top-level journals in your field—or better yet, outside of your field—and see how they do it. You’ll see some pretty sophisticated figures and tables that stand on their own and are clear at first read—they’ve gone through countless edits by the authors and journal editors alike. Once you frame your findings using your figures and tables, and from your outline, frame your discussion and introduction next. The methods section often can be put together anytime, but it usually will need refining once you finish your results. The discussion is critical to a paper, and so many investigators make the mistake of going off on sidetracks not relevant to their central hypothesis and findings; many times, authors will make conclusions not clearly supported by their results. This is a deal-breaker for journal editors, and can often be a central reason for the paper’s rejection. Discussions should put your findings in the context of other research findings, discuss weaknesses, and especially tell the reader what’s next. Research is not carried out in a vacuum—your findings always suggest future studies, and it’s important to tell your reader what you plan to do now that you’ve gotten these results. Also, the introduction often suffers, largely because writers have not outlined their thoughts, and they end up writing an exhaustive background, some of which is not relevant to the problem. Your introduction should be short and strong—a precise background and significance that follows a logical framework: what is the problem, what do we know, what are the gaps in our knowledge, what is my hypothesis, and how am I going to fill that gap to help solve the problem? And, most importantly, why is this important? Take a clue from the NIH—the new grant structure now must include a separate section, Significance, in which you must detail why this problem is important to human health and disease. You then end the introduction with a clear and short statement of objective like “Here, our objective was to….”
5. Revision is at the heart of good writing
Put the paper draft away for a couple of days. When you re-read it, I guarantee you will find basic errors, many redundancies (which should have been minimized by your outline), and confusing sections. Think of your reader when you’re revising—who are my readers and what do they need to know? Also, give your paper to a colleague—it’s a necessary part of revising. A different point of view, whether you agree with it or not, always refreshes your perspective. You think of yourself as an “independent” investigator, but that does not mean you should be working in a vacuum. Do not exclude your colleagues’ ideas! And in revising your work, learn to cut ruthlessly. Most papers I edit are 20% to 30% too long, with so many redundancies and convoluted sentences that the author did not see—tracked changes help them see, but learn to carefully edit your own work. Sometimes, swiping through a tangled paragraph and hitting the DELETE key is necessary!
6. Take some lessons from professional writers
Scientists often ask me how I write. I find the time—usually I schedule the time—then pick a quiet place to write, free from distractions, close my door, and decide on a goal. “Today, I will write my results and discussion from my outline” might be a good goal. But under no circumstances will I open my paper when I have only a few minutes and try to do any serious work on it. Good writing requires dedication, concentration, and time. Unfortunately, many scientists try to write a paper in one sitting, go through one or two cursory edits on their own, and send it into the journal, all within perhaps a few days. Most good papers require weeks to write and will undergo many revisions—sometimes 15 drafts and other authors’ input and consideration. But remember, if you’re the paper’s first author, it is your solemn responsibility to take all your other authors’ input, consider them for inclusion or not, and assure that the paper holds together with all the additions and deletions. I have seen some big papers turn into a nightmare of confused paragraphs and differing
styles, which sometimes is unrecoverable, and the paper has to be rewritten largely from scratch. Don’t go there.
7. When faced with problematic passages and confused writing, read it out loud to yourself or a colleague
Linguistic research confirms that seeing and hearing what you’ve written will help clarify the difficulties. And, when speaking the thoughts before they are put on paper, especially because the writer is not trying to wordsmith the writing to impress their reader, the thoughts often flow more naturally and easily. In an elevator, a colleague asks William, a young scientist, about his recent study on medical curriculum. “What did you find in your study, Bill?” “Basically, we found... that medical teachers... of undergraduates tend... not to let students look after the difficult patients.” Later that evening, William sits down at his computer and writes: “The present analysis confirmed the hypothesis that clinical instructors of undergraduate medical students would rather choose education instructional techniques limiting active student involvement in patient-care activities when faced with problematical clinical situations.” It probably took Bill a long time to write that sentence and I’m sure he felt particularly gratified at its complexity and seemingly deep thinking. But it just confused the reader, who became increasingly annoyed with Bill’s convoluted writing. Often, scientists can more easily express their thoughts through speaking. Writing those verbalized thoughts down usually makes it easier to navigate through the many complex ideas and thoughts, especially when they have already been outlined. So, the next time you are navigating through your cumbersome prose, stop and read it aloud. You’ll more clearly see the problems.
Image retrieved from www.cell.com (Accessed 2 Nov 2010)
The journal Cell has launched a new format for their online presentation of research articles.
8. Learn the elementary rules of punctuation and use them to your advantage
Skillful punctuation is the backbone of good writing. As a classic example, if you read “Woman without her man is nothing,” you’d wonder if it should be punctuated as “Woman, without her man, is nothing.” Or, “Woman: without her, man is nothing.” Skillful use of semicolons, commas, parentheses, dashes, and periods can clarify a confused sentence or string of tangled thoughts, assuming the message is all there. In general, I would say that most often, scientists need to insert more periods and make their sentences shorter and more direct. For a good summary of punctuation, see “The Elements of Style” at www.bartleby.com/141/
9. Learn to be visually literate

From the beginning of time, humans have communicated visually and have learned to interpret, negotiate, and make meaning from complex information presented in the form of an image. Visual literacy is based on the idea that pictures can be “read” and that meaning can be communicated through a process of reading. I believe that in communicating science, particularly in an increasingly complex world of subspecialized ideas and language, becoming more visually creative will serve you well. One of the first telling aspects of a manuscript or
a grant to a reviewer is its figures and tables. But many investigators think only of the obvious ways to display complex information. There are many creative ways to simplify or convey complex mechanisms of action, study designs, and other concepts visually. This has been recognized by some journals as paramount. The highly respected journal Cell has launched a new format for their online presentation of research articles. This “Article of the Future” offers a visual display of the authors’ complex ideas in a visual abstract that helps readers easily grasp the points of the paper. I believe it’s the future of publishing. Especially in grants, visually representing the study’s progress in a simple chart of milestones and a timeline, or explaining the complex organization of work among multiple laboratories and investigators, impresses reviewers. Such visual literacy, I believe, also helps writers think through complex ideas. Drawing it in some sort of graphical format will help to clarify your thought. And clarity of thought is what clear writing is all about. Without it, a writer remains tangled in his own muddled thoughts. Where does this leave us? Writing is a very personal activity, much like drawing or playing a musical instrument—the writer and artist learn much the same way: trying different approaches, making mistakes, and ultimately, through practice, becoming more and more proficient. The scientist who wishes to communicate
through the written word must also practice frequently, but they must have help, much like from a music teacher, to point out their mistakes and help them improve. Unfortunately, not all scientists have a mentor to help them. It is my hope that this advice helps you, the budding scientist, to improve, or gives you some push to do more. But find a trusted colleague anyway and work together to read each other’s work, form a journal writing club, anything to get feedback on your writing. And don’t get me wrong: writing is hard work for the novice and experienced writer alike. With a lot of practice, you’ll eventually get to that more confident place, a place in which your writing really sings with simple and lucid sentences and paragraphs, and tells your reader everything that you meant to say.
__________________________ Guest columnist Christopher Dant is a faculty member at the Dartmouth Medical School and Norris Cotton Cancer Center. He teaches students, postdoctoral fellows, and faculty how to write, and works with faculty on their grants and papers.
Neurology

Caffeine and Naps: The Fight Against Sleep Deprivation
William Heppenstall ‘13
Schedules packed with academic, extracurricular, and social obligations make sleep deprivation a fact of life for many Dartmouth students. Although college students require more sleep than most other age groups, few undergraduate students sleep for the full nine and a quarter hours prescribed by Cornell sleep expert and author of Power Sleep, James Maas (1). College students use many different methods to combat their perpetual sleepiness. Two of the most popular methods for keeping students alert and productive are caffeine consumption and naps.
Consequences of Inadequate Sleep

Other than daytime sleepiness, acute sleep loss correlates with decreased neurobehavioral performance, including the inability to focus, poor memory retention, and moodiness (2). Sleep deprivation can also increase stress levels and weaken the immune system. Sleep deprivation leads to an increase of proinflammatory cytokines, chemical messengers that cause tissue inflammation. High levels of these cytokines are also “associated with an unfavorable metabolic profile, a higher risk of cardiovascular adverse effects, and decreased longevity” (2). Consequently, sleep loss over extended periods of time can lead to increased risk of heart attack, stroke, diabetes, and depression (3). With busy schedules, college students often have a hard time catching up on their sleep. According to J. T. Szymczak of Nicolaus Copernicus University in Poland, the popular technique among college students of catching up on sleep on the weekends does not eliminate sleep deficit (3). Therefore, since many Dartmouth students live continuously with a lack of sleep, they search for ways to limit the negative short-term effects of their sleep debt.
Image courtesy of Kendrike, photographer
Over 80 percent of adults consume caffeine daily through coffee or tea.
Consumption of Caffeine to Combat Sleepiness

Many Dartmouth students rely on caffeine, the most commonly used psychotropic drug in the world, to prevent sleepiness (4). Over 80 percent of American adults consume it daily through coffee or tea (5). Americans use caffeine for a variety of reasons, one of which is the feeling of arousal or rejuvenation that the drug can create (6). Among college students, caffeine is often deliberately used for this physiological purpose. Adenosine, an adenine molecule attached to a ribose sugar, is continuously created while humans are awake. As humans stay awake longer, more adenosine is created, filling the adenosine receptors in the brain (7). As increased numbers of these receptors are filled by adenosine, nerve cell activity is slowed down and drowsiness increases. When caffeine enters the body, it also binds to these adenosine receptors. However, when caffeine binds to the adenosine
receptors, it actually has the opposite effect of adenosine; it speeds up nerve cell activity (7). When the body’s regulators notice this abnormal increase in brain activity, the pituitary gland responds as if there is an emergency and begins to produce adrenaline, which increases heart rate and blood sugar levels (7). In addition to increasing alertness and reducing sleepiness, the increased nerve cell activity and higher adrenaline levels impact human cognition (5). Although caffeine-induced stimulation can reduce a person’s ability to complete complex tasks, caffeine can also improve performance on less engaging tasks. This is because complex tasks sufficiently stimulate the brain alone, and the addition of caffeine can cause excess stimulation, lowering cognitive performance. Consequently, caffeine overdoses can reduce cognitive performance. However, low doses of caffeine allow individuals, especially sleep-deprived individuals, to better complete tasks that are not highly stimulating. In the same scenario, caffeine can increase memory recall
and storage. For doses less than 300 milligrams (and higher for more frequent consumers), studies have also shown that caffeine tends to elevate mood without increasing anxiety (5).
Napping

Another method used by college students to combat sleepiness is taking a nap. Although the appeal of a nap is undeniable among sleep-deprived individuals, recent research suggests that napping, if done correctly, has a definite impact on alertness, concentration, and information storage, especially under sleep-deprived conditions (8). A 2007 study by the Centre for Sleep Research at the University of South Australia found that a nap during a night shift helped to limit the decline of worker performance and alertness (9). Each participant of the study was randomly selected to engage in a thirty-minute nap during his or her simulated nightshift. During the simulated nightshift, performance was assessed using reaction time tests and sleep latency tests to quantify productivity and sleepiness. Participants who had the thirty-minute nap during their nightshift maintained more consistent performance and lower sleepiness throughout the simulated nightshift (9). Another study, conducted by the Department of Preventive and Social Medicine at the University of Otago in New Zealand, demonstrated similar results, showing that twenty-minute naps could counteract performance decline among nightshift workers. In a two-week study involving male aircraft maintenance engineers, a twenty-minute nap was shown to increase performance on a computerized neurobehavioral test battery (10). Shorter naps, usually lasting ten to twenty minutes, were found to be the most effective in both studies (9,10). Brief naps only allow the body to enter stage one sleep, the drifting-off period, and stage two sleep, an intermediate stage between the first stage and deep sleep. Once the body enters stage three sleep, or deep sleep, it becomes far more difficult for the body to wake back up. Therefore, after longer naps, many people often suffer from sleep inertia, the prolonged drowsiness that the body undergoes as it transitions from
deep sleep to its awakened state (11). Due to the impact of sleep inertia, studies like those performed by the Centre for Sleep Research indicate that naps shorter than thirty minutes most effectively improve alertness. By increasing levels of cortisol, a stress hormone produced by the adrenal gland that boosts blood sugar levels, and decreasing proinflammatory cytokines, short naps help alleviate drowsiness and lack of focus (11). Furthermore, 1996 research from the Karolinska Institute in Stockholm, Sweden demonstrated that naps tended to have stronger effects on individuals suffering from a sleep deficit, which suggested that short naps could be beneficial in restoring college students to near-baseline performance and efficiency (8). Another study showed that a twenty-minute daytime nap had a more significant impact on daytime sleepiness than adding an extra twenty minutes onto a longer nighttime sleep (12).
can actually provide some of the benefits of a longer night’s sleep in much less time. Napping can help to consolidate information into declarative memory, the portion of the long-term memory used to recall explicit facts and memories (13). Although they may seem counterproductive when trying to study or memorize material, naps “appear to facilitate the formation of direct associative memories” (13).
The Caffeine Nap

Studies from the Sleep Research Laboratory of Loughborough University in the United Kingdom suggest that naps and caffeine can be combined to combat sleep deprivation in a way that can be more effective than either used separately. Twelve graduate students with healthy sleeping patterns participated in an experiment in which they performed weekly two-hour simulated driving tests that measure afternoon sleepiness by counting the number of at-risk incidents during the two hours. When the participants were limited to five hours of sleep the previous night, there was an increase in sleepiness as detected by the simulated driving test. In the following weeks, the participants
Image retrieved from http://www.fromsingletomarried.com/wp-content/uploads/2008/10/alarm-clock-istock-photo.jpg (Accessed 2 Nov 2010).
Longer time spent awake and influences from the body's circadian rhythm.
were randomly selected to take thirty-minute breaks between a preliminary fifteen-minute driving period and the longer two-hour session. During the thirty-minute breaks, some participants consumed caffeine and took a nap, some participants only consumed caffeine, and others did neither (placebo). The results from the experiment indicated that caffeine consumption followed by a brief fifteen- to twenty-minute nap was the most effective way to keep drivers alert and at lower risk of incidents like swerving out of a lane. In each half-hour section of the two-hour driving test, the participants with the caffeine and nap performed significantly better than the participants who only consumed caffeine. Since it takes nearly 20 minutes for the body to feel the physiological effects of caffeine consumption, a short nap during that time period allows an individual to receive the best of both methods (13).
Summary

Many Dartmouth students are engaged in a daily battle to get the appropriate amount of sleep. Taking naps and consuming caffeine are two of the most common methods that Dartmouth students use to help limit the negative effects of their lack of sleep. Caffeine, a stimulant usually consumed through certain beverages, binds to adenosine receptors in the brain and causes increased nerve cell activity and production of adrenaline (7). These two physiological effects stimulate the brain and influence human cognition and emotions. Caffeine can help elevate mood and also increase cognitive performance (5). Although short naps only marginally affect long-term recovery from a sleep deficit, they can be useful in the short term, helping students not only stay alert and productive but also convert information from short-term to long-term memory (8,13). Furthermore, it is also helpful to consume caffeine and then quickly take a brief nap. This approach to fighting off sleepiness combines the positive effects of both methods and is even more beneficial than either one on its own (13).
Image retrieved from http://upload.wikimedia.org/wikipedia/commons/4/4c/Caffeine-2D-skeletal.png (Accessed 2 Nov 2010)
Structure of caffeine.

References
1. J. B. Maas, Power Sleep (Random House Inc., New York, 1998).
2. H. Lau, M. A. Tucker, W. Fishbein, Neurobiology of Learning and Memory 93, 554-560 (2010).
3. B. M. Altevogt, H. R. Colten, Eds., Sleep Disorders and Sleep Deprivation: An Unmet Public Health Problem (National Academy of Sciences, Washington D.C., 2006).
4. P. B. Dews, Ed., Caffeine: Perspectives from Recent Research (Springer-Verlag, Berlin, 1984).
5. G. A. Spiller, Ed., Caffeine (CRC Press, New York, 1998).
6. A. B. Ludden, A. R. Wolfson, Health Educ. Behav. 37, 330-332 (2009).
7. B. B. Fredholm, Exp. Cell Res. 316, 1284-1288 (2010).
8. M. Gillberg, G. Kecklund, J. Axelsson, T. Akerstedt, Sleep 19, 570-575 (1996); R. Tremaine et al., Appl. Ergon. xxx, 1-10 (2010).
9. M. T. Purnell, A. M. Feyer, G. P. Herbison, J. Sleep Res. 11, 219-227 (2002).
10. A. N. Vgontzas et al., Am. J. Physiol. Endocrinol. Metab. 292, E253-E261 (2007).
11. J. Horne, C. Anderson, C. Platten, J. Sleep Res. 17, 432-436 (2008).
12. A. Takashima et al., PNAS 103, 756-761 (2006).
13. L. A. Reyner, J. A. Horne, Psychophysiology 34, 721-725 (1997).
Psychology
The Science Behind Social Networking
Medha Raj ‘13
Humans have always expressed their social nature through reliance on groups. By pure necessity, social networks were created as a means to share experiences, needs, and desires. In fact, the concept of social networking, the forging of relationships between different groups of people, has strong roots in other species as well. According to Eric Clemons, a professor at the Wharton School of Business, "social networks are familiar to all who study primates, from baboon troops and gorilla and chimpanzee groups to human societies of all levels of cultural development" (1). In the past few years, with the creation of Myspace in 2003 and Facebook in 2004 and the development of Twitter in 2006, the rules of social networking have continued their rapid rate of change (2). In the past and even now, social networking has been partially marked by the amount of material goods one possesses. The current era of social networking can be attributed to the development of the Social Age and the modernization of technology. Prior to the Industrial Revolution, communication was both difficult and slow. As people migrated from the countryside to the cities, communication, even among friends, was uncommon and expensive. It was not until the Second Industrial Revolution, beginning around 1875, that technology and social networking began to change, making communication cheaper and more convenient. The advent of electricity, automobiles, and telephones made communication more affordable. The popularization of the internet around the turn of the 21st century led to the creation of many internet-based forms of communication and social status markers such as wikis, blogs, and online profile websites. Our generation, the Net Generation, also known as Generation Y, has added a new dimension to social networking via the internet (3).
Online Friends

Common markers of social networking and social status have long been ownership of material goods, such as cars and houses, and the number and type of friends that one possesses. In the Internet Age, this has not really changed. Perhaps the most ubiquitous social networking website is now Facebook. Mark Zuckerberg, a Harvard undergraduate, created Facebook in 2004 as a hobby. Initially, Zuckerberg and his friends had printed facebooks, which contained pictures of all their dormmates. Expanding on this concept, Zuckerberg created a website to serve as a virtual facebook (4). On Facebook, friends are easily made by simply requesting someone's friendship; the receiver must accept the friendship invitation. However, unlike relationships formed in person, relationships formed on Facebook tend to be more impersonal, more flippant, and less emotional. Updates allow friends to keep track of one another, belonging to a particular group signals interests, and the number of friends one possesses may be an indication of one's social status (5). In research published by the Department of Communication at Michigan State University, college students were asked to view Facebook profiles that were completely identical except for the number of friends that the person had: 102, 302, 502, 702, or 902. Interestingly, when asked to rate the target's social attractiveness, participants gave the highest ratings to the profile with only 302 friends. By this finding, having too many friends might appear desperate rather than popular (6). In the 1990s, Robin Dunbar, an anthropologist at University College London, proposed that the maximum number of stable relationships that a person could have was 150 (7). The new age of social networking challenges this hypothesis. The average
Facebook user has 130 friends, and many have substantially more than this. This calls into question the stability and strength of these supposed relationships, and their validity as indicators of social status (8).
The Impacts of the New Age

The new age of social networking has significantly impacted young people. Past forms of social networking, such as face-to-face interactions or sending snail mail, were no doubt more personal than current social networking. Online social networking allows individuals to make contact with a wider range of potential social contacts, but the impact of these new modes of communication has generated mixed opinions about whether the change is beneficial or costly. Although obviously different, online interactions share some similarities with face-to-face interactions. Research conducted by the psychology department of the University of Virginia indicates that higher positivity and lower negativity predicted a larger number of friends online. In a study of 172 adolescents and their social interactions online, the researchers discovered that there was a strong overlap between real-life interactions and those online. Self- and peer-reported positive friendship quality was associated with more supportive comments online, and self-reports of positivity were associated with a larger number of friends (9). This new form of online social networking might actually be beneficial to shy adolescents. Scott Caplan, an associate professor at the University of Delaware, suggests that the more impersonal nature of online interactions might aid the formation of relationships, or at least the strengthening of networks between different people. After all, websites such as Facebook allow people to select what personal information they would like to post.
Websites such as Facebook provide some degree of anonymity, create space between the creator and the viewer, and allow shy individuals to exercise greater control over others' impressions of them, according to Professor Caplan. These factors are enhanced by a greater ability to fabricate and exaggerate the positive aspects of one's self online. In his 2003 paper, Caplan suggested that a preference for online social interaction might result from one's perception that online communication is easier, less risky, and more exciting than face-to-face communication (10). However, researchers at the University of Windsor in Ontario have discovered that the effect of online social networking on shy individuals is not significant: shy individuals do not seem to gain an advantage socially over the internet. Shy individuals were found to spend more time on websites such as Facebook, and reported more favorable opinions about such websites than did non-shy individuals. Interestingly, even online, shy individuals still had fewer friends than did more confident individuals. Perhaps the most distinguishing factor between shy and non-shy individuals in terms of their online social interactions is that non-shy individuals do not tend to substitute Facebook usage for other forms of social interaction; these individuals tend to use online social networking to maintain relationships and share information (11).
The Demographics of Social Networking Through the Internet

Social networking via the internet has had different effects on different segments of the population. For example, particular demographic groups are more likely to depend on the internet to form relationships. In a study conducted by Janis Wolak, Kimberly Mitchell, and David Finkelhor at the Crimes against Children Research Center at the University of New Hampshire, the researchers used telephone interviews to gather information from a national sample of 1,501 young people ages 10 to 17 in order to determine the individuals most susceptible to forming online relationships (12). The results of the study showed that non-Hispanic white boys were about twice as likely to form close online relationships as boys belonging to minority racial and ethnic groups. The study also indicated that being highly troubled was a motivating factor for forming social networks online for both girls and boys. In fact, boys whose parents were oblivious to their friends and problems were more likely to use online relationships as a source of comfort; for girls, conflict with their parents was a highly motivating factor for forming online relationships (12). For these individuals, the internet can serve as an important source of social support.
Building Social Capital through Networking
Images retrieved from www.wikimediacommons.org (Accessed 31 Oct 2010).
Facebook, Twitter, and Myspace are some of the leading social networks.
There is little doubt that online social networking is another way of building social capital. The information posted, the number of friends, the kinds of friends, and the groups that a person joins are indicative of an individual's social worth, and are important factors in a person's feelings of confidence and well-being. In a 2007 study of 286 undergraduate students, researchers Nicole Ellison, Charles Steinfield, and Cliff Lampe at Michigan State University measured the effect of Facebook usage on social capital. Findings indicated that
Facebook use improved social capital, bringing benefits such as increased access to information and opportunities. Although more research is required, the researchers at Michigan State University suggest that Facebook usage can also lower barriers to participation and can help form weak ties, though it will not necessarily form the close friendships associated with bonding social capital (13).
Conclusion

Since the advent of the Industrial Revolution, social interactions and social networking have changed dramatically. Although research in the field of online networking is relatively new, there is a general consensus that online social networking may do less to form new relationships than to strengthen existing ones.

References
1. E. K. Clemons, S. Barnett, A. Appadurai, ACM International Conference Proceeding Series 258, 268-276 (2007).
2. B. J. Tolentino, A Look at YouTube's Success. Available at www.helium.com/items/1945096-youtubes-success-story (22 September 2010).
3. The Net Generation, 1974-83 - Brainiac (2010). Available at www.boston.com/bostonglobe/ideas/brainiac/2008/03/net_generation.html (12 June 2010).
4. S. Yadav, Facebook - The Complete Biography (2006). Available at mashable.com/2006/08/25/facebook-profile (12 June 2010).
5. C. Ross et al., Computers in Human Behavior 25, 578-586 (2009).
6. S. T. Tong, B. Van Der Heide, L. Langwell, J. B. Walther, Journal of Computer-Mediated Communication 13, 531-549 (2008).
7. The Dunbar Number as a Limit to Group Sizes (2004). Available at www.lifewithalacrity.com/2004/03/the_dunbar_numb.html (12 June 2010).
8. Press Room | Facebook (2010). Available at www.facebook.com/press (12 June 2010).
9. A. Y. Mikami, D. E. Szwedo, J. P. Allen, M. A. Evans, A. L. Hare, Developmental Psychology 46, 46-56 (2010).
10. S. E. Caplan, Communication Research 30, 625-648 (2003).
11. E. S. Orr et al., CyberPsychology & Behavior 12, 337-340 (2009).
12. J. Wolak, K. J. Mitchell, D. Finkelhor, Escaping or connecting? Characteristics of youth who form close online relationships. Journal of Adolescence 26, 105-119 (2003).
13. N. B. Ellison, C. Steinfield, C. Lampe, The Benefits of Facebook "Friends": Social Capital and College Students' Use of Online Social Network Sites (2008). Available at jcmc.indiana.edu/vol12/issue4/ellison.html (12 June 2010).
Biology
You Are What You Eat
How Food Affects Your Mood
Sarah-Marie Hopf ‘13
For thousands of years, people have believed that food could influence their health and well-being. Hippocrates, the father of modern medicine, once said: "Let your food be your medicine, and your medicine be your food" (1). In medieval times, people began to take great interest in how certain foods affected their mood and temperament. Many medical culinary textbooks of the time described the relationship between food and mood. For example, quince, dates, and elderberries were used as mood enhancers, lettuce and chicory as tranquilizers, and apples, pomegranates, beef, and eggs as erotic stimulants (1). The past 80 years have seen immense progress in research, primarily short-term human trials and animal studies, showing how certain foods change brain structure, chemistry, and physiology, thus affecting mood and performance. These studies suggest that foods directly influencing brain neurotransmitter systems have the greatest effects on mood, at least temporarily. In turn, mood can also influence our food choices, and expectations about the effects of certain foods can influence our perception of them.
Complex Mood-Food Relationships

The relationship between food and mood in individuals is complex and depends "on the time of day, the type and macronutrient composition of food, the amount of food consumed, and the age and dietary history of the subject" (2). In one study by Spring et al. (1983), 184 adults consumed either a protein-rich or a carbohydrate-rich meal. After two hours, their mood and performance were assessed (3). The effects of the meal differed for female and male subjects and for younger and older participants. For example, females reported greater sleepiness after a carbohydrate meal, whereas males reported greater
Image by Diana Lim ‘11, DUJS Staff
Chocolate is a powerful mood enhancer.
calmness. In addition, participants aged 40 years or older showed impairments on a test of sustained selective attention after a carbohydrate lunch. Furthermore, circadian rhythms influence our energy levels and performance throughout the day. "Early birds" feel most productive in the first part of the day, and their food choices become particularly important during lunch and throughout the afternoon. "Night owls" feel most energetic later in the day and should pay attention to their breakfast choices, which can increase or decrease energy levels and influence cognitive functioning. For example, according to Michaud et al. (1991), if you are an evening person and you skip breakfast, your cognitive performance might be impaired. A large breakfast rich in protein, however, could improve your recall performance but might impair your concentration (4). This illustrates the complexity of the relationships between food and mood and the need to find a healthy balance of food choices.
The Serotonin Theory
The effects of carbohydrates and protein
Serotonin is an important neurotransmitter that the brain produces from tryptophan contained in foods such as clams, oysters, escargot, octopus, squid, bananas, pineapple, plums, nuts, milk, turkey, spinach, and eggs (1). Functions of serotonin include the regulation of sleep, appetite, and impulse control. Increased serotonin levels are related to mood elevation. Wurtman and Wurtman (1989) developed a theory suggesting that a diet rich in carbohydrates can relieve depression and elevate mood in disorders such as carbohydrate-craving obesity, premenstrual syndrome, and seasonal affective disorder (SAD) (5). They theorized that the increased carbohydrate intake associated with these disorders represented self-medicating attempts, and that carbohydrates increased serotonin synthesis.
A protein-rich diet, in contrast, decreases brain serotonin levels. The synthesis of serotonin in the brain is limited by the availability of its precursor, tryptophan. The large amino acids, such as tryptophan, valine, tyrosine, and leucine, share the same transport carrier across the blood-brain barrier (1). The transport of tryptophan into the brain is proportional to the ratio of its concentration to that of the other large amino acids, since they compete for the available transporters (1). Eating foods high in protein increases the amount of many amino acids in the blood but not of tryptophan, which is found only in low doses in dietary protein. Therefore, many large amino acids compete with a small amount of tryptophan for transport into the brain, meaning that less tryptophan is available for serotonin synthesis. Consuming foods high in carbohydrates can also change amino acid levels in the blood. As blood glucose levels rise, insulin is released and enables muscle tissues to take up most amino acids except tryptophan, which is bound to albumin in the blood. As a result, the ratio of tryptophan relative to other amino acids in the blood increases, which enables tryptophan to bind to transporters, enter the brain in large amounts, and stimulate serotonin synthesis (5). The potential of increased carbohydrate intake to treat depression, premenstrual syndrome, and SAD remains small, however. Benton and Donohoe (1999) found that only a protein content of less than two percent
of a meal favored the rise in serotonin levels. Foods high in carbohydrates such as bread and potatoes derive 15 percent and 10 percent of their calories, respectively, from protein, thereby undermining the effects of carbohydrates on serotonin levels (5). In addition, "carbohydrate craving" is not an accurate term for the craving for foods such as chocolate, ice cream, and other sweets. Although people might think that these foods are high in carbohydrates because of their sweet taste, most of their calories come from fat, and they contain enough protein to undermine any effect of carbohydrates on serotonin levels (6). Rather, taste preferences for sweets seem to be present at birth. For example, the facial expressions of newborns indicate a positive response to sweet stimuli and a negative response to bitter stimuli (7). The innate preference for sweet-tasting foods might have adaptive value, since bitter tastes can indicate the presence of toxins while sweetness signals a source of energy in the form of carbohydrates.
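The transport competition described above reduces to a simple ratio. The Python sketch below uses entirely hypothetical plasma concentrations (the function and every number are invented for illustration, not measured values) to show how a protein meal lowers, and a carbohydrate meal raises, the ratio that governs tryptophan's entry into the brain.

# A minimal sketch of the tryptophan-ratio mechanism; all plasma
# concentrations are hypothetical values chosen for illustration.

def tryptophan_ratio(trp, competitors):
    """Ratio of tryptophan to the other large amino acids that
    share its transporter across the blood-brain barrier."""
    return trp / sum(competitors)

trp = 10.0                  # plasma tryptophan (arbitrary units)
baseline = [50, 40, 60]     # valine, tyrosine, leucine, etc.
print(tryptophan_ratio(trp, baseline))                        # ~0.067

# Protein meal: competing amino acids rise while tryptophan barely
# changes, so the ratio (and serotonin synthesis) falls.
print(tryptophan_ratio(trp, [c * 1.5 for c in baseline]))     # ~0.044

# Carbohydrate meal: insulin clears competing amino acids into muscle,
# while albumin-bound tryptophan stays in the blood, so the ratio rises.
print(tryptophan_ratio(trp, [c * 0.6 for c in baseline]))     # ~0.111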
The Effects of Chocolate

Chocolate has a strong effect on mood, generally increasing pleasant feelings and reducing tension. Nevertheless, some women, especially those trying to lose weight, experience guilt after eating chocolate (8).
Many people consume chocolate when they are in negative moods such as boredom, anger, depression, and stress, or when they are in a particularly happy mood. Furthermore, many women label themselves as "chocoholics," which led researchers to examine whether psychoactive substances in chocolate could potentially create a drug-like addiction (6). Chocolate contains a number of potentially psychoactive chemicals, such as anandamides, which stimulate the brain in the same way as cannabis; tyramine and phenylethylamine, which have effects similar to amphetamine; and theobromine and caffeine, which act as stimulants (6). Nevertheless, these substances are present in chocolate in very low concentrations. For example, 2 to 3 g of phenylethylamine are needed to induce an antidepressant effect, but a 50 g chocolate bar contains only a third of a milligram (6). In 1994, Michener and Rozin showed that the sensory experience of consuming chocolate, rather than its psychoactive substances, produces chocolate cravings. Participants were supplied with boxes that contained milk chocolate, white chocolate, cocoa powder capsules, or white chocolate with cocoa, and were instructed to eat the contents of one box when they experienced a craving for chocolate. If the chemicals in chocolate produced the craving, the intake of pure cocoa would satisfy it. Interestingly, only milk chocolate could alleviate the desire for chocolate. White chocolate was not as effective, and adding cocoa to white chocolate did not alter the results. Cocoa powder could not satisfy the craving at all. The unique taste and feel of chocolate in the mouth are responsible for the chocolate craving (8). Chocolate can therefore serve as a powerful mood enhancer.
Caffeine: A Psychoactive Drug
Image retrieved from http://vetinsider.files.wordpress.com/2009/09/899_9906.jpg%3Fw%3D510%26h%3D312 (Accessed 1 Nov 2010).
Fish oil pills are sold as Omega-3 fatty acid supplements.
Caffeine, mostly consumed in the form of coffee and tea, has stimulant effects, enhancing alertness, vigilance, and reaction time, but it also increases anxiety in susceptible individuals. It is the most commonly used psychoactive substance in the world, with an estimated global
consumption of 120,000 tons per year (7). Caffeine blocks adenosine receptors in the brain and can relieve headaches, drowsiness, and fatigue. Short-term caffeine deprivation in regular users can lead to withdrawal symptoms (7). Personality might contribute to caffeine use. For example, evening people who have difficulty getting up in the morning can improve their alertness and energy levels through caffeine. In contrast, caffeine can cause unpleasant effects in people who have high levels of anxiety.
Omega-3 Fatty Acids

Omega-3 fatty acids can influence mood, behavior, and personality. Low blood levels of polyunsaturated omega-3 fatty acids are associated with depression, pessimism, and impulsivity, according to a study by the University of Pittsburgh Medical Center (9). In addition, they may play a role in major depressive disorder, bipolar disorder, schizophrenia, substance abuse, and attention deficit disorder. In recent decades, people in developed countries have consumed greater amounts of omega-6 polyunsaturated fatty acids, contained in foods such as eggs, poultry, baked goods, wholegrain bread, nuts, and many oils, which outcompete omega-3 polyunsaturated fatty acids. Docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) in particular, both members of the omega-3 fatty acid family, contribute to the fluidity of the cell membrane, thereby playing an important role in brain development and functioning (10). Omega-3 fatty acids are found in fish, other seafood including algae and krill, some plants, meat, and nut oils. Many foods such as bread, yogurt, orange juice, milk, and eggs are often fortified with omega-3 fatty acids as well.
Micronutrients
Thiamine
According to one study by Benton and Donohoe (1999), insufficient amounts of thiamine, or Vitamin B1, caused "introversion, inactivity, fatigue, decreased self-confidence and generally poorer mood" in participants (5).
Improved thiamine status increased well-being, sociability, and overall energy levels. Thiamine is contained in foods such as cereal grains, pork, yeast, potatoes, cauliflower, oranges, and eggs, and can influence mood states. Thiamine deficiency is very rare in the United States, however.
Iron Status
Iron deficiency represents one of the most common nutritional problems in both developing and developed countries, affecting over two billion people worldwide. Iron deficiency anemia can result in depressed mood, lethargy, and problems with attention (5). Low iron status is most common among women, children, vegetarians, and dieters. Iron deficiency also results in a decreased ability to exercise. Foods rich in iron include liver, vegetables such as broccoli, asparagus, and parsley, seafood, iron-fortified grains, greens, nuts, meat, and dried fruits.
Folic Acid
Besides helping to prevent neural tube defects, folic acid also plays an important role in the brain. Folic acid deficiency, which is rare in the general population, is associated with depressed mood. Psychiatric patients are particularly at risk of developing folic acid deficiency, both because of disordered eating habits caused by a loss of appetite and because of anticonvulsant drugs, which inhibit folic acid absorption (6). Foods rich in folic acid include dark, leafy green vegetables, liver and other organ meats, poultry, oranges and grapefruits, nuts, sprouts, and whole wheat breads.
Food Effects on Emotions

Studies have found that diets low in carbohydrates increased feelings of anger, depression, and tension. Also, diets high in protein and low in carbohydrates increased anger (6). Diets high in carbohydrates have a generally uplifting effect on mood.
Mood Effects on Food Choice
As much as food can affect our mood, our mood can also affect our
food choices. In a study by Macht (1999), female and male participants were asked to report how their eating patterns changed with emotions of anger, fear, sadness, and joy. When experiencing anger or joy, participants reported greater hunger than when experiencing fear or sadness. Anger increased comfort and impulsive eating, and joy increased eating for pleasure (6). Another study found that people eat more unhealthy comfort foods when they are sad (11). Participants watched either a happy or a sad movie and were provided with buttered popcorn or seedless grapes throughout the movie. The group watching the upbeat movie consumed significantly more grapes and less popcorn than the group watching the sad movie. In addition, when participants were provided with nutritional information, the sad participants consumed less popcorn than the happy participants, and the happy participants did not alter their consumption (11).
Psychological Effects of Food Consumption
Cognitive factors are often more powerful than physiological factors (6). For example, if a group of dieting individuals is asked to eat foods high in calories, they might experience anxiety and other negative emotions because they are afraid of gaining weight. These effects have nothing to do with the ingredients of the foods themselves. In addition, learned appetites can also influence our experience of foods. For example, our favorite foods usually trigger positive emotions. Even the smell of food can evoke a strong emotional experience. Furthermore, the situation in which food is consumed and our past experience with particular foods also affect our emotional response (6,7). For example, a person who thinks that drinking a cup of coffee will increase alertness might feel more alert even after drinking decaffeinated coffee.
How to Maximize the Benefits of Food on Mood
The perfect diet to enhance mood and optimize performance and health
remains unknown. Although abundant research exists on food-mood relationships, the findings of these studies are often generalized and subjective. For example, the ability of carbohydrates to positively influence mood remains controversial. Therefore, it seems best to follow a well-balanced diet rich in protein, moderate in carbohydrates, and low in fat, since this could generally improve mood and energy levels. This should also ensure an adequate supply of micronutrients such as omega-3 fatty acids, iron, folic acid, and thiamine. Furthermore, to avoid the sense of guilt evoked by overindulging in craved foods such as chocolate, people should manage their intake by including these foods in small amounts with meals and avoiding them when hungry. In addition, reading the labels before consuming these comfort foods can also deter overconsumption.

References
1. Prasad, C., Food, mood and health: a neurobiological outlook. Brazilian Journal of Medical and Biological Research 31, 1517-1527 (1998).
2. Rogers, P.J. & Lloyd, H.M., Nutrition and mental performance. Proceedings of the Nutrition Society 53, 443-456 (1994).
3. Spring, B. et al., Effects of protein and carbohydrate meals on mood and performance: interactions with sex and age. Journal of Psychiatric Research 17, 155 (1983).
4. Michaud, C., Musse, N., Nicolas, J.P. & Mejean, L., Effects of breakfast size on short-term memory, concentration and blood glucose. Journal of Adolescent Health 12, 53-57 (1991).
5. Benton, D. & Donohoe, R.T., The effects of nutrients on mood. Public Health Nutrition 2, 403-409 (1999).
6. Ottley, C., Food and mood. Nursing Standard 15, 46-52 (2000).
7. Rogers, P., Food, mood and appetite. Nutrition Research Reviews 8, 243-269 (1995).
8. Macht, M. & Dettmer, D., Everyday mood and emotions after eating a chocolate bar or an apple. Appetite 46, 332-336 (2006).
9. University of Pittsburgh Medical Center (2006, March 4). Omega 3 Fatty Acids Influence Mood, Impulsivity And Personality, Study Indicates. ScienceDaily. Retrieved June 28, 2010, from http://www.sciencedaily.com/releases/2006/03/060303205050.htm
10. Pauwels, E.K. & Volterrani, D., Fatty acid facts, Part I. Essential fatty acids as treatment for depression, or food for mood? Drug News & Perspectives 21, 446 (2008).
11. Lang, Susan, Mood-food connection: We eat more and less-healthy comfort foods when we feel down, study finds. Cornell Chronicle (2007).
Neurology
The Physiology of Stress
Cortisol and the Hypothalamic-Pituitary-Adrenal Axis
Michael Randall ‘12
We all know the feeling: your hands tremble as you flip through a blank exam, and you stay awake at night worrying about approaching deadlines. Stress is an inevitable aspect of life through college and beyond. While everyone understands the symptoms of the stress response, few know the underlying physiological mechanisms. When we probe beneath the surface of our anxiety, an elegant balance of stimuli and responses emerges. This paper will present a broad discussion of stress: how stress is defined, the chemistry and physiology underlying it at the cellular level, and the micro- and macro-level consequences of the stress response.
Defining Stress

Understanding the biochemical interactions that constitute the stress response requires a definition of stress. In the realm of biology, stress refers to what happens when an organism fails to respond appropriately to threats (1). While the "threats" humans face today often take more benign forms than those our hunter-gatherer ancestors faced, they can be equally taxing on our bodies. Some stress, of course, can be beneficial. The pressure it exerts can be an incentive to accomplish necessary goals. Often, however, stress reaches chronic, harmful levels, and deleteri-
Image retrieved from https://www.psychologytoday.com/files/u631/untitled.JPG (Accessed 31 Oct 2010).
Schematic diagram of how stress affects the body.
ous consequences follow, from compromised immune function to weight gain to developmental impairment (2). The intensity of the stress response is governed largely by glucocorticoids, the primary class of hormones involved. Stress can be ephemeral and beneficial, or it can be long-lasting and harmful, leaving sufferers feeling suffocated, depressed, and paralyzed (3). Proper stress management takes on great importance given the wide range of bodily systems impacted by stress hormones.
Neurochemistry of Stress

The human stress response involves a complex signaling pathway among neurons and somatic cells. While our understanding of the chemical interactions underlying the stress response has increased vastly in recent years, much remains poorly understood. The roles of two peptide hormones, corticotropin-releasing hormone (CRH) and arginine vasopressin (AVP), have been widely studied. Stimulated by an environmental stressor, neurons in the hypothalamus secrete CRH and AVP. CRH, a short polypeptide, is transported to the anterior pituitary, where it stimulates the secretion of corticotropin (4). Consequently, corticotropin stimulates increased production of corticosteroids, including cortisol, the primary actor directly impacting the stress response (5). Vasopressin, a small hormone molecule, increases reabsorption of water by the kidneys and induces vasoconstriction, the contraction of blood vessels, thereby raising blood pressure (6). Together, CRH and vasopressin activate the hypothalamic-pituitary-adrenal (HPA) axis, the system of feedback interactions among the hypothalamus, pituitary gland, and adrenal glands (7). In sum, the hypothalamus releases CRH and vasopressin, which activate the HPA axis. CRH stimulates the anterior pituitary to release corticotropin,
which travels through the bloodstream to the adrenal cortex, where corticotropin then upregulates cortisol production. Vasopressin, the other hormone secreted by the hypothalamus, stimulates the cortical collecting ducts of the kidneys to increase reuptake of water, resulting in smaller volumes of urine. As the next section will illuminate, corticosteroids such as cortisol act across the entire body to propagate the stress response (8).
Cortisol: Stress Hormone

Cortisol is a glucocorticoid hormone synthesized from cholesterol by enzymes of the cytochrome P450 family in the zona fasciculata, the middle area of the adrenal cortex (9). Regulated via the HPA axis, cortisol is the primary hormone responsible for the stress response. Expressed at its highest levels in the early morning, cortisol mainly functions to restore homeostasis following exposure to stress (10). The effects of cortisol are felt over virtually the entire body and impact several homeostatic mechanisms. While cortisol's primary targets are metabolic, it also affects ion transport, the immune response, and even memory. Cortisol counters insulin by encouraging higher blood sugar and stimulating gluconeogenesis, the metabolic pathway that synthesizes glucose from oxaloacetate. The presence of cortisol triggers the expression of enzymes critical for gluconeogenesis, facilitating this increase in glucose production. Conversely, it also stimulates glycogen synthesis in the liver, which decreases net blood sugar levels (11). In these ways, cortisol carefully regulates the level of glucose circulating through the bloodstream. Cortisol's beneficial effects are clear from its role in metabolism: during states of fasting, when blood glucose has been depleted, cortisol ensures a steady supply of glucose via gluconeogenesis. Cortisol's role in ion regulation, particularly regarding sodium and potassium, has also been widely studied. Cortisol prevents cells from losing sodium and accelerates the rate of potassium excretion. This helps regulate bodily pH, bringing it back into equilibrium after a destabilizing event. Cortisol's ability to regulate the action of
Image retrieved from http://www.yourmenopausetype.com/steroidpathway/cortisol_files/image003.gif (Accessed 2 Nov 2010)
Structure of cortisol.
cellular sodium-potassium pumps has even led to speculation that it originally evolved as a sodium transporter (12). Cortisol's weakening effects on the immune response have also been well documented. T-lymphocyte cells are an essential component of cell-mediated immunity. T-cells respond to cytokine molecules called interleukins via a signaling pathway. Cortisol blocks T-cells from proliferating by preventing some T-cells from recognizing interleukin signals. It also stifles inflammation by inhibiting histamine secretion (13). Cortisol's ability to prevent the propagation of the immune response can render individuals suffering from chronic stress highly vulnerable to infection. A role for cortisol in memory has also been demonstrated. The hippocampus, the region of the brain where memories are processed and stored, contains many cortisol receptors. While normal cortisol levels have no adverse effects on the hippocampus, excess cortisol overwhelms the hippocampus and actually causes atrophy. Studies of the elderly have indicated that those with elevated cortisol levels display significant memory loss resulting from hippocampus damage, but the exact age range at risk is unclear. There is a reprieve, however, for the chronically stressed: the damage incurred is usually reversible (14). Finally, cortisol participates in an inhibitory feedback loop by blocking the secretion of corticotropin-releasing hormone, preventing the HPA axis
interactions central to glucocorticoid secretion. Many in the scientific community speculate that chronic levels of high stress disrupt the delicate feedback balance, resulting in the failure of feedback inhibition to operate and the continued release of cortisol (15).
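The loop just described behaves like a simple negative-feedback control system, and a toy simulation can make the speculation above concrete. In the Python sketch below, every rate constant is invented for illustration; it is a caricature of feedback inhibition, not a physiological model. Weakening the feedback term mimics the disrupted inhibition described above and leaves simulated cortisol chronically elevated.

# Toy model of HPA feedback inhibition; all rate constants are
# invented for illustration and carry no physiological meaning.

def steady_cortisol(stress, feedback_strength, steps=200):
    crh, cortisol = 0.0, 0.0
    for _ in range(steps):
        # CRH output rises with stress and is suppressed by
        # circulating cortisol (the inhibitory feedback loop).
        crh = max(0.0, stress - feedback_strength * cortisol)
        # Cortisol tracks CRH (via corticotropin) and decays each step.
        cortisol = 0.9 * cortisol + 0.3 * crh
    return cortisol

print(steady_cortisol(stress=1.0, feedback_strength=1.0))  # ~0.75: settles low
print(steady_cortisol(stress=1.0, feedback_strength=0.1))  # ~2.31: stays elevated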
Sleep Deprivation, Caffeine, and Alcohol Increase Cortisol

Stressed Dartmouth students often sacrifice sleep while increasing consumption of caffeine and alcohol, all of which impact cortisol levels and thus the physiological markers of the stress response. While no connection has yet been established linking sleep deprivation to long-term HPA axis activity, acute sleep loss perturbs the HPA axis and disrupts negative glucocorticoid feedback regulation (16). Leproult et al. found that plasma cortisol levels were elevated by up to 45 percent after sleep deprivation, an increase that has implications including immune compromise, cognitive impairment, and metabolic disruption (17). These consequences should give pause to anyone contemplating an all-nighter the day before an exam. The relationships among caffeine, stress, and cortisol secretion will also be of interest to Dartmouth's caffeinated masses. Repeated doses of caffeine
over a single day result in markedly increased cortisol levels, regardless of the stressor involved or the sex of the individual. Although the extent of the link has not been fully elucidated, a positive relationship clearly exists between caffeine intake and cortisol release, and this relationship is exacerbated when other stressors are introduced. Thus, supplementing a lack of sleep with multiple cups of coffee or energy drinks actually reinforces the negative effects of the stress response and further undermines performance. The benefits of caffeine intake must be balanced against its implications for cortisol secretion (18). Often, students decide to celebrate after a stressful episode by consuming alcohol, often in large quantities over a short time frame. Ironically, this method of releasing stress actually stimulates the HPA axis and encourages the manufacture and release of cortisol. In fact, the elevation in glucocorticoid levels as a result of alcohol consumption can be greater than the elevation from stressful stimuli. Alcohol probably activates the HPA axis by disinhibiting it: alcohol depresses the nerve cells responsible for HPA inhibition, thereby elevating HPA axis activity (19). As a result, the adrenal cortex secretes higher levels of cortisol. It is hardly surprising, then, that Dartmouth students and college students generally complain of the consequences of considerable anxiety and pressure: our common responses to stress (lack of sleep, caffeine intake, and alcohol consumption) act in conjunction to raise the amount of cortisol in our bodies, augmenting the very stress we seek to combat.
Stress and Health: Short-Term and Long-Term Effects

Many of us know from experience that stress compromises the immune response, an empirical observation buttressed by our understanding of cortisol's physiological effects. Indeed, the effects of acute and chronic stress on human health are myriad and severe. During periods of increased stress, "the immune cells are being bathed in molecules which are essen-
tially telling them to stop fighting," according to Dr. Esther Sternberg (20). These molecules, namely cortisol, suppress the immune system and inflammatory pathways, rendering the body more susceptible to disease. High levels of stress, even over relatively short periods and in vastly different contexts, tend to produce similar results: prolonged healing times, reduced responses to vaccination, and heightened vulnerability to viral infection (21). The long-term, constant cortisol exposure associated with chronic stress produces further symptoms, including impaired cognition, decreased thyroid function, and accumulation of abdominal fat, which itself has implications for cardiovascular health. The bottom line is that both episodes of acute stress and more prolonged stressful circumstances precipitate lower levels of general health, and exposure to such stress should be minimized. In the most extreme cases, Cushing's Syndrome, characterized by dangerously high cortisol levels, can result. Those afflicted with Cushing's experience rapid weight gain, hyperhidrosis, and hypercalcemia, along with various psychological and endocrine problems (22).
Conclusion

Stress is unavoidable. Our bodies are designed to react to our environment in an effort to preserve homeostasis. Arming ourselves with an understanding of the mechanisms, agonists, and antagonists of the stress response, however, positions us to minimize stress and its impact on our minds and bodies. It is both a blessing and a curse that the HPA axis evolved to be so sensitive to factors like circadian rhythm, caffeine, and alcohol. We are experts at maintaining homeostasis but often novices at managing stressful circumstances. The good news is that stress levels rest largely on our own behavior and decisions and that we can optimize our bodies' responses to stress based on how we live our daily lives.
References
1. H. Selye, The Stress of Life (McGraw-Hill, New York, NY, 1956).
2. A. Park, Fat-Bellied Monkeys Suggest Why Stress Sucks (August 2009). Available at http://www.time.com/time/health/article/0,8599,1915237,00.html (5 May 2010).
3. R. Sapolsky, Taming Stress (August 2003). Available at http://www.scientificamerican.com/article.cfm?id=taming-stress (7 May 2010).
4. J. Santos et al., Am. J. Physiol. Gastrointest. Liver Physiol. 277, G391-G399 (1999).
5. Adrenocorticotropic Hormone (ACTH, Corticotropin). Available at http://www.vivo.colostate.edu/hbooks/pathphys/endocrine/hypopit/acth.html (7 May 2010).
6. H. K. Caldwell and W. S. Young, Oxytocin and Vasopressin: Genetics and Behavioral Implications (Springer, Berlin, 2006).
7. Engelmann et al., Frontiers in Neuroendocrinology 25, 132-149 (2004).
8. C. Tsigos and G. P. Chrousos, Journal of Psychosomatic Research 53, 865-871 (2002).
9. Adrenal Steroid Synthesis. Available at http://themedicalbiochemistrypage.org/images/adrenalsteroidsynthesis.jpg (8 May 2010).
10. C. de Weerth et al., Early Hum. Dev. 73, 39-52 (2003).
11. J. Baynes and M. Dominiczak, Medical Biochemistry (Elsevier Limited, 2009).
12. R. P. Knight et al., J. Clin. Endocrinol. Metab. 15, 176-181 (1955).
13. M. Onsrud and E. Thorsby, Scand. J. Immunol. 13, 573-579 (1981).
14. S. Comeau, Stress, Memory, and Social Support (26 September 2002). Available at http://www.mcgill.ca/reporter/35/02/lupien (21 September 2010).
15. R. Yehuda et al., Psychoneuroendocrinology 31, 447-451 (2006).
16. R. Leproult et al., Sleep 20, 865-870 (1997).
17. Ibid.
18. W. Lovallo et al., Pharmacol. Biochem. Behav. 83, 441-447 (2006).
19. R. Spencer and K. Hutchison, Alcohol Research and Health 23, 272-283 (1999).
20. H. Wein, Stress and Disease: New Perspectives. Available at http://www.nih.gov/news/WordonHealth/oct2000/story01.htm (13 May 2010).
21. Ibid.
22. H. Raff and J. Findling, Annals of Internal Medicine (2004).
Computer Science
“I’ll Blitz You Later”
The Technology of Campus Communication
Thomas Hauch ‘13
To most people, the phrase "I'll blitz you later" would surely sound like a strange farewell. But for anyone who has spent time here at Dartmouth, it's just another way of saying goodbye. For over twenty years, the electronic mail system known as BlitzMail has been a ubiquitous part of Dartmouth life. Stand inside the entrance to Thayer Dining Hall, Novack Café, or the Collis Center, and you are bound to see the same scene just about any time of the day. Students enter, glance around, and, if they are lucky, find an empty "Blitz terminal" waiting to be used. For those unfamiliar with Dartmouth, it might seem strange that students would care much about a basic computer equipped with a grimy keyboard, a slimy mouse, and a cramped screen. After all, Dartmouth has required every student to own a computer since 1991. But these terminals, despite their limitations, offer quick and easy access to BlitzMail. And for students, as well as faculty and staff, BlitzMail remains the essential gateway to the social and academic life of Dartmouth. Today, the BlitzMail servers at Dartmouth host more than 15,000 accounts for faculty, students, and staff (1). On a typical weekday, the system delivers nearly one million messages, and that number continues to grow.
The Beginnings

Dartmouth began developing its own e-mail system in 1987 (2). The computing infrastructure at the time was largely Macintosh-based, and the College believed that commercially available e-mail applications for Macintosh were either too difficult to operate or could not support the number of potential users at Dartmouth. After the College decided to create a new system, the final product was written in just two months. The name "BlitzMail," referring to its hasty development, be-
gan as a joke among its developers, but the name ultimately stuck with users. By 1988 the entire student body was connected from their dormitories to a campus-wide network. Within only a few years of its release, BlitzMail had become the dominant means of communication at Dartmouth (2). Five years after its introduction, the developers of BlitzMail tried to release the application as a commercial product. The College, however, lacked the resources needed to install and integrate the system for potential customers. As a result, BlitzMail has remained a unique and defining feature of Dartmouth life.
BlitzMail Infrastructure The BlitzMail system at Dartmouth consists of multiple interacting servers (3). Every member of the Dartmouth community is initially assigned to one of these servers, which holds their personal electronic mailbox. The name of a user’s BlitzMail server is recorded in the Dartmouth Name Directory (DND), which is a database that stores personal information on a set of independent servers, separate from those dedicated to BlitzMail (4). A DND entry acts like a fingerprint, allowing the Dartmouth network to identify individual users. A DND name and password are necessary to access BlitzMail, as well as other web-based tools including Blackboard and BannerStudent. BlitzMail and the DND work in tandem. When a student walks up to a “Blitz terminal” and opens BlitzMail, the application will prompt the user for his DND name and password. When the user enters this personal information, the BlitzMail client encrypts a random number along with the usersupplied password and sends the product to the DND server (3). The server reverses the encryption and then compares the result with the information stored in the database. The BlitzMail client then consults the DND to determine which BlitzMail server it should
Image by Colby Chiang ‘10, DUJS Staff Alumni
BlitzMail has become a part of Dartmouth culture.
connect to, allowing the user to access his personal mailbox from any location.
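The exchange described above can be sketched schematically in Python. The published description does not specify the cipher, so the snippet below stands in a generic XOR keystream for it, and the password, challenge size, and messages are all invented; the point is only the shape of the protocol: the password never travels across the network, and the server verifies by reversing the encryption.

# Schematic sketch of a DND-style challenge/response login. The real
# protocol's cipher is not reproduced here; a XOR keystream derived
# from the password stands in for it, purely for illustration.
import hashlib, os

def keystream(password: str, length: int) -> bytes:
    return hashlib.sha256(password.encode()).digest()[:length]

def encrypt(challenge: bytes, password: str) -> bytes:
    ks = keystream(password, len(challenge))
    return bytes(a ^ b for a, b in zip(challenge, ks))

decrypt = encrypt  # XOR is its own inverse

# Server side: the DND stores the password and issues a random challenge.
stored_password = "hypothetical-password"
challenge = os.urandom(16)

# Client side: encrypt the challenge with the password the user typed.
response = encrypt(challenge, "hypothetical-password")

# Server side: reverse the encryption and compare with the challenge.
if decrypt(response, stored_password) == challenge:
    print("authenticated; the DND record now supplies the user's BlitzMail server")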
Technical Aspects

The user data stored on the BlitzMail server consists of messages, folders, mailing lists, and preferences, all of which are referred to collectively as the user's mailbox (3). When composing a new message, users are able to specify a header, a body of text, summary info, and optional enclosures. The BlitzMail server supports two message formats, a proprietary BlitzMail format as well as the universal Multipurpose Internet Mail Extensions (MIME) format (3). Although the BlitzMail client continues to support the original format, current versions of the application compose new messages in MIME by default. This allows BlitzMail users to easily send and receive messages to and from other mail systems via the Simple Mail Transfer Protocol (SMTP), an Internet standard for data transfer that supports MIME (5). When sending a new message, users are also able to append one or more enclosures, which are simply Macintosh files stored in MacHost format. In specifying a recipient, BlitzMail once again simplifies the process
IMAGE RETRIEVED FROM http://www.morguefile.com/data/imageData/public/files/a/alexfoley/preview/fldr_2004_01_16/file0001867014505.jpg (Accessed 2 Nov 2010)
The BlitzMail system at Dartmouth consists of multiple interacting servers.
by cooperating with the DND, which allows users to send messages using any form of a recipient's name (6). Parts of the name can be left out, and users can add any number of "nicknames" to their DND record. Of course, this can also serve as a source of error for the BlitzMail client (3). When different users have chosen the same nickname, or when only part of a recipient's name is used, BlitzMail must try to resolve these discrepancies by searching for a match within personal mailing lists, public mailing lists, and the DND. In some cases, a message may not reach the intended recipient. When a match is found, however, the BlitzMail servers are able to use data from the DND to automatically route messages to the proper server and the appropriate recipient (3). Since all of this information is stored on servers, and not on the computers themselves, students are able to share the public "Blitz terminals" that are scattered throughout campus.
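A resolver in that spirit might look like the following sketch. The lookup order is taken from the description above, but every directory entry, list name, and nickname is made up, and the real matching rules are certainly more involved; the point is only that a name is checked against mailing lists and the DND in turn, and that an ambiguous fragment fails rather than guessing.

# Illustrative sketch of recipient resolution; every entry below is
# invented, and the real BlitzMail lookup rules are more involved.
personal_lists = {"study group": ["Alice B. Carroll", "Dan E. Frost"]}
public_lists = {"dujs staff": ["Grace H. Irving"]}
dnd = {  # full name -> registered nicknames
    "John Q. Smith": ["john", "jqs"],
    "Jane R. Smith": ["jane"],
}

def resolve(name):
    name = name.lower()
    for lists in (personal_lists, public_lists):
        if name in lists:
            return lists[name]            # expand the mailing list
    # Fall back to the DND: match a full name, name fragment, or nickname.
    matches = [full for full, nicks in dnd.items()
               if name == full.lower() or name in full.lower().split() or name in nicks]
    if len(matches) != 1:
        raise LookupError(f"ambiguous or unknown recipient: {matches}")
    return matches                        # the DND record names the home server

print(resolve("study group"))             # ['Alice B. Carroll', 'Dan E. Frost']
print(resolve("jqs"))                     # ['John Q. Smith']
try:
    resolve("smith")                      # both Smiths match the fragment
except LookupError as err:
    print(err)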
Other BlitzMail Features

BlitzMail supports a number of additional applications and features. Bulletins, for example, provide a means of delivering information across the entire Dartmouth community (7). Authorized posters can submit updates to a
bulletin and users can, in turn, choose to monitor a Bulletin topic. BlitzMail works with another utility, Notify, to automatically alert users of any new messages they receive or any updates to a particular Bulletin topic (8). In 2002, BlitzMail was updated to allow functionality with a third-party utility called SpamAssassin, which operates by assigning a "spam score" to each incoming message based on the use of particular words and phrases in the subject line and text (9). Users can also access their mailbox on the BlitzMail servers using another e-mail client. Any client that uses the Internet Message Access Protocol (IMAP) and that supports Secure Sockets Layer (SSL) connections can retrieve information from the Dartmouth servers (10). Webmail services that use the Post Office Protocol (POP3) rather than IMAP, including Gmail and Yahoo, can also be configured in order to access the BlitzMail servers.
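Score-based filtering of this kind reduces to summing weights for matched rules and comparing the total against a threshold. The sketch below is a bare-bones stand-in with invented phrases, weights, and threshold; real SpamAssassin applies hundreds of weighted tests rather than a handful of keyword matches.

# Bare-bones sketch of score-based spam filtering; the rules, weights,
# and threshold are invented and far simpler than SpamAssassin's.
RULES = {
    "free money": 2.5,
    "act now": 1.8,
    "click here": 1.2,
}
THRESHOLD = 5.0

def spam_score(subject: str, body: str) -> float:
    text = f"{subject} {body}".lower()
    return sum(w for phrase, w in RULES.items() if phrase in text)

score = spam_score("FREE MONEY!!!", "Act now and click here for your prize.")
print(score, "-> spam" if score >= THRESHOLD else "-> not spam")   # 5.5 -> spam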
Limitations of BlitzMail

Computing Services introduced a BlitzMail client for Mac OS X in 2007, but there have not been any significant updates to the existing application in the past decade (11). For example, although HTML has become standard among commercial e-mail applications, BlitzMail continues to operate exclusively in plaintext. In addition, the Windows BlitzMail client delivers information across the Dartmouth network without encryption, making it a poor choice for sharing personal information (12). Admittedly, there are several solutions available. Windows users have the option to sign on to BlitzMail using Kerberos, an authentication protocol developed at MIT (13). Many users, however, are not aware of this option and continue to use BlitzMail without activating any kind of encryption. Using the Virtual Private Network (VPN), a standard feature among computers purchased from Dartmouth, also offers an additional, albeit limited, level of security. Even though the VPN application can deliver messages securely to the BlitzMail servers, it cannot guarantee security on the other end unless the recipient has also activated VPN (14).
The Future

Several years ago, it became clear that BlitzMail could not keep pace with the technical advancements of many commercially available clients. Beginning in 2008 and throughout 2009, Dartmouth's Taskforce on E-Mail and Collaboration Tools (TEC-T) discussed replacements for the current system, as well as the introduction of new web-based tools (15). Budget constraints, however, limited the Taskforce's search to only a few applications, including those from Google and Microsoft, which are available at little to no cost to institutions of higher education. With these constraints in mind, the TEC-T recommended the adoption of Google Apps. In recent years, however, several institutions affiliated with Dartmouth, including the Tuck School of Business and Dartmouth-Hitchcock Medical Center, have begun transitioning to the Outlook/Exchange platform offered by Microsoft. With this in mind, the College administration requested that Computing Services revisit the issue. Computing Services has since initiated its E-mail/Calendar Migration Project, which plans to implement a campus-wide transition from BlitzMail, as well as another outdated application called Oracle Calendar, to Microsoft's Business Productivity
line Services (16). The transition project plans to gradually move all existing BlitzMail and Oracle Calendar data to Microsoft Online Services by the beginning of 2012, marking the end of one of the longest continuously operating email systems in the United States (17). References 1. Where did the term ‘BlitzMail’ come from (25 May 2008). Available at http://ask.dartmouth. edu/categories/stulife/30.html (17 May 2010). 2. K. Fogarty, Dartmouth to turn loose homemade mail via the Internet, Network World. 20 June 1994, 42. 3. D. Gelhar, “The BlitzMail Protocol” (Dartmouth College, Hanover, NH, 2008). 4. Dartmouth (E-mail) Passwords (10 February 2010). Available at http://www.dartmouth. edu/comp/systems/accounts/passwords/ dartmouthpasswords.html (19 May 2010). 5. M. Peralta, Trusted S/MIME Gateways, Dartmouth Computer Science Technical Report, TR2003-461, 5-6 (May 2003). 6. D. Gelhar, “The DND Protocol” (Dartmouth College, Hanover, NH, 1998). 7. Other Features of BlitzMail (27 January 2010). Available at http://staging.dartmouth.edu/ comp/emailcal/email/BlitzMail/blitz-other.html
(19 May 2010).
8. Additional Hosting Resources (20 January 2010). Available at http://www.dartmouth.edu/comp/web-media/web/web-accounts/hostingresources.html (18 May 2010).
9. Spam Assassin (18 February 2010). Available at http://www.dartmouth.edu/comp/software/downloads/windows/blitz/spamassassin.html (18 May 2010).
10. Using IMAP E-mail Clients (18 August 2010). Available at http://www.dartmouth.edu/comp/email-cal/email/imap.html (30 August 2010).
11. K. Farley, College to release new Blitz for Macs, The Dartmouth (28 September 2007). Available at http://www.dartmouth.edu/comp/softcomp/software/downloads/windows/blitz/spamassassin.html (15 May 2010).
12. R. Speers, E. Tice, Cyber Attacks on the Dartmouth College Network, Dartmouth Undergraduate Journal of Science (Fall 2009). Available at http://dujs.dartmouth.edu/fall-2009/cyber-attacks-on-the-dartmouth-college-network.
13. Using Kerberos Authentication with BlitzMail (2003). Available at http://www.dartmouth.org/services/email/access/blitzmail/kerberos.html (28 May 2010).
14. A. Cohen, Newly improved Blitz waiting in limbo, The Dartmouth (11 January 2007). Available at http://thedartmouth.com/2007/01/11/news/newly (28 May 2010).
15. E-Mail and Collaboration Tools
Announcement (8 April 2010). Available at http://www.dartmouth.edu/comp/about/committees/etp/announcement.html (17 May 2010).
16. E. Waite-Franzen, “E-Mail Calendar Migration Project: Project Charter” (Dartmouth College, Hanover, NH, 2009).
17. Communication and Collaboration Tools Announcement (5 August 2010). Available at http://www.dartmouth.edu/comp/email-cal/initiatives/mos/announcement.html (30 August 2010).
Ecology
Deep Below the Snowy Surface The Metamorphisms Within
June Yuan Shangguan ‘13
It is a tranquil winter morning. Snow falls silently onto the Green, onto Occom Pond, and into the hectic life of Dartmouth students. The delicate snow crystals will soon undergo a series of metamorphisms, which are changes in the crystalline form of the snow due to fluctuating exterior chemical and physical conditions (1). Within a few days, a snow clearer comes to shovel the accumulation away, and students proceed with their lives oblivious to the metamorphosing snow crystals. Snow constantly moves and metamorphoses to reduce its surface free energy (the increase in the free energy of the snow when its surface area increases by a unit area) (2). During the initial changes in the snow structure, dry snow obtains a greater density as its grains pack tighter through rearrangement of their crystals. However, when rearrangement no longer serves densification, snow grains begin to form stronger bonds with one another. Eventually, snow in deeper layers is packed so closely, with morphed bonds, that the spaces between individual grains disappear and are replaced by separate bubbles. When this step is complete, the snow mass has turned from permeable snow into impermeable ice. Dr. Henri Bader, a scientist from the Cold Regions Science and Engineering Laboratory, identified four types of snow metamorphism that occur during the conversion to ice (3). Soon after the snow is deposited on a surface, melting, sublimation, and surface diffusion cause the snow to lose its original crystalline structure. The resultant aggregation, known as Destructive Metamorphism, consists of crystals with relatively rounder shapes and smoother angles. In contrast, dry snow on glacier surfaces undergoes Constructive Metamorphism. Rising vapor carried by convective airflow from evaporated or sublimated snow crystals condenses on crystals of a lower temperature.
Coarse-grained snow (also known as “depth hoar” or “Schwimmschnee”) is then formed. Coarse-grained snow comprises large grains loosely connected to one another. Constructive Metamorphism increases the mass of the snow grains and thus eliminates smaller grains. Melt Metamorphism happens in relatively warm snow, in which a rise in temperature melts the snow crystals. As in Destructive Metamorphism, the snow grains in Melt Metamorphism become rounded and are covered by a thin film of water. Water molecules accumulate at the contacts between snow grains due to surface tension. The strong adhesion between the grains acts as a bond and may create composite grains when the snow re-encounters low temperatures. The resultant grain varies with the surface, slope, and the presence of impermeable layers (1). The last type of metamorphism is Pressure Metamorphism, which happens due to compression or compaction. It begins when the snow grains rearrange their crystals to close up the empty inter-grain spaces. However, only a maximum density of 0.50–0.55 g/cm³ can be reached in this way. When pressure on the grains continues to increase,
actual deformation of the grains and their bonds takes place. As in Melt Metamorphism, the stronger inter-grain bonds eventually seal off the connected inter-grain airways and produce air bubbles in the ice. At this stage, the snow mass reaches a density of 0.80–0.83 g/cm³ at depths of 40 to 150 meters (1). Through these four metamorphic processes, snow slowly morphs from a fluffy accumulation of grains into a tightly bonded aggregation and, eventually, under high pressure and extremely cold temperatures, into ice. How the snow grain structure turns out depends on the local temperature gradient, wind speed, ventilation, and slope. In these ways, deep below the snow surface, grains of snow are constantly but stealthily changing, moving, and morphing.
References
1. M. Mellor, F.J. Sanger, Ed., Snow and Ice on the Earth’s Surface (Hanover, NH, US Army Materiel Command Cold Regions Research & Engineering Laboratory, 1964), pp. 57-58.
2. S.C. Colbeck, Snow Metamorphism and Classification (Hanover, NH, 1986).
3. F.J. Sanger, Ed., Snow as a Material (Hanover, NH, U.S. Army Cold Regions Research and Engineering Laboratory, 1962), Part II, Sect. B.
Image by Brad Nelson ‘13, DUJS Staff
Snow crystals on the ground near Beaver, Oregon.
Biology
Alcohol
From Hydroxyl to Culture Jay Dalton ‘12
Ethanol, Ethyl Alcohol, Spirits—all are names for the simple molecule CH3CH2OH. It would be difficult to conceive of a string of letters that has had a greater impact on our global culture. From semi-clandestine high school parties, to ubiquitous college drinking, and even to such lofty heights as matrimony, the Eucharist, and other religious ceremonies, nearly every rite of passage is imprinted by alcohol’s distinctive hydroxyl signature. It is through these social means that we tend to romanticize and abstract alcohol. However, the biochemical reality is that the human body produces three grams of ethanol a day through normal fermentation processes alone, and that this catabolic process is so essential to life that certain amino acid sequences in ethanol oxidation enzymes are conserved from humans to bacteria. No matter how ornate the chalice, nor how benign the plastic cup, the chemical breakdown of alcohol in the body, and how it affects the brain, is immutably scientific and thus largely predictable. It provides a common framework for humanity, and an equalizing factor, which extends from the social to the biological realm. However, many of the most crucial truths about alcohol are shrouded in mystery due to social taboo. Questions regarding what alcohol does to our bodies in both the short and long term, and perhaps the darker sides of why we drink, go largely unanswered. Hopefully this article will provide some answers to a college that, for better or worse, has an undoubtedly above-average love affair with the organic molecule alcohol.
What Happens When We Drink?
Image retrieved from http://www.nida.nih.gov/pubs/teaching/teaching3/largegifs/slide-4.gif (Accessed 3 Nov 2010).
The reward pathway.
The complicated process incurred by ethanol consumption transforms the initial alcohol molecule first into acetaldehyde, then acetic acid, and finally acetyl-CoA, at which point it enters the citric acid cycle (1). The final step is catalyzed by a complicated enzymatic synthetase reaction (1). The second step, however, is where most of the potentially adverse effects of alcohol arise: its substrate, acetaldehyde, is an unstable compound capable of forming toxic free-radical structures if the body is not supplied with sufficient antioxidants such as Vitamin C and Vitamin B1 (1). These radicals can cause a wide variety of ill effects, including birth defects in expecting mothers, severe liver and kidney damage in chronic alcohol abusers, and even the common hangover (1). The first topic to consider is how we rate our own inebriation, which is generally measured as Blood Alcohol Content, or BAC. Although somewhat helpful in determining a subjective state of drunkenness, evidence suggests that this is a misleading statistic in terms of how much alcohol is actually entering and subsequently affecting our bodies. This misconception is related to the first-pass effect, a drug phenomenon in which the concentration of a given drug is dramatically reduced via drainage into the liver
before it reaches systemic circulation. This effect has been shown in rats to cause a large discrepancy between the amount of ethanol ingested and both BAC and the amount of ethanol expelled from the blood (2). This implies that one’s level of intoxication is potentially dangerously unrelated to overall alcohol consumption and the subsequent damage from its metabolism. Another area of consideration in terms of alcohol’s physical properties is the neurological source and scope of its psychoactive effects. Research indicates that alcohol stimulates the release of dopamine and serotonin in the nucleus accumbens, a collection of neurons within a subcortical region of the forebrain (3). Studies indicate that 1 g/kg of alcohol is enough to significantly increase extracellular levels of these two neurotransmitters in rats (3). This finding may explain many of alcohol’s subjective effects. For instance, low levels of dopamine have been tied to social anxiety, which may explain the generally sociable effect alcohol inspires in people. However, psychological research indicates that this neurological change can result in what is known as “alcohol myopia” (4).
Image retrieved from http://en.wikipedia.org/wiki/File:GABA-3D-balls.png (Accessed 4 Nov 2010)
Ball-and-stick model of the gamma-aminobutyric acid (GABA) molecule.
This mental state is typified by a polarization of social and emotional responses, an enhancement of self-evaluation, and a temporary relief of anxiety (4). This perhaps helps to answer the age-old question of the subjective effect of alcohol on the psyche. It is not that alcohol allows an individual to intrinsically change, but rather that it lowers natural inhibitions. Thus, it is up to the individual to decide what role these inhibitions play in defining the self. This alcohol myopia is not merely mental but also physical, and it expresses itself in a gradient of physical impairment as alcohol consumption increases. Research indicates a significant linear correlation between subjects’ BAC and performance on a variety of cognitive psychomotor tests (5). Results suggested that for each 0.01% increase in blood alcohol, performance decreased by 1.16% (5). What this implies is that at a BAC of 0.10%, which is roughly the legal limit, performance drops by almost 12% (5). The other consideration of this research project was the effect of sleep deprivation on performance. For instance, after 17 hours awake, cognitive psychomotor performance was equivalent to that observed in individuals with a BAC around 0.05% (5).
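The reported linear relationship lends itself to a quick back-of-the-envelope sketch in Python. The 1.16% figure comes straight from the study cited above (5); the function itself is only an illustrative extrapolation, not part of the original analysis.

```python
# Linear performance model from the reported result: each 0.01% BAC
# costs about 1.16% of baseline performance (5). Purely illustrative.
DECREMENT_PER_001_BAC = 1.16  # percent of performance lost per 0.01% BAC

def performance_drop(bac_percent: float) -> float:
    """Estimated % performance loss at a given BAC (linear extrapolation)."""
    return (bac_percent / 0.01) * DECREMENT_PER_001_BAC

for bac in (0.02, 0.05, 0.08, 0.10):
    print(f"BAC {bac:.2f}%: ~{performance_drop(bac):.1f}% performance loss")
# At 0.10% BAC the model gives ~11.6%, matching the "almost 12%" above.
```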
What this implies for the typical Dartmouth student drinking on a school day is that the roughly equal effects of sleep deprivation will compound any impairment from alcohol. The final consideration in terms of alcohol’s direct effects on students is the potential for injury. Not only are the physiological effects described above likely to increase one’s chances of injury, but alcohol also has a potentially negative effect on cellular immunity, which is critical in defending against infection after an acute injury. Research indicates that changes to cellular immunity are correlated with alterations in the cytokine milieu, which are found more prevalently in injury cases related to alcohol (6).
Why Do We Drink? As previously mentioned, dopamine and serotonin are released at higher levels upon consumption of alcohol. This basic evidence has been used to try to explain alcohol-seeking behaviors in a neurobiological context. It was found that rats genetically selected to crave alcohol were also deficient in both serotonin and dopamine in regions of the nucleus accumbens (7). Similarly, neurologically active agents that increase levels of serotonin or dopamine were able to assuage alcohol-seeking tendencies (7).
What these results imply is that the lack of serotonin and dopamine (or neuronal receptors to receive them) in certain individuals may account for increased alcohol-seeking behavior. These two neurotransmitters are sought not only for the subjective effects that they directly incite (anxiety relief, euphoria, pain relief, etc.), but also because they are both involved in the brain’s reward pathway. Research has shown that dopaminergic neurons fire more when reward is perceived as imminent (1,7). In addition, when reward is greater than expected these neurons strengthen their firing in the future, which further solidifies this cycle of pleasure-seeking behavior (1,7). Another trait of alcohol-seeking rats was a greater basal level of anxiety (8). This is predictable, as lower levels of dopamine have already been mentioned as leading to higher anxiety. Alcohol’s depressant effect on the CNS, which is mediated through its agonistic effect on GABA, is likely the explanation for more anxious rats seeking alcohol. GABA, the main mammalian inhibitory neurotransmitter, is increased in efficacy in the presence of alcohol. This leads to short-term relaxation, which more anxious rats, and perhaps people as well, would be incentivized to pursue.
The overarching theme of this line of research is that alcohol-seeking behavior likely has a large neurochemical component. This implies that each individual’s response to alcohol is much more custom-tailored than originally thought. This goes beyond questions of tolerance in terms of weight or gender and stretches into the very fabric of our individual brains, each of which potentially sits somewhere on a spectrum of alcohol-seeking behavior.
Why Do We Abuse Alcohol? Once again, rat models were used to genetically select for alcohol-seeking behavior. This time, the ventral tegmental area (VTA), a group of neurons located close to the midline on the floor of the midbrain, was examined in detail during chronic alcohol consumption by rats. It was found that the projections of the VTA are potentially implicated in the reinforcing effects of drug abuse (9). The VTA in general has been found to be important in a variety of drug dependence and other psychological disorders (9). Again, dopamine plays the major role because the VTA is the origin of the dopaminergic cell bodies of the mesocorticolimbic dopamine system, which is largely involved in the aforementioned reward circuitry (9). Unfortunately, this system is essential not only for negative psychological processes such as addiction, but also for motivation in general. The VTA is also connected to the aforementioned nucleus accumbens, which receives increased dopamine and serotonin activity in the presence of alcohol. Thus, compulsive drug abuse behavior is largely the result of modifications to both the VTA and the nucleus accumbens, which furnish their neurons with a greater ability to produce dopamine and a greater sensitivity to it (9). Because this cycle is both started and propagated by dopamine itself, it is extremely difficult to break. Research also indicates that the neurobiological substrates in the VTA that aid in the reward pathways of alcohol are likely influenced by genetics (9,10). In addition, studies have explained that the withdrawal state
during detoxification from alcohol is a physicochemical reality caused by the body entering a distress cycle due to its inability to restore homeostasis in the absence of the drug. Finally, any subsequent exposure after withdrawal symptoms have passed will reinstate the drug behavior. The neurological structure of the brain is permanently altered by chronic alcohol abuse. There is hope, however, in the form of opioid antagonists such as naltrexone, which has been found to be helpful in the treatment of alcoholism (11). It seems that this drug, which decreases the amount of dopamine released by the brain, is capable not only of quelling the intense cravings felt by recovering alcoholics, but also of decreasing the euphoria experienced upon consumption of alcohol (11,12). There is also the potential to use the malfunctioning serotonin system, which is similarly implicated in alcoholism, both as a target of preventative medicine and as a way of custom-tailoring alcohol recovery strategies to each patient (13). Pharmacological and clinical studies have shown that the 5-HT transporter and the 5-HT1A receptor are both candidate loci of alcohol dependence (13). What this means is that detection of a malfunctioning serotonin allele could provide multiple therapeutic options.
In Conclusion What all of this evidence implies is that no matter how we view alcohol in the context of society or culture, it is still a drug with biochemical implications. Again, I am not advocating for or against the consumption of alcohol. The social inertia would make either argument ineffectual and largely pointless. I am instead hoping to provide a wide range of facts that put the effects of alcohol in a scientific, and thus predictable, context. The bottom line is that regardless of the nuances of weight, age, or gender, ethyl alcohol will be metabolized by the body, and that will have an effect. The liver and kidneys will be shouldered with the responsibility of processing an influx of alcohol, then aldehydes, and then acids. The nucleus accumbens and VTA will communicate in the language of
neurotransmitters, and some individuals will be much more receptive to the neural restructuring that this entails. This increased sensitivity can lead to alcoholism, but there is hope in the form of new drugs and therapies. It is important that we as Dartmouth students understand these facts instead of blindly consuming alcohol as though we were all uniformly equipped to deal with its effects. It is my prediction that until the day comes when social anxiety at every level has been extinguished, alcohol will continue to prevail as the recreational drug of choice. Until that potentially unreachable goal is achieved, it is important to understand what is possibly the most influential substance of all time.
References:
1. J. McMurry, Organic Chemistry, 6th ed. (United States: Thomson, 2004), 587-854.
2. R.J.K. Julkunen, L. Tannenbaum, E. Baraona, C.S. Lieber, Alcohol. 2, 437-441 (1985).
3. K. Yoshimoto, W.J. McBride, L. Lumeng, T.-K. Li, Alcohol. 9, 17-22 (1992).
4. C.M. Steele, R.A. Joseph, Am. Psychol. 45, 921-933 (1990).
5. D. Dawson, K. Reid, Nature. 388, 235 (1997).
6. K.A. Messingham, D.E. Faunce, E.J. Kovacs, Alcohol. 28, 137-149 (2002).
7. W.J. McBride, J.M. Murphy, L. Lumeng, T.-K. Li, Alcohol. 7, 199-205 (1990).
8. R.B. Stewart et al., Alcohol. 10, 1-10 (1993).
9. G.J. Gatto et al., Alcohol. 11, 557-564 (1994).
10. C.R. Cloninger, M. Bohman, S. Sigvardsson, Arch Gen Psychiatry. 38, 861-868 (1981).
11. C.P. O’Brien, L.A. Volpicelli, J.R. Volpicelli, Alcohol. 13, 35-39 (1996).
12. R.D. Myers, S. Borg, R. Mossberg, Alcohol. 3, 383-388 (1986).
13. S. Hammoumi et al., Alcohol. 17, 107-112 (1999).
Physics
Vibrations Surround Us The Science of Music ANDREW ZUREICK ‘13
Dartmouth’s campus teems with music. The bells in Baker Tower chime every hour, Professor Brison hosts an array of performances for East Wheelock residents, and the Hopkins Center brings in esteemed artists from around the world. Many students get their feet wet in music classes while others sing or play in ensembles on campus. Everyone else owns an iPod. Performers learn to master the technique necessary to play their instruments, and listeners grow to prefer certain genres. Yet music lovers often overlook the foundation of music: science. Sounds are governed by principles of physics, from the vibration of strings on a violin to the measurements and dimensions that go into the acoustics of a concert hall or stadium. At the heart of the physics of music is the wave, an energy-carrying disturbance that travels through particles in a given medium. Additionally, the way we hear and process musical sounds requires a complex biological system.
A Brief History The study of sound goes back thousands of years to ancient cultures—Chinese, Japanese, and Egyptian, to name a few—that invented instruments in order to create music (1). Around 550 BC, Pythagoras of Samos invented the monochord, using a soundboard and a single string draped across a stationary bridge and a movable bridge (2). By manipulating the location of the movable bridge, he discovered that when the string length was halved, the pitch produced was an octave higher. He later concluded that small whole-number ratios were the foundation for consonant sounds in strings as well as in volumes of air in pipes and water in vases. The frequency ratios for a major scale are as shown in Table 1.
Scale Degree | Note | Frequency Ratio | Interval with Tonic
1 | C | 1 | Unison
2 | D | 9/8 | Major Second
3 | E | 5/4 | Major Third
4 | F | 4/3 | Perfect Fourth
5 | G | 3/2 | Perfect Fifth
6 | A | 5/3 | Major Sixth
7 | B | 15/8 | Major Seventh
8 | C | 2 | Octave
Image by Andrew Zureick ‘13, DUJS Staff
Table 1: Frequency ratios are from the first note in the scale to the nth scale degree.
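As a quick illustration of Pythagoras's ratios, the sketch below converts Table 1 into frequencies. The 261.63 Hz tonic (middle C) is an assumed example value, not something specified in the article.

```python
from fractions import Fraction

# Just-intonation ratios taken directly from Table 1 (tonic to each degree).
RATIOS = {
    "C": Fraction(1), "D": Fraction(9, 8), "E": Fraction(5, 4),
    "F": Fraction(4, 3), "G": Fraction(3, 2), "A": Fraction(5, 3),
    "B": Fraction(15, 8), "C'": Fraction(2),
}

tonic_hz = 261.63  # middle C; an assumed example tonic
for note, ratio in RATIOS.items():
    print(f"{note:>2}: {float(ratio) * tonic_hz:7.2f} Hz")
```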
Aristotle also studied strings and their vibrations about 200 years later. He observed a relationship between the string’s vibrations and the air, and proposed the idea that each small part of the air struck a neighboring part (3). He also established that a medium was required for sound to travel. Until Galileo Galilei’s new wave of study in experimental acoustics during the sixteenth century, followers of Aristotle did most of the work in this field, including Euclid, who wrote an “Introduction to Harmonics” (1). In his 1687 Principia, Isaac Newton brought mathematical calculus to the study of wave motion and also proposed the speed of sound to be about 1100 ft/s after studying fluid motion and the density of air (1). In 1711, the English trumpeter John Shore invented the tuning fork, a resonator that produces a single, pure tone when struck. In the 1800s, Christian Doppler studied sounds emitted from a moving source and concluded that waves compress when the source moves toward the listener and expand when the source moves away (4). Georg Ohm applied an earlier theorem by Jean Baptiste Joseph Fourier to acoustics, leading to Ohm’s law for sound, which states that tones are composed of different combinations of simple tones of different frequencies (1). The list of modern contributors goes on.
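Doppler's conclusion corresponds to the standard moving-source formula f' = f * v / (v - vs), which is textbook physics rather than anything specific to this article; the speeds below are invented examples.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature (assumed value)

def doppler_observed_hz(f_source: float, v_source: float) -> float:
    """Perceived frequency for a source moving toward (+) or away (-)
    from a stationary listener: f' = f * v / (v - v_source)."""
    return f_source * SPEED_OF_SOUND / (SPEED_OF_SOUND - v_source)

# A 440 Hz source moving toward the listener sounds sharper (compressed
# waves) and flatter when moving away (expanded waves), as Doppler found.
print(doppler_observed_hz(440.0, +20.0))  # ~467 Hz approaching at 20 m/s
print(doppler_observed_hz(440.0, -20.0))  # ~416 Hz receding at 20 m/s
```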
Vibrations Vibrations are small oscillatory disturbances of the particles in a
given body, such as water or a string. Regular vibrations have a defined period, the amount of time it takes to complete a cycle, while irregular vibrations do not, like those created by snare drums or a giant wave crashing on the ocean shore (5). The regular, periodic vibrations have a given frequency, ƒ, in cycles per second.
Waves
When struck, a tuning fork vibrates at a single, particular frequency. The constant movement of the prongs back and forth causes repeated sound impulses, which are really disturbances in the air (6). The prongs strike the air, and the air continues to strike neighboring air molecules. At one instant, some molecules are close together while others are further apart, a phenomenon known as condensation and rarefaction of air (see Figure 1). This pattern’s propagation is a wave, and in one vibration, the molecules move one wavelength, λ. Once the frequency has been measured, the sound wave’s velocity is given by V = ƒλ (1). The more common, sinusoidal depiction of a given wave comes from an oscilloscope, an apparatus that senses the pressure from a wave and translates it into an electric signal. Mathematically modeling a wave’s propagation requires a second-order partial differential equation.
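A one-line consequence of V = ƒλ: given a frequency and the speed of sound, the wavelength follows directly. The 343 m/s room-temperature value is an assumed constant, not given in the article.

```python
SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value

def wavelength_m(frequency_hz: float) -> float:
    """Rearranging V = f * lambda to solve for the wavelength."""
    return SPEED_OF_SOUND / frequency_hz

# Concert A (440 Hz) has a wavelength of roughly 0.78 m in air.
print(wavelength_m(440.0))
```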
Image from “Sound Waves and their Sources,” Encyclopedia Britannica Films, 1933 (6)
Figure 1: Tuning fork vibration: three snapshots within a second.
Instruments produce a fundamental tone, the most audible tone, as well as many additional frequencies above that pitch, known as overtones. The fundamental and the additional overtones form the series of harmonics that give a sound or pitch a certain quality. A note’s unique quality, or timbre, is based on the relative energies of the harmonics. In other words, each note sounds the way it does because its wave is a complex combination of frequencies.
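The idea that timbre comes from the relative energies of the harmonics can be sketched by summing sine waves. The amplitudes below are invented for illustration; a real instrument's harmonic content is far richer.

```python
import math

# A composite wave: fundamental plus overtones with assumed relative
# amplitudes. Changing these amplitudes changes the note's "timbre".
FUNDAMENTAL_HZ = 440.0
HARMONIC_AMPLITUDES = [1.0, 0.5, 0.25, 0.125]  # 1st through 4th harmonics
SAMPLE_RATE = 8000  # samples per second, an arbitrary choice

def sample(t: float) -> float:
    """Pressure at time t: sum of the fundamental and its overtones."""
    return sum(a * math.sin(2 * math.pi * FUNDAMENTAL_HZ * (n + 1) * t)
               for n, a in enumerate(HARMONIC_AMPLITUDES))

wave = [sample(i / SAMPLE_RATE) for i in range(80)]  # 10 ms of samples
print(wave[:5])
```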
Vibrating strings
Musical instruments create sounds through the physical communication of a primary vibrator, a resonant vibrator, and a sound effuser (7). These components provide the initial vibrations, amplify the vibrations, and allow the vibrations to escape, respectively. String instruments like the viola or cello amplify the sounds made by bowing or plucking the strings. The strings are held tightly around the pegs at one end and the tailpiece at the other end. When a string vibrates, the vibrations propagate down to the bridge, which carries them to the soundboard that spans the inside of the wooden body (8). The soundboard amplifies the vibration, and the sound waves emerge through the two f-shaped holes. Marin Mersenne, a sixteenth-century French mathematician and the
“father of acoustics,” devised three laws to calculate the frequency of the fundamental tone produced by a string. Together, they state that the frequency is inversely proportional to the length, proportional to the square root of the tension, and inversely proportional to the square root of the mass per unit length, as shown in Figure 2 (1).
Image created by Andrew Zureick ‘13
Figure 2: Mersenne’s Laws. T, M, and L represent tension, mass/length, and length of the string, respectively.
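Mersenne's three proportionalities combine into the standard string formula f = (1/2L)√(T/μ). The sketch below assumes that combined form; the string parameters are invented, roughly violin-like values.

```python
import math

def fundamental_hz(length_m: float, tension_n: float,
                   mass_per_length: float) -> float:
    """Mersenne's laws in combined form: f = (1/2L) * sqrt(T / mu)."""
    return math.sqrt(tension_n / mass_per_length) / (2 * length_m)

# Invented, roughly violin-like parameters for illustration only.
print(fundamental_hz(length_m=0.325, tension_n=50.0, mass_per_length=5.4e-4))
# Halving the length doubles the frequency (an octave), as Pythagoras found.
print(fundamental_hz(length_m=0.1625, tension_n=50.0, mass_per_length=5.4e-4))
```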
As the string is pulled more tautly on a violin by turning the peg towards the scroll, the pitch increases, and as the musician presses on the string, the vibrating length of the string decreases and the frequency increases. Cellos and basses have longer strings than violins and violas, which is why these larger instruments can play in a lower pitch register. Consequently, a musician can produce the same pitch on different strings; this creates challenges in determining optimal “fingerings” when studying a piece of music (9). While string instruments typically have four or six strings, the piano has 230 (two or three per note). An enormous amount of combined tension, up to 30 tons in a concert grand piano, compensates for the great length of the strings. Since the fundamental frequencies of the piano range from 27 Hz for the lowest A to 4096 Hz for the highest C, it would be impractical to make the A1 (lowest note) strings 150 times longer than the C8 (highest note) strings (8). Instead, both the tension and the length vary for each note. The piano is especially unique in its equal temperament. Ever since the days of Johann Sebastian Bach (1685-1750), the ratio between the frequencies of adjacent keys has been kept equal. This allows pieces to sound pleasing in any key. As a result, the frequencies are not perfectly aligned with the whole-number ratio patterns that characterize “consonance,” but they are only slightly off.
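Equal temperament is easy to make concrete: each of the twelve keys in an octave sits a constant ratio of 2^(1/12) above its neighbor. The sketch assumes the standard 440 Hz concert A reference, which the article does not specify.

```python
SEMITONE = 2 ** (1 / 12)   # equal-temperament ratio between adjacent keys
A4 = 440.0                 # standard concert-pitch reference (assumed)

# Twelve equal ratio steps exactly double the frequency (one octave).
for n in range(13):
    print(f"{n:2d} semitones above A4: {A4 * SEMITONE ** n:8.2f} Hz")

# The equal-tempered fifth is close to, but not exactly, the just 3:2 fifth.
print(A4 * SEMITONE ** 7, A4 * 3 / 2)  # ~659.26 Hz vs. 660.00 Hz
```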
Vibrating air
The vibration of air in woodwind instruments to produce sounds comes in two categories: the edge tones of direct vibrations between the musician and the instrument, as in the flute and piccolo, and the vibration of a reed indirectly causing a sound, as in the clarinet and oboe (4). In both cases, the player supplies the energy to cause a vibration. The molecules are set into random motion when the player blows a note, moving the sound forward within the column. Molecules in a column of air, just as in a string, have a frequency of free vibration and can be excited by matching frequencies. Similar to string instruments, the standing waves produced have a fundamental tone and many overtones. The pitch produced by many wind and brass instruments depends on the embouchure, or shape of the mouth and tongue when creating a note, as well as the keys pressed, both of which alter the size of the column through which the air travels. Brass instruments are set up differently. The trombone has a slide to change the size of the air column: the further out the slide, the lower the note. Trumpets have three valves: the first lowers the pitch by a whole tone, the second by a half tone, and the third by a tone and a half (4). Again, the embouchure allows a trumpeter to create a wide range of notes. Lastly, the human voice relies heavily on vibrating air. The vocal anatomy has three key parts: the lungs for power, the vocal cords to vibrate, and the vocal tract to resonate the sounds. To produce different notes, a singer varies the tension in his or her vocal cords.
Basic acoustics
Many instruments bring together different sounds to produce rich music in an ensemble. All of these waves travel through the space in which they are played, and the acoustical energy of the waves decreases with the square of the distance from their sources. For music created indoors, sound waves either reach a listener directly or after reflecting off of other surfaces and losing some energy to the surface; the nature of the surface affects how much energy is reflected and how much is absorbed. Hard surfaces like marble reflect most of the acoustical energy, and soft surfaces like carpet absorb most of the 33
energy (4). In addition, the flatness or curvature of a surface affects how it reflects sound. When constructing a concert hall for performance, reverberation time—the time it takes for a sound to decay to a millionth of its initial intensity—makes a considerable impact (4). The sound should be powerful and carry, but should not be reflected so strongly as to cause a mesh of auditory confusion. Symphony Hall in Boston, for example, has a reverberation time of 1.8 seconds. Even outside the concert hall, rooms must take reverberation and echoing into account, whether they are small conference rooms in which many voices could be talking at once or large lecture halls designed for a single professor’s voice to carry.
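Two of these quantities can be put into numbers directly: the inverse-square falloff of acoustical energy, and the "decay to a millionth" definition of reverberation time, which is the familiar -60 dB criterion. A minimal sketch, assuming a point source in free space:

```python
import math

def relative_intensity(r1: float, r2: float) -> float:
    """Inverse-square law: intensity falls with the square of distance."""
    return (r1 / r2) ** 2

# Doubling the distance quarters the intensity, a drop of about 6 dB.
ratio = relative_intensity(1.0, 2.0)
print(ratio, 10 * math.log10(ratio), "dB")

# Decay to one-millionth of the initial intensity is 10*log10(1e-6),
# i.e. -60 dB, which is why reverberation time is often called RT60.
print(10 * math.log10(1e-6), "dB")
```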
Human Interface With Music Our eyes and ears pick up only a limited range of frequencies. We can only see the “visible light” section of the electromagnetic spectrum, and we can only hear frequencies between 20 Hz and 20,000 Hz. Yet, within this range of audible sound, the brain can produce an enormous array of responses.
The ear
After their journey through the air, sound waves have to travel through three different media in the regions of our ears before we fully process them: air in the outer ear, solid bone in the middle ear, and the labyrinth of fluid-filled canals of the inner ear (see Figure 3). The initial tube through which sound waves travel is called the ear canal, and it both collects sound and resonates certain frequencies, which can create an “ocean” effect (1). Sound waves then exert pressure upon the very sensitive eardrum, setting it into vibration. The three ossicle bones—the malleus, incus, and stapes—act together as a lever in the middle ear, amplifying the pressure from the eardrum by about 25 times by the time the vibration passes through the oval window and into the inner ear (1). The perilymph-filled inner ear is predominantly composed of the cochlea. The two chambers of the cochlea are separated by the basilar membrane, a small strip of tissue lined with about 30,000 hair cells, each with many
cilia. These hair cells transmit nerve impulses to the brain when they are bent by passing sound waves, converting mechanical wave energy into electrical signal energy (8). Hermann von Helmholtz explained how we recognize different pitches after sound waves propagate through our ears. There are “strings” on the basilar membrane that resonate at many different frequencies—long strings with low tension on one end and short strings with high tension on the other (4). When one of these strings vibrates, it triggers a hair cell to send a nerve impulse because it picks up the frequency from the perilymph fluid. In the early twentieth century, Georg von Békésy observed that a wave moved across the basilar membrane and had a maximum amplitude at a certain point, which is where the hair cells fire and send a message to the brain.
Effects of music on the brain
We can hear sounds because the vibrations are processed through a receiver. Consider the clichéd tree-falling-in-a-forest example: the tree certainly causes vibrations, but sound is associated with how the brain interprets the disturbance that travels through the air (10). Once the vibrations reach the brain, the response of electrical activity can be measured by electroencephalography (EEG). Schaefer et al. describe multiple studies over the last ten years that have used EEG and seen different
electrophysiological responses by the brain due to variance in musical characteristics. These characteristics include “subjective loudness, beat or syncopation, complexity of harmonic structure, melodic events, large interval jumps, novelty, and the level of expectations answered or violated in the harmony, rhythm, timbre, and melody” (11). Some studies have shown that classical music, particularly the works of Wolfgang Amadeus Mozart, has the right combination of characteristics to improve academic performance. This intellectual enhancement is commonly referred to as the “Mozart effect.” In the first study to note the effect, by Rauscher et al. in 1993, subjects listened to either ten minutes of Mozart’s K. 448 Sonata for Two Pianos in D Major, a relaxation tape, or silence, and those who listened to the Mozart had higher performance on various spatial reasoning tasks (12). While all music and sound activates the parts of the brain associated with emotions, a UCLA neurobiologist used MRI imaging on subjects and found that Mozart’s music actually activates other parts of the brain that affect motor skills (13). Some studies have refuted the Mozart effect in the context of IQ testing, perhaps because the Mozart effect involves only temporary stimulation (13). Not everyone has the ability to enjoy music, however. People who suffer from congenital amusia are more or less incapable of discerning different tones.
Image from http://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Ear-anatomy-text-small-en.svg/790px-Ear-anatomy-text-small-en.svg.png
Figure 3: Basic ear anatomy.
Image courtesy of Gavin Huang ‘14
Baker Tower fills the Hanover air with musical vibrations daily.
This tone-deafness results from damage to the temporal lobe of the brain. Congenital amusia is specific to music and does not affect language processing (14). Those with congenital amusia not only have trouble distinguishing intervals, melodies, and other pitch relations, but they also have trouble detecting the natural contour of people’s voices. The biggest hindrance caused by congenital amusia is the inability to recognize songs and other environmental sounds, not to mention the inability to remember them or sing them back. Scientists and musicians seem to view the impact of music on emotions differently. Leonard Bernstein said it best at a Young People’s Concert with the New York Philharmonic many years ago: “We’re going to listen to music that describes emotions, feelings, like pain, happiness, loneliness, anger, love. I guess most music is like that, and the better it is, the more it will make you feel the emotions the composer felt when he was writing” (15). To scientists, the performing ensemble produces an array of sound waves from
its instrumental components—each of which produces one or more pitches with distinct timbres. To artists, the different chords, cadences, and other musical components form patterns that we associate with variable emotions. Pieces written in minor keys will contain minor thirds, the notes of which are in a six-to-five frequency ratio, and these chords often convey sadness. This does not form a simple objective dichotomy, but rather fuels the fire of an ongoing understanding.
Looking ahead
While the physical fundamentals of sound have been well established over thousands of years of study, the neurological effects of music continue to puzzle and excite scientists around the world. Physicians have integrated music into medicine through “music therapy” to ease anxiety and other conditions, visual artists create illustrations using sound through the art of cymatics, and engineers work tirelessly to make the creation of music more accessible and more powerful (16, 17). At the same time, digital music has revolutionized the way we experience music. The iPod can store thousands of MP3 files and has many more capabilities than the 33-1/3 rpm records used just a few decades ago (4). Also, electronic instruments are getting closer and closer to reproducing the authentic sounds and timbres of traditional instruments, especially synthesizers and other pianos preloaded with hundreds or thousands of sounds. Digital capabilities will continue to skyrocket. Science has helped pave the way for a multifaceted, exciting generation of music.
References
1. R. Stephens, A. Bate, Wave Motion and Sound (William Clowes and Sons Ltd, London, 1950).
2. S. Caleon, R. Subramaniam, Physics Education. 42, 173-179 (2007).
3. A. Cheveign, Pitch: Neural Coding and Perception. 24, 169-233 (2005).
4. B. Parker, Good Vibrations: The Physics of Music (The Johns Hopkins University Press, Baltimore, 2009).
5. D. Butler, The Musicians Guide to Perception and Cognition (Schirmer Books, New York, 1992), pp. 15-31.
6. Sound Waves and their Sources, Available at http://www.youtube.com/watch?v=cK26cgqgYA.
7. Physics of the Orchestra, Available at http://www.sasymphony.org/education/ypc0607/ypc1_guide.pdf.
8. J. Jeans, Science & Music (Dover Publications, New York, 1968).
9. S. Sayegh, Computer Music Journal. 13(3), 76-84 (1989).
10. D. Levitin, This is Your Brain on Music (First Plume Printing, New York, 2007).
11. R. Schaefer et al., NeuroImage, in press (Available at http://www.sciencedirect.com/science/article/B6WNP-508PPSJ1/2/00edfacdbc3a4682b7506ea7874ef2a1).
12. F. Rauscher, G. Shaw, K. Ky, Nature. 365, 611 (1993).
13. The Mozart Effect: A Closer Look, Available at http://lrs.ed.uiuc.edu/students/lerch1/edpsy/mozart_effect.html#The%20Mozart%20Effect%20Studies.
14. J. Ayotte, I. Peretz, K. Hyde, Brain. 125, 238-251 (2002).
15. Leonard Bernstein - Tchaikovsky 4, Available at http://www.youtube.com/watch?v=AQ3GpUldYvE&feature=related.
16. H. Jenny, Cymatics: A Study of Wave Phenomena & Vibration (Macromedia Press, USA, 2001).
17. L. Chlan, Heart & Lung: The Journal of Acute and Critical Care. 27(3), 169-176 (1998).
Neuroscience
Science of Daydreaming EMILY STRONSKI ‘13
Unlike what was previously thought, the universal phenomenon of daydreaming is a normal part of our cognitive processes. Daydreaming is defined as “spontaneous, subjective experiences in a no-task, no stimulus, no-response situation…[and] includes unintended thoughts that intrude inadvertently into the execution of intended mental tasks… and undirected ideas in thought sampling during wakefulness” (1). Although a single daydream usually lasts only a few minutes, it is estimated that we spend one-third to one-half of our waking hours daydreaming, although that amount can vary significantly from person to person (2). In contrast to what its name may suggest, daydreaming seems to be quite different from the dreams experienced during sleep. Another interesting fact about daydreaming is that “the seemingly continual stream of consciousness is discontinuous, consisting of a sequence of concatenated, psychophysiological building blocks … that follow each other in fractions of seconds” (1).
Daydreaming is often looked down upon, as John McGrail, a Los Angeles clinical hypnotherapist, explains: “Daydreaming is looked upon negatively because it represents ‘non-doing’ in a society that emphasizes productivity…. We are under constant pressure to do, achieve, produce, and succeed” (3). Sigmund Freud even believed that fantasies were the creations of the unfulfilled, and that daydreaming and fantasy were early signs of mental illness (2). Experts now agree, however, that daydreaming is a normal, and even beneficial, cognitive function—albeit one that is still largely not understood. An area of the brain called the “default network,” which becomes more active as the level of external stimulus decreases, is often considered responsible for daydreaming. The default network mainly includes the medial prefrontal cortex (PFC), the posterior
cingulate cortex/precuneus region, and the temporoparietal junction (4). Neuroimaging studies have offered support for this hypothesis, though only indirectly. These studies demonstrated “correlations between reported frequency of task-unrelated thoughts and default network activation during conditions of low cognitive demand, as well as stronger default network activation during highly practiced compared with novel tasks in people with higher propensity for mind wandering.” A different interpretation of these data, offered by Gilbert et al., argued that “instead of mind wandering, activations in the medial PFC part of the default network may reflect stimulus-related thought such as enhanced watchfulness toward the external environment that is also likely to occur during highly practiced tasks” (4).
Image courtesy of Pechter Photography
One-third to one-half of our waking hours are spent daydreaming. 36
A study by Christoff et al. using functional magnetic resonance imaging (fMRI) found that the executive system of the brain, as well as the areas of the brain at the core of the default network, namely the medial PFC, the posterior cingulate/precuneus, and the posterior temporoparietal cortex, were, in fact, active during daydreaming. They also found that “brain recruitment associated with off-task thinking is most pronounced in the absence of meta-awareness,” meaning when the person is not aware that he or she is daydreaming. The study also clarified: “Although our findings yield strong support to the notion that the medial PFC is involved in mind wandering, they do not specify whether it is involved in stimulus-independent or stimulus-oriented mind wandering, an important question that remains subject for further research” (4). An important finding of the study was the activation of the dorsal anterior cingulate cortex (ACC) and the dorsolateral prefrontal cortex (DLPFC), the two main regions of the executive network of the brain, during mind wandering. The recruitment of the executive system of the brain during daydreaming may help explain why daydreaming “can undermine performance on demanding tasks.” Christoff et al. proposed that this dual activation—of both the default and executive networks—suggested “the presence of conflict inherent to the content of mind wandering. This possibility would also be consistent with observations that the content of mind wandering is closely related to current personal concerns and unresolved matters” (4). The study then suggested a possible implication of the finding that both the executive and default networks of the brain are active during mind wandering. The activation of both networks is similarly seen in both creative thought and “naturalistic film viewing” (4), suggesting that “mind wandering may be part of a larger class of mental phenomena that enable executive processes to occur without diminishing the potential contribution of the default network for creative thought and mental simulation. Although it may undermine our immediate goals, mind wandering may enable the parallel operation of diverse brain areas in the service of distal goals that extend beyond the current task.” (4) In another study, Sayette and Reichle—professors of psychology at the
University of Pittsburgh—and Schooler—a professor of psychology at UC Santa Barbara—found interesting effects of alcohol on mind wandering. The study reported that “a moderate dose of alcohol simultaneously increases mind wandering while reducing the likelihood of noticing that one’s mind has wandered.” In this study, the consumption of alcohol doubled the incidence of daydreaming during a reading task. The study explained that “these findings represent the first demonstration that alcohol disrupts individuals’ meta-awareness of the current contents of thought. Although novel, this conclusion is consistent with prior observations that alcohol inhibits processes related to meta-awareness” (5). An article by M. F. Mason et al. published in Science proposed three possible reasons why the mind may wander at all. First, the authors suggested that perhaps stimulus-independent thought (SIT) “enables individuals to maintain an optimal level of arousal, thereby facilitating performance on mundane tasks.” Second, they suggested that perhaps SIT “lends a sense of coherence to one’s past, present, and future experiences.” Third, they suggested that perhaps “the mind may generate SIT not to attain some extrinsic goal…but simply because it evolved a general ability to divide attention and to manage concurrent mental tasks. Although the thoughts the mind produces when wandering are at times useful, such instances do not prove that the mind wanders because these thoughts are adaptive; on the contrary the mind may wander simply because it can” (6). Although daydreaming is still not fully understood, it is clear that during daydreaming the mind is very active. Marcus Raichle, a neurologist and radiologist at Washington University, sums it up: “When you don’t use a muscle, that muscle really isn’t doing much of anything…. But when your brain is supposedly doing nothing and daydreaming, it’s really doing a tremendous amount. We call it ‘resting state,’ but the brain isn’t resting at all” (7).
References
1. D. Vaitl et al., Psychobiology of altered states of consciousness. Psychological Bulletin. 131, 98-127 (2005).
2. Daydreaming (2001). Available at http://findarticles.com/p/articles/mi_g2699/is_0000/ai_2699000083 (22 May 2010).
3. C. Frank, Why does daydreaming get such a bad rap? (2006). Available at http://www.webmd.com/balance/features/why-doesdaydreaming-get-such-bad-rap (22 May 2010).
4. K. Christoff, A. M. Gordon, J. Smallwood, R. Smith, J. W. Schooler, Experience sampling during fMRI reveals default network and executive system contributions to mind wandering. PNAS. 106, 8719-8724 (2009).
5. M. A. Sayette, E. D. Reichle, J. W. Schooler, Lost in the sauce: the effects of alcohol on mind wandering. Psychological Science. 20, 747-752 (2009).
6. M. F. Mason et al. Wandering minds: the default network and stimulus-independent thought. Science. 315, 393-395 (2007).
7. J. Lehrer, Daydream achiever (2008). Available at http://www.boston.com/bostonglobe/ideas/articles/2008/08/31/daydream_achiever (22 May 2010).
Biology
Turning Waste Into Food Cellulose Digestion Jingna Zhao ‘12
Fiber constitutes an essential element of the human diet. It has been shown to prevent cholesterol absorption and heart disease and to help control diabetes (1). The National Academy of Sciences Institute of Medicine recommends that adult males consume at least 38 grams of soluble fiber per day—the only kind of fiber humans can digest (1). The other, more abundant type of fiber, insoluble fiber, passes through the human digestive system virtually intact and provides no nutritional value. What if humans could digest fiber? Cellulose, the main type of insoluble fiber in the human diet, is also the most abundant organic compound on Earth (2). Almost every plant has cell walls made from cellulose, which consists of thousands of structurally alternating glucose units (Figure 1). This configuration gives cellulose its strength but prevents it from interacting with human enzymes. Cellulose contains just as much energy as starch because both molecules consist of glucose subunits. Currently, that energy can be released only by burning wood and other cellulosic materials. However, if that energy were physiologically available, humans could lower their food consumption and produce much less digestive waste than they currently do.
Image retrieved from http://upload.wikimedia.org/wikipedia/commons/8/87/CelluloseIbeta-from-xtal-2002-CM-3D-balls.png (Accessed 2 Nov 2010).
Figure 1: Structure of cellulose. 38
Image retrieved from http://www.scientificamerican.com/media/gallery/A97EB868-BB61-DEDB-0B40903E73813FEA_1.jpg (Accessed 2 Nov 2010).
Figure 2: The organs of the human digestive system.
The Human Digestive System Disregarding cellulose digestion, human digestion is still a very efficient process (Figure 2). Even before food enters the mouth, salivary glands automatically start secreting enzymes and lubricants to begin the digestive process. Amylase breaks down starches in the mouth into simple sugars, and teeth grind the food into smaller chunks for further digestion. After the food is swallowed, hydrochloric acid and various enzymes work on it in the stomach for two to four hours. During this time, the stomach absorbs glucose, other simple sugars, amino acids, and some fat-soluble substances (3). The mixture of food and enzymes, called chyme, then moves on to the small intestine, where it stays for the next three to six hours. In the small intestine, pancreatic juices and liver secretions digest proteins, fats, and complex carbohydrates. Most of the nutrition from food is absorbed during its journey through over seven meters of small intestine. Next, the large intestines absorb the residual water and electro-
lytes and store the leftover fecal matter. Although the human digestive system is quite efficient, discrepancies among the human population exist concerning what individuals can or cannot digest. For example, an estimated seventy percent of people cannot digest the lactose in milk and other dairy products because their bodies gradually lost the ability to produce lactase (4). Humans can also suffer from various other enzyme or hormone deficiencies that affect digestion and absorption, such as diabetes. Comparative studies show that the human digestive system is much closer to that of herbivores rather than carnivores. Humans have the short and blunted teeth of herbivores and relatively long intestines—about ten times the length of their bodies. The human colon also demonstrates the pouched structure peculiar to herbivores (5). Yet, the human mouth, stomach, and liver can secrete enzymes to digest almost every type of sugar except cellulose, which is essential to a herbivore’s survival. In the case of lactose intolerance, lactase supplements can easily rectify the deficiency, so what rectifies the inability to digest cellulose?
Ruminants and Termites Ruminants—animals such as cattle, goats, sheep, bison, buffalo, deer, and antelope—regurgitate what they eat as cud and chew it again for further digestion (6). Ruminant intestines are very similar to human intestines in their form and function (Figure 3). The key to specialized ruminant digestion lies in the rumen. Ruminants, like humans, also secrete saliva as a primary step in digestion, but unlike humans, they swallow their food first, only to regurgitate it later for chewing. Ruminants have multi-chambered stomachs, and food particles must be made small enough to pass through the reticulum chamber into the rumen chamber. Inside the rumen, special bacteria and protozoa secrete the enzymes necessary to break down the various forms of cellulose for digestion and absorption. Cellulose has many forms, some of which are more complex and harder to break down than others. Some of the microbes living in the rumen, such as Fibrobacter succinogenes, produce the enzyme cellulase, which breaks down the more complex forms of cellulose in straw, while others, such as Ruminococci, produce extracellular cellulase that hydrolyzes the simpler, amorphous type of cellulose (7). Conveniently, cellulose hydrolysis produces several byproducts, such as cellobiose and pentose disaccharides, which are useful to rumen microbes. The reactions also produce other byproducts, such as methane, which is eventually passed out of the ruminant (7). Thus, the microbes and ruminants
live symbiotically so that the microbes produce cellulase to break down cellulose for the ruminants while gaining a food source for their own sustenance. The various microbes within ruminants may hydrolyze certain types of cellulose, but ruminants still cannot eat wood or cotton. Termites, on the other hand, can feed on various types of wood. It was believed for a long time that termites also depended on microorganisms that lived inside their bodies to digest cellulose for them, but research in the late 1990s showed that certain types of termites had the ability to produce enough cellulases and xylanases in the midgut to support their own survival (8). However, other species of termites do not have the capacity to produce enough cellulase independently and must depend on microbes from the domains Archaea, Eubacteria and Eucarya to break down cellulose. Regardless of the various levels of termite independence, there exists a symbiotic relationship between termites and over 400 species of microorganisms, analogous to that of ruminants and their microbes (8). The termite gut is even designed to provide energy-yielding substrates for the microbes (8). Both protists and fungi are attributed to the production of supplementary enzymes, but their specific roles and mechanisms are still being debated and have yet to be fully elucidated, because isolating pure cultures has proven technically difficult. Despite the ubiquity of these microbes and the benefits they bring to ruminants and termites, research has yet to fully elucidate their mechanisms.
Current Technologies People have long been interested in tapping into the energy in cellulose. However, most companies and research groups are focused only on ways to harness that energy as biofuel, not as food. Major research is aimed at converting cellulosic material into ethanol, although that process is still inefficient and requires refinement. Cellulose must first be hydrolyzed into smaller sugar components such as glucose, pentose, or hexose before it can be fermented into bioethanol (9). For example, one method uses acids to hydrolyze cellulose, but this can destroy many of the sugars in the process. Another way to hydrolyze cellulose is by mimicking the microorganisms inside ruminants and termites: bioenergy engineers can use the enzymes produced by microbes to break down cellulose. However, enzymes have biological limitations and are subject to natural feedback inhibition, which poses a problem for industrial manufacturing (9). Other technical barriers to efficient enzymatic hydrolysis include the low specific activity of current commercial enzymes, the high cost of enzyme production, and a lack of understanding of the mechanisms and biochemistry of the enzymes (9). Companies and governments all over the world are eager to invest heavily in research to turn biomass into biofuel, which could bring enormous benefits to the world economy and environment. Biomass is readily available, biodegradable, and sustainable, making it an ideal choice as a source of energy for both developed and developing countries. This could also help reduce the waste problems plaguing society today. The United States produces 180 million tons of municipal waste per year, and about fifty percent of this is cellulosic and could potentially be converted into energy with the right technology (10).
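The cellulose-to-ethanol pipeline described here has a simple theoretical ceiling set by stoichiometry: hydrolysis converts each 162 g anhydroglucose unit of cellulose into 180 g of glucose, and fermentation (C6H12O6 -> 2 C2H5OH + 2 CO2) converts glucose to ethanol at a maximum of about 0.51 g/g. These standard molar-mass figures are assumed, not taken from the article; a minimal sketch:

```python
# Theoretical (100%-efficiency) ethanol yield from a mass of cellulose.
GLUCOSE_PER_CELLULOSE = 180.16 / 162.14    # hydrolysis adds one water per unit
ETHANOL_PER_GLUCOSE = (2 * 46.07) / 180.16  # C6H12O6 -> 2 C2H5OH + 2 CO2

def max_ethanol_kg(cellulose_kg: float) -> float:
    """Upper bound on ethanol mass; real processes fall well short of this."""
    return cellulose_kg * GLUCOSE_PER_CELLULOSE * ETHANOL_PER_GLUCOSE

# One metric ton of cellulosic material, fully hydrolyzed and fermented.
print(max_ethanol_kg(1000.0))  # roughly 568 kg of ethanol, in theory
```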
Figure 3: The ruminant digestive system. Image retrieved from http://courses.bio.indiana.edu/L104-Bonner/F08/images08/L18/Cow2.jpg (Accessed 2 Nov 2010).
The benefits of turning cellulose into biofuel are just as relevant when considering engineering humans to digest cellulose as a food source. Right
now, technology focuses on controlling cellulose hydrolysis and processing in factories, but perhaps in the future humans could serve as the machines for extracting energy from cellulose, especially since the enzymes used to hydrolyze cellulose are hard to isolate in large quantities for industrial use. Termites themselves are tiny creatures, but as a colony they can break down houses and entire structures. A healthy human digestive system already carries an estimated 1 kg of bacteria, so adding a few extra harmless species should not pose a problem (11). Termites and ruminants serve as great examples of how organisms can use microbes effectively. However, the human body would need some adjustments to host such microbes. Our stomach is much too acidic for most microbes to survive. The acid, among other secretions and enzymes, follows the food into the small intestine, where the microbes might end up competing with us for food. By the time the food has reached the large intestine, only the cellulosic material is left for dehydration and, possibly, hydrolysis. However, our large intestine lacks the ability to absorb the sugars that the microbes would produce from hydrolysis. Perhaps another organ could be added to the end of the human gastrointestinal tract to accommodate cellulose-digesting microbes. Modern medicine allows relatively safe inter-species transplantation, but the ideal solution would be to genetically engineer humans to develop such organs themselves, avoiding the complications of surgery and organ transplantation. Genetic engineering even for the purpose of treating disease and illness is still under intense debate, so nonessential pursuits such as cellulose digestion will not be possible until the scientific and medical communities accept genetic engineering as a safe and practical procedure. A simpler solution would be to take supplements similar to those used to treat lactose intolerance. Cellulose broken down in the stomach can be absorbed as glucose, and extracting the right enzymes to work in the human stomach would bypass the problems of supporting microbes inside the human body. Additionally, since the process would occur inside the body, the limitations that posed a problem
for commercial hydrolysis of cellulose would become necessary biological controls. In the case of lactose intolerance, lactase is easily extracted from yeasts such as Kluyveromyces fragilis, so perhaps the easiest solution for cellulose indigestion is to extract the appropriate enzyme from the right microbes (12). As mentioned previously, however, the commercial extraction of such enzymes is not yet possible. This field of human enhancement receives little research attention because companies and funding institutions are much more interested in the lucrative biofuel industry. Consequently, many questions remain unasked and unanswered. For example, what would the removal of cellulose weight from stool do to the process of defecation? What other effects might the microbes have on the human body? How do we deal with the other byproducts of cellulose hydrolysis, such as methane? These questions could later be analyzed through observation. Other mammals have survived many millennia by digesting cellulose with microbes, and since humans are mammals, there is no underlying reason why human bodies cannot be compatible with these organisms. The microbes that currently reside in the human body already produce gases inside the digestive system, ten percent of which is methane (3). Methane production used to be viewed as a problem at cattle ranches and dairy farms, but methane itself is a highly energetic biogas that can be used as fuel. Harnessing it might prove difficult, considering that current social graces do not favor open flatulence even for the sake of renewable energy. However, diets richer in alfalfa and flaxseed have been shown to reduce methane production in cows, which could potentially solve that problem (13).
Conclusion

Vegetation, which is severely lacking in the modern diet, is the major source of insoluble fiber. Vegetables contain many vitamins, nutrients, and soluble fiber, which has numerous health benefits, as mentioned in the introduction. Adding these foods to our diet after adding
cellulose-digesting capabilities could help assuage the obesity epidemic and significantly improve human health. Ultimately, improving human digestion could vastly reduce the waste generated by humans and increase the efficiency of human consumption. We need only better observe and understand these particular microbes to integrate them into our bodies, which are already structurally favorable to such a change. With the successful integration of microbes, we could cut down on food intake by making use of the energy in previously indigestible cellulose, reduce cellulosic waste by turning it into food, ease food shortages by making algae, grass, straw, and even wood edible, and eventually turn human bodies into a source of renewable energy.

References
1. B. Kovacs, Fiber. Available at http://www.medicinenew.com/fiber/article.htm (15 April 2010).
2. Cellulose (2010). Available at http://www.britannica.com/EBchecked/topic/101633/cellulose (17 April 2010).
3. Human Digestive System (2010). Available at http://www.britannica.com/EBchecked/topic/1081754/human-digestive-system (15 April 2010).
4. H. B. Melvin, Pediatrics 118, 1279-1286 (2006).
5. M. R. Mills, Comparative Anatomy of Eating (2009). Available at http://www.vegsource.com/news/2009/11/the-comparative-anatomy-ofeating.html (17 April 2010).
6. D. C. Church, Digestive Physiology and Nutrition of Ruminants (O & B Books, Corvallis, Oregon, 1979).
7. R. L. Baldwin, Modeling Ruminant Digestion and Metabolism (Chapman & Hall, London, UK, 1995).
8. T. Abe, D. E. Bignell, M. Higashi, Eds., Termites: Evolution, Sociality, Symbioses, Ecology (Kluwer Academic Publishers, Dordrecht, Netherlands, 2000).
9. A. Demirbas, Biofuels (Springer-Verlag London Limited, London, UK, 2009).
10. S. Lee, Alternative Fuels (Taylor & Francis, Washington, D.C., 1995).
11. Friendly Bacteria in the Digestive System (2000). Available at http://www.typesofbacteria.co.uk/friendly-bacteria-digestive-system.html (19 April 2010).
12. Lactase (2006). Available at http://www.vitamins-supplements.org/digestive-enzymes/lactase.php (20 April 2010).
13. L. Kaufman, Greening the Herds: A New Diet to Cap Gas (2009). Available at http://www.nytimes.com/2009/06/05/us/05cows.html (20 April 2010).
Engineering
Cost Benefit of Energy Conservation
Development of a Framework to Assess Conservation Measures in Residential Homes

Nozomi Hitomi ‘11
A new framework has been developed to assess and validate energy conservation measures in residential homes. The framework involves modeling and simulating the energy consumption behavior of residential homes along with a cost-benefit analysis of specified energy conservation measures. The framework provides a systematic way to identify the most cost-effective energy conservation measure for a specific building. This paper discusses the development of such a framework and presents a case study using it.
Introduction

The Philadelphia Housing Authority (PHA) was awarded over $90 million in stimulus funds by Congress last year. A series of priority projects is being planned and implemented that focus on improving the energy efficiency of existing buildings, providing more energy-efficient low-income housing, and creating jobs. There is no existing framework, however, that PHA can use to systematically assess and validate the efficacy and benefit-cost ratios of the potential energy conservation measures (ECMs) implemented in these projects. A framework is needed that lets public housing authorities and multifamily residential building owners assess the benefit-cost ratios of various ECMs, especially when uncertainties, such as the price of oil, are considered. Using such a framework, PHA and other multifamily residential building owners could decide how best to invest money in different ECMs. Therefore, in this paper, such a framework is initiated using one of PHA’s development projects as a case study. The goal of this research is to develop an easy-to-use and effective framework, comprising building energy simulation and cost-benefit analysis, to identify the best combination of ECMs.
Framework

Developed framework
A framework consisting of a building energy simulation component and a cost-benefit analysis component is developed here. The building energy simulation component uses EnergyPlus (introduced below) to develop a baseline building energy model from building design data. The energy consumption of the baseline scenario and of scenarios with various proposed ECMs is then simulated. The cost-benefit analysis component then evaluates which ECM has the best benefit-cost ratio, defined as the ratio of the discounted benefits accruing over time to the cost of the initial investment. The cost-benefit analysis is described in more detail below.

EnergyPlus

EnergyPlus is a modeling program developed by the Department of Energy (DOE) that “models heating, cooling, lighting, ventilating, and other energy flows as well as water in buildings” (1). EnergyPlus uses multiple input sources, including the properties and specifications of the building envelope and geometry, information (design data or manufacturer data) about the mechanical and electrical systems, information about the internal heating loads, and local weather data, to simulate the energy behavior of a building (Figure 1).
Figure 1: EnergyPlus inputs.
In this study, Google SketchUp is used to develop the building layout and geometry. SketchUp works with OpenStudio, a plugin developed by the National Renewable Energy Laboratory (NREL), to interface with EnergyPlus (2). The OpenStudio file is accessible through EnergyPlus, where additional information about the construction of the envelope is developed. Construction materials are specified by thickness, roughness, thermal conductivity, density, and specific heat, and are then layered in the correct order to create the respective surfaces. The mechanical system refers to the heating, ventilation, and air conditioning (HVAC) system implemented in the building for heating and cooling. The HVAC system specifications, including equipment efficiency, coefficient of performance, heat recovery effectiveness, energy source, and fan performance, are taken into account, as are the heating and cooling setpoints. The heating and cooling setpoints are defined in schedules that can be set to specified temperatures for specific hours during the day, specific days during the week, and specific days during the year. The electrical system pertains to the lighting in the building. The energy used by lights is determined by two factors: the power of the light bulb and the amount of time the light is on. An hourly schedule, similar to the schedule for the heating and cooling setpoints, is input into EnergyPlus to specify when the lights are on during the day. The internal gain is defined by the amount of heat
generated by appliances or occupants inside the building. Household appliances such as refrigerators, laundry machines, dryers, dishwashers, and computers, as well as occupants, generate heat and release it into the indoor environment. The amount of internal gain determines when the HVAC system turns on and therefore affects the energy usage. In EnergyPlus, internal gains are determined by schedules that can be specified at the hourly level. Local weather data sets for the United States are provided by NREL (3). These weather data sets are series of hourly solar radiation and meteorological elements for a typical year at a specified site. With all of these inputs, EnergyPlus can simulate the energy consumption of the modeled building. EnergyPlus is capable of producing a variety of output files. These include the dry-bulb temperature of each zone; the energy used by the HVAC system, lighting, and electrical equipment for the entire building or for each individual zone; the heat transfer and heat transfer rates of each zone; and the total energy purchased from or sold back to the utilities (if electricity is produced through photovoltaic cells, wind turbines, etc.). The simulation can be set to output values on an hourly, daily, weekly, or monthly basis. Once an EnergyPlus model is developed, a baseline scenario is defined. The baseline scenario is the original state of the building with no ECM implemented. It provides the energy consumption behavior against which the proposed ECMs can be compared to evaluate and assess the improvement.
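The schedule mechanism is easy to illustrate outside of EnergyPlus. The sketch below is plain Python rather than the EnergyPlus input format, and both the 500 W installed load and the 24 hourly fractions are invented for illustration; it shows how a fractional hourly schedule turns installed lighting power into energy use:

```python
# Minimal illustration of a fractional hourly schedule, the same idea
# EnergyPlus uses for lights, equipment, and occupancy. Values are invented.

installed_lighting_w = 500.0  # total installed lighting power (assumed)

# Fraction of installed power in use for each hour 0-23 (assumed profile:
# low overnight, a small morning bump, a peak in the evening).
schedule = [0.05] * 6 + [0.3, 0.5, 0.3, 0.1, 0.1, 0.1,
                         0.1, 0.1, 0.2, 0.4, 0.7, 0.9,
                         0.9, 0.8, 0.6, 0.4, 0.2, 0.1]

# Energy is power times time; with one-hour steps, summing W * fraction
# over 24 hours gives watt-hours, divided by 1000 for kWh.
daily_kwh = sum(installed_lighting_w * f for f in schedule) / 1000.0
print(f"Daily lighting energy: {daily_kwh:.2f} kWh")
```

EnergyPlus applies the same pattern to occupancy and equipment gains, which is why accurate schedules matter as much as accurate equipment ratings.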
Figure 2: The housing unit at 802 Markoe Street.
Cost-benefit analysis
The cost-benefit analysis developed in this project aims to compare different ECMs while considering factors such as initial capital and operating costs, inflation, fuel price escalation, discount rates, and tax credits. With these assumptions, a project balance can be extrapolated over a specified number of years, or the project life, and ultimately a benefit-cost ratio can be calculated. The project balance is calculated by Pk = Pk-1 + Ak, where Pk is the project balance in year k, Pk-1 is the project balance in the previous year, and Ak is the savings incurred during the kth year. The benefit-cost ratio is given by Bc = Db / Cc, where Bc is the benefit-cost ratio, Db is the discounted benefits provided by the project, and Cc is the initial capital cost. A benefit-cost ratio greater than one signifies that investing in a particular ECM will yield greater benefits than keeping the money in a savings account, and the greater the ratio, the greater the benefit of that ECM. Based on the value of the benefit-cost ratio, an informed decision can be made as to which ECM or combination of ECMs is the best investment.
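The two relations above are reconstructed from their variable definitions, as the original equations were lost to the print layout. A minimal Python sketch of that bookkeeping follows; it is one plausible reading, not the author's actual calculation, and it folds the discounting of each year's savings into Ak (inflation is omitted for simplicity):

```python
# Sketch of the article's cost-benefit bookkeeping (reconstructed from the
# text). Each year's fuel savings escalate with fuel prices and are
# discounted back to present value; the project balance accumulates them
# against the initial capital cost.

def cost_benefit(capital_cost, annual_savings, years, escalation, discount):
    """Return (final project balance, benefit-cost ratio Bc = Db / Cc)."""
    balance = -capital_cost      # P0: the initial investment
    discounted_benefits = 0.0    # Db: present value of all savings
    for k in range(1, years + 1):
        nominal = annual_savings * (1 + escalation) ** (k - 1)
        a_k = nominal / (1 + discount) ** k  # discounted savings in year k
        balance += a_k                       # Pk = Pk-1 + Ak
        discounted_benefits += a_k
    return balance, discounted_benefits / capital_cost
```

Under this reading the project balance is simply the running sum of discounted savings net of the initial cost; a variant that accrues interest on the outstanding balance (Pk = Pk-1(1 + d) + Ak) would also be consistent with the variable definitions and with the curve shapes reported below.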
Figure 3: Lighting, occupancy, and equipment schedules.
Case study
A case study using the developed framework is conducted on a housing unit located at 802 Markoe Street in West Philadelphia. The 802 unit is a row home that has two shared walls with the neighboring row homes, 10 windows of varying sizes, a unitary HVAC system, two floors above the ground floor, and a basement (Figure 2).

The base case scenario for this case study incorporates single-glazed windows, wall insulation with an R value of 13 h·ft²·°F/Btu, and roof insulation with an R value of 38 h·ft²·°F/Btu. The two shared walls are modeled as adiabatic under the assumption that the neighbors have similar heating and cooling setpoints. The local weather file is that of Philadelphia International Airport, obtained through NREL (3). The schedules for the HVAC system, electrical equipment, occupancy, and lighting are all based on a benchmark building developed by the DOE for midrise apartments in Chicago, shown in Figure 3 and Figure 4 (3). Figure 3 shows the percent usage per hour of the total capacity of the building’s respective systems. Figure 4 shows the heating and cooling setpoint schedules used for both weekdays and weekends.

Figure 4: Heating and cooling setpoint schedules.
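Since the insulation ECM introduced below works by raising envelope R values, a quick steady-state conduction estimate (Q = A·ΔT/R) shows why such upgrades yield diminishing returns. The facade area and design temperature difference in this sketch are assumed for illustration, not taken from the model:

```python
# Steady-state conduction through the envelope: Q = A * dT / R, so heat
# flow is inversely proportional to R-value. Area and temperature
# difference below are assumed, not taken from the EnergyPlus model.

def heat_loss_btu_per_h(area_ft2, delta_t_f, r_value):
    """Heat flow (Btu/h) through a surface of the given R-value (h.ft2.F/Btu)."""
    return area_ft2 * delta_t_f / r_value

wall_area = 800.0  # front + back facades only, ft^2 (assumed)
delta_t = 40.0     # indoor-outdoor design temperature difference, F (assumed)

for r in (13, 21):
    q = heat_loss_btu_per_h(wall_area, delta_t, r)
    print(f"R-{r}: {q:.0f} Btu/h")
# R-13 -> ~2462 Btu/h, R-21 -> ~1524 Btu/h: roughly a 38% cut in conduction
# loss, but only over the un-shared facades, which is one reason the
# simulated savings from this ECM turn out to be modest.
```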
Two ECM scenarios are simulated: 1) upgrading the single-glazed windows to double-glazed windows, and 2) increasing the R value of the insulation from 13 to 21 h·ft²·°F/Btu for the external walls and from 38 to 44 h·ft²·°F/Btu for the roof. The benefit-cost analysis is conducted with the assumptions for inflation, fuel price escalation, discount rate, and tax credits shown in Table 1. The inflation, escalation, and discount rates are assumed to be uniform. The inflation rate is an average of the consumer price index (CPI) over the years 1999 to 2009 (4). The fuel escalation rate is obtained from the US Bureau of Labor Statistics, Mid-Atlantic Information Office. The discount rate is an average of the primary credit rates from January 2003 to February 2010 (5). The tax credit rate is set at 30% of the product cost, up to a maximum of $1,500 (6). Capital costs are obtained from manufacturer and retailer websites (7-9).

Figure 5: Building electricity usage comparing window types.

Table 1: Assumed rates.
Fuel Escalation (%): 1.9
Federal Grant (%): 30
Inflation (%): 2.53
Discount Rate (%): 3.59
Initial Fuel Price ($/kWh): 0.159
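Plugging the Table 1 rates into the cost_benefit sketch above, with the window ECM's capital cost from Table 2 and a first-year saving chosen purely as a placeholder (the article derives the real figure from the EnergyPlus runs), illustrates the calculation:

```python
balance, bcr = cost_benefit(
    capital_cost=1890 * (1 - 0.30),  # Table 2 window cost less the 30% federal grant
    annual_savings=185.0,            # placeholder first-year saving in dollars
    years=30,
    escalation=0.019,                # fuel escalation rate, Table 1
    discount=0.0359)                 # discount rate, Table 1
print(f"30-year balance: ${balance:,.0f}, benefit-cost ratio: {bcr:.2f}")
# With this placeholder saving the ratio lands near the article's reported
# 3.20; the actual savings come from the simulated electricity use (Figure 5).
```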
Results of the case study
The energy consumption over a one-year period of the simulated building is shown in Figure 5 and Figure 6 for the base case and for the scenarios using the two ECMs. Upgrading the windows produces a significant decrease in the building’s electricity usage, especially during the winter months. During the transition seasons of spring and autumn, however, the double-glazed windows show increased electricity usage. Upgrading the insulation has the same qualitative effect on electricity usage as upgrading the windows, but a much smaller one, because of the relatively small increase in R values and the limited area over which the insulation is improved. Based on the simulated energy use, the benefit-cost ratios for both ECMs are analyzed. Figure 7 and Figure 8 show the projected balance of implementing the two ECMs over a 30-year period. Figure 7 shows that double-glazed windows have a payback period of just over 10 years, whereas Figure 8 shows that the added insulation does not pay for itself within a 30-year period. The capital costs, discounted benefits, savings earned over the 30-year period, and benefit-cost ratios are shown in Table 2.
Figure 6: Building electricity usage comparing insulation types.
Figure 7: Project balance for improved windows.
Conclusion
A framework that incorporates energy consumption simulations and cost-benefit analysis was created to better compare various ECMs. A building is modeled in EnergyPlus using a variety of inputs including information on local weather data, electrical and mechanical systems, internal heat gains, and building envelope and geometry. The behavior of several ECMs can be simulated and then compared using a cost-benefit analysis to make the most informed decision on which ECM is the most cost-effective.
Figure 8: Project balance for improved insulation.
Table 2: Cost-benefit results.
                                  Improved Windows    Improved Insulation
Capital Costs                     $1,890              $2,112
Discounted Benefits               $4,230              $788
Project Balance After 30 Years    $8,374              -$1,988
Benefit-Cost Ratio                3.20                0.53
A case study was conducted on one of PHA’s development projects, located at 802 Markoe Street in West Philadelphia. This housing unit is a row home that shares two of its facades with the neighboring row homes, has three floors and a basement, and has a unitary HVAC system. The two ECMs simulated in the case study were upgrading the single-glazed windows to double-glazed windows and improving the insulation in the walls from R-13 to R-21 and in the roof from R-38 to R-44. In this case study, installing double-glazed windows was cost-effective because the benefit-cost ratio was significantly greater than one. Improving the insulation, on the other hand, was not a cost-effective ECM: over the 30-year life span, its benefit-cost ratio was significantly less than one, which means it would have been more beneficial to invest that money elsewhere. The low benefit-cost ratio was due to four factors: the relatively high cost of replacing old insulation, the slight increase in R values, the small area over which the insulation is replaced, and the relatively low cost of energy. The high capital cost was a disincentive to installing new insulation, and the slight increase in R values resulted in only marginal benefits. Also, because the housing unit is a row home, only the front and back facades and the roof required insulation; the other two walls are shared and modeled as adiabatic, so the total effect of the improved insulation was reduced. Finally, the low energy costs made improving the insulation a cost-ineffective solution. Figure 8 shows that only after the 26th year does the trend reverse and the project balance begin to climb, as the annual savings finally outweigh the carrying cost of the initial investment and produce a positive annual cash flow. The comparison between various ECMs becomes easier when energy consumption simulations are combined with cost-benefit analyses. The efficacy of each ECM can be assessed and evaluated through such a framework. Many ECM scenarios can be modeled, simulated, and analyzed with relative ease once a baseline model is constructed. The most involved part of the process is acquiring the data and specifications for the building materials, schedules, geometry, systems, and equipment. All of this input data must be as accurate and detailed as possible for the most precise results.
Future work

The energy simulation model used for the case study can be further improved once more manufacturer data are available. In addition, once the building is physically built, measurements of the actual energy usage can be acquired to calibrate the model and further improve its accuracy. A rigorously calibrated model will yield the best available data from which an informed decision can be made. The ultimate goal of this project is to extend the framework to a larger scale, for other housing authorities nationwide or, more generally, for building contractors.
Acknowledgements
This research was funded and supported by the Philadelphia Housing Authority and the National Science Foundation under Grant No. EEC-0851827. The author would also like to thank Liam Hendricken and Jared Langevin for their assistance with the EnergyPlus models and the cost-benefit analysis.

References
1. EERE, Commercial Building Initiative (2010). Available at http://www1.eere.energy.gov/buildings/commercial_initiative/new_construction.html (17 August 2010).
2. Google, Google SketchUp (2010). Available at http://sketchup.google.com/intl/en/ (10 July 2010).
3. EERE, EnergyPlus Energy Simulation Software (2010). Available at http://apps1.eere.energy.gov/buildings/energyplus/ (20 July 2010).
4. US Bureau of Labor Statistics, Consumer Price Index (2010). Available at http://www.bls.gov/cpi/ (16 August 2010).
5. Federal Reserve Bank, Historical Discount Rates (2010). Available at http://frbdiscountwindow.org (16 August 2010).
6. Energy Star, Federal Tax Credits for Consumer Energy Efficiency (2010). (16 August 2010).
7. Lowes (2010). Available at www.lowes.com (16 August 2010).
8. Biobased, Biobased Insulation (2010). Available at http://www.biobased.net/ (16 August 2010).
9. Window World, Replacement Windows (2010). Available at http://www.windowworldphiladelphia.com/ (16 August 2010).
Article Submission
DUJS
What are we looking for?

The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:
Research
This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis and concluding remarks. The intended audience can be expected to have interest and general knowledge of that particular discipline.
Review
A review article is typically geared towards a more general audience, and explores an area of scientific study (e.g. methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any sort of personal experimentation by the author. A good example could be a research paper written for class.
Features (Reflection/Letter/Essay or Editorial)
Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.
Guidelines:
1. The length of the article must be 3,000 words or less.
2. If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
3. Any co-authors of the paper must approve of the submission to the DUJS. It is your responsibility to contact the co-authors.
4. Any references and citations used must follow the Science Magazine format.
5. If you have chemical structures in your article, please take note of the American Chemical Society (ACS)’s specifications on the diagrams.
For more examples of these details and specifications, please see our website: http://dujs.dartmouth.edu For information on citing and references, please see: http://dujs.dartmouth.edu/dujs-styleguide Specifically, please see Science Magazine’s website on references: http://www.sciencemag.org/feature/contribinfo/prep/res/refs.shtml
DUJS Submission Form

Statement from student submitting the article:
Name: __________________
Year: ______
Faculty Advisor: _____________________
E-mail: __________________ Phone: __________________
Department the research was performed in: __________________
Title of the submitted article: ______________________________
Length of the article: ____________
Program which funded/supported the research (please check the appropriate line):
__ The Women in Science Program (WISP)
__ Presidential Scholar
__ Dartmouth Class (e.g. Chem 63) - please list class ______________________
__ Thesis Research
__ Other (please specify): ______________________
Statement from the Faculty Advisor:
Student: ________________________ Article title: _________________________
I give permission for this article to be published in the Dartmouth Undergraduate Journal of Science:
Signature: _____________________________ Date: ______________________________
Note: The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.
Please answer the following questions about the article in question. When you are finished, send this form to HB 6225 or blitz it to “DUJS.”
1. Please comment on the quality of the research presented:
2. Please comment on the quality of the product:
3. Please check the most appropriate choice, based on your overall opinion of the submission:
__ I strongly endorse this article for publication
__ I endorse this article for publication
__ I neither endorse nor oppose the publication of this article
__ I oppose the publication of this article
Write. Edit. Submit. Design.