UMass PHYSICS
Department of Physics, University of Massachusetts
December 19, 2008
ATLAS: LORD OF THE RING
Sam Bingham and Matthew Mirigian, Amherst
Some people are concerned that new experiments at the world's largest particle physics laboratory could have some disastrous consequences. Perhaps it is due to the talk of high energy collisions possibly resulting in black holes. The Large Hadron Collider (LHC) will be capable of smashing protons at the highest energies seen in a laboratory setting. The collider's major ring, where the protons will zoom around at 99.999999 percent of the speed of light, is a circular track 17 miles in circumference and is located underground straddling the Franco-Swiss border near Geneva. [p. 9]
A titan under construction
DARK ENERGY AND MATTER
Where Did (Just About) Everything Go?
Lorenzo Sorbo, our local cosmologist
Sebastian Fischetti and Rob Pierce, Amherst
A couple walking down a street late at night notice a man, obviously drunk, searching for something under a streetlight. When asked about his behavior, the man replies, "I lost my keys." "Where?" the couple ask. "In the park." "Then why are you looking for them here?" "Because here I can see."
This story describes the manner of research of a cosmologist like Lorenzo Sorbo, a Professor of Physics at UMass Amherst. Because many ideas in cosmology, like dark energy, are still very obscure, sometimes cosmologists can only research within the known realm of physics before making progress on more mysterious topics. [p. 6]
LIGO AND BOREXINO
Making Waves at UMass
Laura Cadonati brings a revolutionary perspective on the cosmos to Amherst
Laura Cadonati
Paul Hughes & Daniel Rogers, Amherst
Albert Einstein is a household name. Most people know that E = mc², and that funny things happen to space and time at speeds near that of light. A less commonly known prediction of relativity, however, is the existence of gravitational waves, which have yet to be directly observed. Since the cornerstone of scientific progress is the verification of such predictions, this is a problem for Einstein and his theory. Fortunately for him, there is LIGO (the Laser Interferometer Gravitational-Wave Observatory), which hopes to finally detect gravitational waves. The project is a collaborative effort of about 500 scientists from across the country. One of them, Dr. Laura Cadonati, has helped to start a gravitational waves research group in the UMass Amherst Physics department. [p. 12]
Boone & Rines: Fifth force – 2
Kerrigan: Future physics – 4
O'Donnell: The electron: a dipole? – 5
Fischetti & Pierce: Dark cosmology – 6
Parker: Universe expanding – 8
Bingham & Mirigian: God particle – 9
Deegan & MacLellan: Atom smasher – 10
Hughes & Rogers: Gravity waves – 12
Fratus & Lund: Neutrino oscillations – 14
Drake: Complexity – 16
Emma: Food nanotech – 17
Cervo: Cell imaging – 18
Mortsolf: Molecule flashlight – 20
Kiriakopoulos: Single molecules – 19
Herbert: New matter – 22
Lally: Thin-film buckling – 24
“A student will never do all that he is capable of doing if he is never required to do that which he cannot do.” —Herbert Spencer
THE FIFTH FORCE
Rules of Attraction
How a simple experiment challenged centuries of physical theory
Sam Boone and Rich Rines, Amherst
In Northfield, Massachusetts, 935 feet above sea level on Northfield Mountain, engineers in the 1970s created a massive lake (pictured). The mountain and the area surrounding the lake serve as a public recreational area for the surrounding towns. At first glance, the lake may not seem unusual (save for its artificial cement edges), but with an extended stay, visitors would notice an overwhelming peculiarity: the height of the water in the lake is in dramatic, constant fluctuation. Deep underground, below the surface of the lake, water turbines are responsible for continuously draining and refilling the lake. When there is an excess of electricity in the area, the pumps use it to fill the reservoir with water from the Connecticut River. This water is then released from the lake through large generators in times of electrical demand. Yet what is most peculiar about this lake is not its intrinsically unusual nature, but a single physical experiment completed there in 1991.

Physicists have long characterized interaction in terms of four fundamental forces. Gravity keeps us on the ground. Electromagnetism, being much stronger than gravity, keeps atomic structures rigid, so that we don't fall through that ground. The strong nuclear force, stronger still, holds those atoms themselves together. The weak nuclear force, though less obvious in everyday experience, is responsible for the nuclear activity that makes the sun burn. Without these forces, the infrastructure of our universe would collapse, leaving behind a chaotic and intangible universe that we cannot begin to imagine. Each of these forces is defined by its particular "source," a property of matter which is attracted or repelled. Electromagnetism, for example, is a product of the "electric charge."
Gravity, by contrast, is solely a function of an object's mass. In this way, all objects (including ourselves) experience a small attraction toward one another, which is greater the closer together the objects are. This is the basis of Galileo's famous 17th-century claim: that all objects, without the effects of air resistance, fall to the Earth with the same acceleration. This principle is known as "weak equivalence," and it has been tested with high precision many, many times. Years later, Sir Isaac Newton used it as the basis for his gravitational explanation of planetary orbits.
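To put weak equivalence in symbols (a standard textbook statement, not something specific to the Northfield experiment): Newton's law of gravitation, combined with the assumption that gravitational and inertial mass are equal, gives an acceleration that does not depend on what is falling,
\[
F = \frac{G M m_{\mathrm{grav}}}{r^2}, \qquad a = \frac{F}{m_{\mathrm{inert}}} = \left(\frac{m_{\mathrm{grav}}}{m_{\mathrm{inert}}}\right)\frac{G M}{r^2}.
\]
If m_grav = m_inert for every material, the ratio in parentheses is 1 and every object falls with the same acceleration GM/r², regardless of composition; a fifth force coupled to something other than mass would spoil exactly this cancellation.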
But late in the 1970s, three centuries after the times of Galileo and Newton, physicists began questioning their fundamental assertions. Motivated by certain sub-atomic phenomena that could not be explained by any existing physical law, they began to suspect yet another, fifth fundamental force, with its own unique "source." As this source would presumably be different for different kinds of materials, different materials would fall to Earth with slightly different accelerations. This would mean an end for weak equivalence. And being much weaker and less prevalent than even gravity, this force could have gone undetected in any previous experiment.

It was this new force that Paul A. Nakroshis, as part of his Ph.D. thesis at the University of Massachusetts under the direction of Professor Bob Krotkov, began to search for experimentally on the side of Northfield Mountain, in the Metropolitan District Commission (MDC) building. Nakroshis saw that the Northfield Reservoir provided a perfect testing site for a possible fifth force. All objects are, to some small extent, attracted to each other via gravity. A fifth force would, presumably, provide the same kind of universal interaction between objects. However, unlike gravity, different objects of the same mass could be either attracted to or repelled by one another, and the strength of this attraction or repulsion would also vary between different materials. Nakroshis used this as the basis for his experiment.

The changing volume of water in the reservoir provided a constantly changing source of attraction for nearby objects. When the reservoir is full, objects suspended nearby should have a measurable, however minuscule, attraction toward the great water mass. This attraction diminishes as the water is emptied from the lake. Under the effects of gravity alone, this attraction would be identical for all objects of the same mass. With the addition of a fifth force that depends on some other property of the material, however, the attraction of different kinds of materials toward the lake would be slightly different. This difference is readily tested: hang two different materials of equal mass in a balance once the lake has been drained, then let the lake refill; the balance would "tip" so that the object more attracted to the water could get closer.

Nakroshis used a variant of this procedure in his experiment. He assumed, as did most fifth-force research at the time, that the source of the force was an atomic property known as baryon number. This property varies slightly between different kinds of atoms, and therefore between different kinds of materials. The two materials he compared were copper and polyethylene, which are known to have very different baryon numbers. He balanced these materials on a metal rod which was allowed to oscillate, much like a horizontal version of a clock's pendulum, in an arrangement known as a torsion pendulum. Just as the speed at which a clock's pendulum swings changes when the pendulum is made heavier or lighter, the attraction between each material and the changing amount of water would, in theory, alter how fast the torsion pendulum oscillates.

Nakroshis' experiment was one of many searching for a violation of the weak equivalence principle; the physics community at the time was captivated by the possibility of a new, undiscovered fundamental force. Nakroshis' test differed from the majority of fifth-force experiments, however, in its use of such a large amount of fluctuating material, which the unique Northfield Reservoir provided. Like the other fifth-force experiments of the era, Nakroshis' results were inconclusive. As such results came in, physicists increasingly backed away from the idea of a fifth force. A little more than a decade after its proposal, the fifth-force hypothesis was foundering, and eventually, as no positive, reproducible experimental results were reported, the force was largely considered dead. In this way, weak equivalence and the ideas of Newton and Galileo were once again considered accurate. It was not until the rise and fall of the fifth force, three centuries and thousands of experiments after their conception, that these laws had been tested and confirmed so meticulously.
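For context, the fifth-force searches of that era (following Fischbach and collaborators) were usually framed as a small Yukawa-type correction to Newtonian gravity. This is the standard parameterization from that literature, not a formula taken from Nakroshis' thesis itself:
\[
V(r) = -\,\frac{G\,m_1 m_2}{r}\Bigl(1 + \alpha\,e^{-r/\lambda}\Bigr),
\]
where α sets the strength of the new interaction relative to gravity and λ its range. In baryon-number models of the kind described above, α for a pair of test bodies depends on their composition, which is why comparing copper against polyethylene next to a reservoir whose mass changes by a known amount is such a sensitive test.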
Left to Right: Nakroshis, Sakai and Krotkov with the experimental apparatus.
CONTEMPORARY THEORY
A Ripening Reality
Exploring two controversial new physical theories
Chris Kerrigan, Amherst

A chapter in contemporary physical theory may be drawing to a close. Physicists have long compared reality to an onion, with layers of truth to be explored and later discarded when further research reveals a "deeper" or more fundamental existence. Right now, many of the world's top physicists have explored current theories thoroughly enough to decide that it's time to peel back a new layer.

The current description of reality agreed upon by most physicists is called the Standard Model. This theory depends on an idea everyone has heard, that of the "fundamental building blocks" of existence. The Standard Model describes a few fundamental interactions, or forces, that occur in nature, and the elementary particles that take part in these interactions. When all is said and done, this theory gives us 26 constants (unchanging numbers that describe various quantities) which describe reality; the mass of the electron is one of these, for example. Physicists are now ready to take the theory a step further and discover what created these particles and what determines these constants.

I was recently able to sit down with John Donoghue, Professor of Physics at the University of Massachusetts Amherst, who quite literally wrote the book on the Standard Model (it is entitled Dynamics of the Standard Model), and discuss the future of physics. His recent claim to fame is a theory of the "multiverse." The theory is as surprising as the name suggests. The idea is that the 26 constants brought forth by the Standard Model could have been assigned at random by nature. Not only that, but in different "domains" of our universe, which is perhaps larger than we expect, there exist places where the 26 constants were assigned differently. This means that our entire universe consists of separated, smaller universes that each have their own set of constants and therefore behave very differently. Perhaps in one universe the mass of the electron turned out to be very great, and so gravity made it collapse very quickly. Perhaps in another the constants worked in such a way that there was only one type of particle, and so nothing ever happened. The implications of this theory are that we live in one of the few possible universes that could exist as something other than a clump of particles, and that this universe only came about through the trial-and-error process of nature playing with 26 constants. It is a lot to stomach to hear this about the universe we have come to know and love, but it may also be a step closer to the truth.

A different theory suggested by Donoghue is even more abstract, but pushes further toward the center of the onion.
It involves the concept of "emergence," the way patterns arise out of simple interactions, and has been called the opposite of the popular String Theory. Though easy to define, the idea of emergence is hard to properly conceptualize. The classic example is to look at the notion of a wave. We can have waves in water, sound waves in the air, electromagnetic waves, etc., yet there is no such thing as a "water-wave particle," or a "sound particle," or an "electromagnetic particle." The idea of the wave is nothing more than a convenient way to describe what happens on a larger scale when small particles interact with each other. Regular water molecules rub against each other in a way that creates "wave" motion. Regular particles in the air bounce off of one another to create alternating regions of high and low pressure that we call a "sound wave." Emergence, then, is the concept that embodies this. It is the means by which our concepts for describing macroscopic phenomena come about.

So what does emergence have to do with reality as we know it? Donoghue is beginning to lean toward the answer "everything." The idea is that many of the things we describe are emergent properties of more fundamental interactions (for example, waves happen even though, in a sense, there are no real waves), so why not take it a step further (as is often the physicist's wont)? Knowing that phenomena occur based on the interactions between the particles described by the Standard Model, we may riskily say that the Standard Model itself is nothing more than an emergent phenomenon of something even more fundamental. That is, elementary particles may seem to exist, but are in fact just a convenient way to describe some "deeper" interactions. When asked about the nature of these interactions, Donoghue replied, "We can't go there yet."

Indeed, the idea of the Standard Model as an emergent phenomenon rather than a foundation is controversial and only discussed by a narrow range of physicists: those who are both well-versed enough to explore it and willing to perhaps sacrifice credibility within the more conservative scientific community to explore an avenue so novel and abstract. One such physicist is 1999 Nobel Prize in Physics winner Gerardus 't Hooft, who is beginning to publish papers which attempt to shed light on this bold concept. Clearly some of the top physics minds in the world are starting to take the idea of emergence very seriously. Although Donoghue admits that the few who consider this theory are "further along thinking of ways to test the theory than thinking of the theory itself," it is clear that physics as we know it is on the verge of change. Which direction it will go is up in the air, but one thing seems true: reality as we know it now is ripe to be peeled away.
PARTICLE PHYSICS
Something Smaller than an Electron?
Answering a Fundamental Question of Physics
David Kawall
Andrew O'Donnell, Amherst

Science is always trying to push the frontiers of our knowledge about the world. One question that has always been pursued is: what is the world made of? We already know there is a lot beyond just the protons and electrons that everyone learns about in chemistry class, but for the electron, no one currently knows for certain whether there is anything beyond it. Here at UMass, Professor David Kawall is trying to answer this question.

The experiment Professor Kawall is working on is called the Electron Electric Dipole Moment, or EDM, experiment. About a dozen of these experiments are being set up across the United States, all racing to improve sensitivities. An electric dipole moment is formed when there is a separation of positive and negative charges. If a dipole is found for the electron, it would be evidence that there is something smaller than it. EDM experiments have been going on for the past 50 years, but the sensitivities are not yet good enough, and Professor Kawall is trying to improve them. Surprisingly, no improvements have been made to the measurements in the past 5 years.

For subatomic physics we have a theory called the Standard Model that is used to describe the world below protons and electrons. Evidence of an electron dipole moment would imply that there is physics beyond the Standard Model. One of the current theories of physics beyond the Standard Model is called supersymmetry, and it is a key ingredient in many String Theory models. Professor Kawall is hoping that within the next year or two he will complete the upgrade to the experiment and increase the sensitivity of the EDM search.

Professor Kawall joined the Physics Department in the Fall of 2005 and has been working on this experiment, searching for the electron dipole moment, for over 3.5 years now.
With some startup money from UMass, Professor Kawall hopes that his proposed changes to past methods will increase the sensitivity by a factor of 100. The experimental setup of the EDM goes as follows: a powerful laser evaporates a lead oxide sample inside a vacuum with a buffer of neon gas at 14 Kelvin (about −434 °F). Then, magnetic and electric fields are applied to the chamber and the energy shifts of the system are measured. Lead oxide is used in this experiment instead of bare electrons because lead oxide has a big dipole moment and allows for a more sensitive measurement. If certain energy shifts are found, that would give evidence for an electron dipole moment. Even if no dipole moment is found, an improved limit on the dipole moment might prove or disprove theories beyond the Standard Model.

The setup of this experiment has been a long and difficult process, but a very rewarding one. Despite the experiment's potential to answer some fundamental questions of physics, funding has been a little tight. Instead of ordering state-of-the-art equipment, bargains and upgrades to used equipment are needed to reach the required levels of sensitivity. Professor Kawall says, "One interesting thing is that a lot of used equipment can be found online for a lot cheaper. For instance, some of the lasers used in this experiment were once used in cosmetic surgery to help fix people's veins." The whole process of setting up the system to measure the energy has been quite challenging. Professor Kawall says that "the wave meters used to measure energy for this experiment are just as good as a state-of-the-art system, despite the fact that they have been created from old parts." Even with all the hard work being put into this experiment, Professor Kawall has found it quite rewarding: "One of the best parts of experimental physics is trying out new things, planning them out, executing the plans, and seeing them work out."

In the Spring of 2008, Professor Kawall made yield measurements for the ablation of lead oxide. This was a landmark of his experimental setup and has left Professor Kawall optimistic that improved sensitivities will be reached once the setup is finished. One of Professor Kawall's proposed ways to improve the sensitivity of the EDM experiment is to increase the number of lead oxide molecules in the vacuum chamber. This was done by using a laser with a longer pulse duration over a larger surface area, allowing a larger flux of lead oxide molecules to be ablated. As a result, 100 to 1000 times more lead oxide molecules have been placed into the vacuum chamber than in previous experiments. The final step for Professor Kawall is to set up the refrigeration system. When Professor Kawall was a postdoc at Yale, lead oxide molecules were ablated and then measurements were taken; his proposed improvement is to add a buffer gas of neon to cool down the lead oxide molecules. This will allow for enhanced resolution of the energy shifts during data taking. If all goes according to plan, this experiment will have a large impact on the field of physics.
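To make the "energy shifts" concrete, here is the generic relation used in this class of experiments (a standard textbook form, not the specific numbers of Kawall's setup). An electron EDM d_e interacting with the large effective electric field E_eff experienced inside the polarized molecule shifts the energy by
\[
U = -\,\vec{d}_e \cdot \vec{E}_{\mathrm{eff}}, \qquad h\,\Delta\nu = 2\,d_e E_{\mathrm{eff}},
\]
so reversing the field flips the sign of the shift, and the tiny frequency difference Δν between the two configurations is what the spectroscopy is ultimately after. Because E_eff inside a heavy polar molecule like lead oxide is enormous compared with any field one could apply directly to a free electron, the molecule effectively acts as an amplifier for d_e.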
COSMOLOGY
Looking Where We Can See
Lorenzo Sorbo: Our Local Cosmologist
Lorenzo Sorbo
Sebastian Fischetti and Rob Pierce, Amherst

A couple walking down a street late at night notice a man, obviously drunk, searching for something under a streetlight. When asked about his behavior, the man replies, "I lost my keys." "Where?" the couple ask. "In the park." "Then why are you looking for them here?" "Because here I can see."

This story describes the manner of research of a cosmologist like Lorenzo Sorbo, a Professor of Physics at the University of Massachusetts Amherst. Because many ideas in cosmology, like dark energy, are still very obscure, sometimes cosmologists can only research within the known realm of physics before making progress on more mysterious topics.

Cosmology, from the Greek word kosmos, meaning beauty and order, seeks to develop an understanding of the entire universe: its origins, its current structure, and its future. Most of the revolutions in cosmology have only occurred over the last decade or so: the discovery of the effects of dark energy and dark matter, and the observation of fluctuations in the cosmic microwave background (CMB). Theory has failed to keep up with these observations, so the most pressing goal for cosmologists is to come up with explanations of these mysterious phenomena.

Sorbo's interest in cosmology began with a fascination with the stars and heavens as a 15-year-old boy in his hometown of Bologna, Italy. He wanted to pursue these interests in his professional career by studying astronomy, but his mother, concerned with his future employment, suggested that he steer towards a more practical subject: engineering. Compromising, Sorbo decided on a middle route, physics. Sorbo, like many college-bound Italian students, attended his hometown university, the University of Bologna, for his undergraduate career. His love of theory prompted him to work on his undergraduate thesis with the strongest theory group in the university. He was asked to study the unwrapping of scotch tape. Not utterly enthralled with his assigned research, and having discovered a love for quantum field theory during a course on the subject, Sorbo decided to move on to graduate school.

After receiving his Laurea (the Italian equivalent of a Bachelor's Degree) in 1997, Sorbo, unlike the majority of Italian students, applied to schools outside of his hometown. He was accepted to and then attended the International School for Advanced Studies in Trieste, Italy. When asked about his decision to go against the majority ("You go to high school in Bologna; then you go to college at the University of Bologna; then you go to graduate school at the University of Bologna; then you work for a professor at the University of Bologna with the hope that it will eventually lead to a good position at the University of Bologna"), Sorbo says that he does not regret it, despite the lifestyle that his friends maintain today: they live at home with their parents, eat dinner at 8:00 with their parents, go out on dates with their girlfriends and come home to their parents, and, most importantly, don't pay rent to their parents. All in all, his current lifestyle is not so bad.

At Trieste, he immediately found out that a professor there was doing research in cosmology, his old love. Since the professor's main focus was not on cosmology at the time, Sorbo's research was mainly his own. When Sorbo lost a year of school because of mandatory civil service, he ended up doing most of his work with a fellow student. After getting his Ph.D. in 2001, Sorbo was offered a postdoctoral position in Paris, France, where he continued to work on cosmology. He was then offered another postdoctoral position at the University of California, Davis, where he began his work on dark energy, and, in 2005, Sorbo came to UMass Amherst as an assistant professor, where he teaches today.

Here at UMass, Professor Sorbo's main focus is looking for candidates for dark energy. Experimental observations have shown that the expansion rate of the universe is, in fact, increasing: dark energy is the term given to the (yet unknown) agent responsible for this accelerating expansion. Unlike dark matter, of whose nature we have some understanding but no direct observation, there is no agreement on what exactly dark energy is: it could actually be something, a new and (we hope) one-day observable form of matter, or it could simply be an artifact of a mistake in Einstein's equations. All we know is that, unlike dark matter, which "clumps" into halos around galaxies, dark energy's distribution is uniform everywhere, and although it makes up the majority of the mass of the universe (normal stuff makes up 5 percent of the universe, dark matter makes up 20 percent, and dark energy a whopping 70 percent!), its density is incredibly small.

Almost three-quarters of the universe is unexplained!

An initial theory consisted of reintroducing what was once thought to be a mistake: the cosmological constant. In his formulation of general relativity, Einstein initially included a constant in order to keep the universe static. The static universe was eventually disproved, and Einstein removed this cosmological constant from his equations. Now cosmologists have found a reason to reinstate it: mathematically, it can be used to produce the same effect as dark energy. Roughly speaking, the cosmological constant is a measure of how much energy is associated with empty space: the "cost of space."
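For reference (this is the standard textbook form, not something specific to Sorbo's own papers), the cosmological constant enters Einstein's equations as an extra term, and corresponds to a constant mass density of empty space:
\[
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}, \qquad \rho_\Lambda = \frac{\Lambda c^2}{8\pi G}.
\]
A positive Λ behaves like a fluid with negative pressure, which is what drives the accelerating expansion described above; in the 5/20/70 percent budget quoted here, the 70 percent is usually written as Ω_Λ ≈ 0.7.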
Initial theories for the origin of the cosmological constant tried to explain it as quantum fluctuations of the vacuum: in accordance with the Heisenberg uncertainty principle for energy and time, energy can be "borrowed" from the vacuum to create a particle and its antiparticle, provided they annihilate each other within a short enough period of time. Because of their fleeting nature, these particle pairs are called virtual particles, and their effects have been confirmed experimentally. Unfortunately, the total energy contribution of these fluctuations comes out gigantic (a 1 with 120 zeros after it times larger than the observed value!), so, naturally, physicists decided to set this huge number equal to zero. However, attempts to explain dark energy as consisting of these vacuum fluctuations need to reconcile the incredibly large value that emerges with the fact that the density of dark energy is very small.
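The mismatch can be stated as a back-of-the-envelope ratio (a standard estimate, with the cutoff taken at the reduced Planck scale; other cutoff choices shift the exponent by a few):
\[
\frac{\rho_{\mathrm{vac}}^{\mathrm{theory}}}{\rho_\Lambda^{\mathrm{obs}}} \sim \left(\frac{M_{\mathrm{Pl}}}{10^{-3}\ \mathrm{eV}}\right)^{4} \sim 10^{120},
\]
since the observed dark-energy density corresponds to an energy scale of only about a milli-electron-volt, while a naive quantum-field-theory estimate cuts the vacuum fluctuations off near the Planck scale.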
Because of this disagreement between the quantum-mechanical prediction and observations, cosmologists have come up with other theories of dark energy. One theory that Sorbo has worked on is the quintessence theory. In this theory, the empty parts of space are viewed as being comprised of a "quintessence field." In contrast with the cosmological constant, which cannot be excited (i.e., it's infinitely rigid) and is a constant property of the universe, the quintessence can be excited and change as the universe evolves. The major support for this theory comes from this very feature: the time-dependence of the quintessence field allows it to be used to explain inflation, the period of accelerated expansion of the early universe, which is another important field of study in cosmology. After the quintessence field helped with the formation of large-scale structures during the inflationary period, it began to behave like the dark energy we see today, causing the accelerating expansion of the universe.

Yet another theory dares to ask the question: what if Einstein's equations are downright wrong? Although general relativity has been confirmed in a number of relatively small-scale experiments and observations (up to around the size of a galaxy), no experiments have been done to test it at scales around the size of the universe. This question has led some cosmologists to attempt to modify Einstein's equations so that the accelerating expansion of the universe emerges simply as a mathematical artifact when general relativity is applied at large distances, removing the need for mysterious dark energy altogether. Unfortunately, general relativity is a very rigid and symmetric theory, and even extremely small changes in its equations, meant only to affect large distances, trickle down to dramatically alter the effects of gravity at smaller scales. Many cosmologists, including Sorbo, do not like this approach because of these modification problems, and many prefer to explore alternative theories instead.

So what's Sorbo's take on all of this? He believes that one of the major drawbacks to the quintessence theory is the fact that it introduces a number of new parameters, each of which will in turn need to be explained. Similarly, Sorbo finds the idea of modifying Einstein's equations less compelling due to the "trickle-down" effect. Despite these drawbacks, however, Sorbo still thinks that we should continue working on any ideas that haven't yet been disproven: as long as a theory is a logical possibility, he says, it's just one more chance of getting it right. Like the drunken man searching for his keys, we should work on what we currently know and understand, however unlikely it is to be correct, in the hopes of one day reaching the correct answer.
The growth of the universe from the Big Bang to the present day. The initial spurt of rapid growth is the inflationary period.
COSMOLOGICAL CONSTANT
Einstein Right Again: Universe Expanding
He just can't help it
David Parker, Amherst

When Einstein formulated the equations of General Relativity, they implied that the universe could not remain static. Knowing that this would be viewed as ridiculous by the scientific community (and on this point, for some reason, unwilling to go to bat as he did for the rest of his "ridiculous" ideas), Einstein included a "cosmological constant" that would allow the universe to remain non-expanding without collapsing to a single point.

The necessity for some sort of agent to prevent the collapse of the universe is apparent when you think about the nature of gravity. Imagine that all of the stars, planets, gas clouds, and everything else were completely evenly distributed throughout the entire universe; in this case gravity would be perfectly balanced in all directions and the universe would remain stable, i.e. non-collapsing. However, as soon as a slight change occurred, no matter how small, the universe would begin to collapse towards the point that had a slightly higher mass than the rest of the universe. Einstein's cosmological constant was a repulsive force that would allow a heterogeneous universe to remain static.

However, Einstein didn't have to support this idea for long, because in the 1920s Edwin Hubble discovered, through careful monitoring of the motion of stars at different distances from the Earth, that the universe actually is expanding. Not only that: later observations showed that the expansion of the universe is accelerating. In fact, it is this acceleration of the expansion that is needed for a non-collapsing universe; otherwise, since gravity does not make matter move at a constant velocity but instead accelerates it, a constant-velocity expansion would eventually be overcome by gravity and cause a collapse. So, given Hubble's discovery, Einstein threw the cosmological constant out, later calling it his "biggest blunder."

The expansion of the universe is slightly more complicated than everything moving out from a single point, as most would naturally assume. It is actually, in the words of Dr. Lorenzo Sorbo, assistant professor at the University of Massachusetts Amherst, "like a raisin cake as it bakes: all of the raisins move away from each other, and those farthest away move away fastest." The universe, as Hubble discovered, behaves in exactly this way: everything is moving away from everything else, and the stars that are farther away from us are moving away faster. The universe is currently in a phase of accelerated expansion. There was also another accelerated phase in the very early universe, known as the primordial phase of accelerated expansion, in which the universe expanded at a phenomenal rate. Now, the major question surrounding all of this is: what is causing this expansion?

The vertical lines are separated by roughly 1 billion years; the separation of the horizontal lines represents the expansion of the universe.

An acceleration must be fueled by some energy, and there is most certainly acceleration, so what is this energy? The answer, sort of, is Dark Energy. Dark Energy is the name given to whatever energy is fueling the acceleration of the expansion of the universe; its properties are basically unknown, other than that it doesn't interact with anything except gravity, and with that very weakly.

Enter the cosmological constant. Welcome back. This time, however, instead of being the agent for a static universe, it is the agent for an expanding universe. At the moment there is no hard evidence for the existence of the cosmological constant other than the fact that the universe goes through periods of accelerated expansion and there must be something causing these phases. Current theories speculate that the cosmological constant has some very small but non-zero energy, that it permeates evenly throughout the universe, that it doesn't dissipate or dilute with the expansion of the universe, and that it barely interacts with anything at all. These things combined make for an extremely sticky scientific problem: if it barely interacts, how can we observe it? If it does not dilute, why the changes in acceleration? And what about conservation of energy?

To be inclusive, I must admit that there are other possibilities that could explain the expansion of the universe, such as quintessence, an energy similar to the cosmological constant except that it can be interacted with a little more; or perhaps General Relativity is flawed and gravity actually has a repulsive component at very large distances. Neither of these is any less well-founded than the cosmological constant. However, at least in the eyes of Dr. Sorbo, "[Other explanations of the expansion of the universe] are too specific; if it can be changed, then why? And by what? And there is almost no evidence that shows General Relativity as wrong."

The types of experiments being done right now to find evidence for or against the cosmological constant are basically recreations of Hubble's original measurement, mapping out the relative velocities and accelerations of the surrounding universe at much higher precision with a more thorough process. Scientists are looking for changes in the rate of expansion throughout the history of the universe in order to see when and why changes might occur.

Basically, we are still being stumped by the elegance introduced into our picture of the universe that was painted for us by a dreaming relativist in a Swiss patent office. There are very few who think seriously about the nature of the universe without a nod, and in most cases a lot more than that, to Einstein. Within the next 50 years we may have figured out that "if it's not the cosmological constant, it is certainly very close, and at this level of precision we might as well call it the cosmological constant." Thank you again, Einstein.
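For reference, the pattern Hubble found, and that the raisin-cake analogy captures, is usually summarized as Hubble's law (the numerical value below is the modern rough figure, not one quoted in this article):
\[
v = H_0\, d, \qquad H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}},
\]
so a galaxy twice as far away recedes twice as fast, which is exactly how the raisins in an expanding cake behave.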
PARTICLE PHYSICS
Physicists Search for God Particle
Professor Carlo Dallapiccola and ATLAS
Sam Bingham and Matthew Mirigian, Amherst
Some people are concerned that new experiments at the world's largest particle physics laboratory could have some disastrous consequences. Perhaps it is due to the talk of high energy collisions possibly resulting in black holes. The Large Hadron Collider (LHC) will be capable of smashing protons at the highest energies seen in a laboratory setting. The collider's major ring, where the protons will zoom around at 99.999999 percent of the speed of light, consists of a circular track 17 miles in circumference and is located underground straddling the Franco-Swiss border near Geneva. In September the first proton beams circulated in the main ring. As some of the kinks are worked out, physicists hope to use the data collected to explore and test theoretical concepts about the most fundamental building blocks of the universe, and yes, possibly produce black holes.

The LHC will be used for many different collaborative experiments to take advantage of the large amount of groundbreaking data. University of Massachusetts Professor Carlo Dallapiccola is part of the effort to make sense of what will be seen in these high energy collisions through his involvement in the ATLAS experiment. ATLAS (A Toroidal LHC ApparatuS) is one of six particle detector experiments at the LHC. The ATLAS detector weighs over 7000 tons and will be used to collect data from protons that will collide with an energy of 14 TeV, about the same energy as a collision of cars moving at about 2700 mph. One of its main goals is to search for the theoretical Higgs boson, or as it is affectionately known, "the God particle." In the present Standard Model of particle physics the Higgs boson accounts for a hole in the theory, allowing for the explanation of the origins of the mass of the other elementary particles. The Higgs boson is the only particle in the Standard Model that has yet to be observed. The predicted energy necessary for a Higgs boson observation is just outside the reach of the second largest particle accelerator, Fermilab, near Chicago, but well within the range of the LHC.

The ATLAS detector.

Dallapiccola suggests that, with sensible data, definitive answers about the Higgs boson would be provided by the LHC rather quickly. Another possible result of the collisions is that black holes will be produced. But rest assured that the LHC will not produce a planet-eating black hole, as these black holes would be microscopic and evaporate immediately. Dallapiccola points out that cosmic rays have been colliding with energies greater than those produced at the LHC for billions of years and have not caused the end of the universe yet. If black holes are in fact produced during collisions they should decay by means of radiation, as Stephen Hawking predicted, and in the process produce all the particles in the Standard Model, including the Higgs boson. The production of black holes would also support the existence of extra dimensions in our universe. These extra dimensions would explain why gravity is so weak compared to the other physical forces. The theory says that the force of gravity as we know it is only a part of the full force, as much of it is dispersed in other dimensions outside our current space. But if we smash protons together at high enough energy, they could get close enough together that the dispersion is negligible and the full force of gravity is perceived in our space, producing a black hole. Dallapiccola says, "If we don't produce black holes it would invalidate some of the current models." So either way, physicists are excited for the advances that will inevitably occur as a result of the collisions.
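As an aside, two numbers quoted above fit together: 14 TeV per collision means 7 TeV per proton beam, and the "99.999999 percent of the speed of light" figure follows from a standard special-relativity check (our estimate, not a calculation from the article):
\[
\gamma = \frac{E}{m_p c^2} = \frac{7\ \mathrm{TeV}}{938\ \mathrm{MeV}} \approx 7500, \qquad 1 - \beta \approx \frac{1}{2\gamma^2} \approx 9\times 10^{-9},
\]
i.e. each proton travels at roughly 99.999999 percent of the speed of light.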
Like a head-on car crash, the high energy collisions of protons will produce a lot of "stuff" flying every which way. The ATLAS detector is designed to make sense of the "stuff" and connect it to theory. The detector itself is massive, heavier than the Eiffel Tower, and consists of many super-precise parts. To give a sense of the precision, some specifications require accuracy on the order of a micron for distances on the scale of a football field. Part of what Professor Dallapiccola does is write software to analyze the data the detector collects. The amount of data is overwhelming, so it is imperative that the software can recognize the essential parts of the data, like the evidence of black holes and of the Higgs boson.

The ATLAS team has been operational and taking cosmic ray data since September 2008, and high-energy collisions are scheduled to begin in summer 2009. The experiments were delayed when, on September 19, 2008, an incident in the LHC tunnel, far from ATLAS and the other experiments, did substantial damage to portions of the LHC. A faulty electrical connection between two magnets led to a massive leak of liquid helium and delays in the LHC's schedule. Professor Dallapiccola spoke on the timetable of his group's work and predicted that it would be around the fall of 2010 before the LHC yielded sensible, high quality data with results.

The technical difficulties will not be the only obstacle in the way of the ATLAS team, as over 2500 physicists work together on the project. Collaboration will be essential to the success of the project, as it is one of the largest collaborative efforts ever attempted in the physics world. The ATLAS team is like a virtual United Nations, with physicists coming from 37 different countries and 139 different universities. The group is involved in what Dallapiccola calls a "friendly competition" with one of the other detector experiments, called CMS (Compact Muon Solenoid). The two detectors are designed to complement each other and to provide further corroboration of findings.
THE LARGE HADRON COLLIDER
Atom Smasher
UMass professors help in search for Higgs Boson, dark matter, and extra dimensions
Chris MacLellan and Robert Deegan, Amherst
Halfway across the world, near the shores of Lake Geneva, lies the Large Hadron Collider (LHC), the largest particle accelerator ever built. This massive machine is buried dozens of meters below the ground and forms a ring of circumference 17 miles that straddles the border between Switzerland and France. The LHC project is part of an international effort led by The European Center for Nuclear Research (CERN) to unlock the secrets of one of the most mystifying areas of science, particle physics. It is hoped that a better understanding of the fundamental particles that make up the universe will be achieved by colliding beams of protons that circulate in opposite directions through the massive circular tunnels. The protons in the LHC will collide at 99.999999% of the speed of light at a rate of 600 million collisions per second. Each of the collisions is recorded by
one of the LHC's six particle detectors. However, because of the overwhelming amount of data this would create, only one out of every two million collisions is actually recorded. Despite this reduction, the data recorded by each of the major experiments at the LHC will be able to fill around 100,000 dual-layer DVDs every year.

Although this massive undertaking may seem to be taking place only in Europe, the project is in fact a collaboration of over 10,000 scientists and engineers representing over 100 countries, three of whom are faculty right here at UMass. Professors Stephane Willocq, Carlo Dallapiccola, and Benjamin Brau are all currently concentrating their research efforts on one of the two largest particle detectors at the LHC: ATLAS. The UMass professors are also joined by three postdoctoral research associates and one grad student.
Over 2500 physicists work on this massive detector, which is about 150 feet long, 80 feet high, and weighs about 7,000 tons. ATLAS is made up of four major components: the inner tracker and magnet system, which together measure the speed of charged particles; the calorimeter, which measures the energy of particles; and the muon spectrometer, which identifies and measures muons. This immensely large and complex machine is maintained by a hierarchy of physicists who work in highly specialized groups. For example, the group at UMass concentrates solely on the muon spectrometer portion of the detector. Although group members can be separated by thousands of miles, they work together using the internet. Face-to-face interaction is still needed, as the members of the UMass team travel to Switzerland a few times a year to attend large meetings.
Many may wonder what the LHC is meant to discover. The most publicized potential discovery is a new particle called the Higgs Boson, which is supposed to be responsible for giving particles their mass. In fact, the Higgs Boson is so important to particle physics that its discovery would easily warrant a Nobel Prize. Everything we know about particle physics is encapsulated in a theory called the Standard Model, which is used to explain the nature of all subatomic particles. To date, theory has outpaced experiment: the Standard Model predicts the existence of the Higgs Boson, but we have not been able to detect it due to technical limitations. Specifically, no other particle accelerator can produce collisions with enough energy to create the Higgs Boson. If it exists, the LHC will have the ability to create it.

The collisions at the LHC may also be able to identify and describe dark matter, which has never been directly observed. This mysterious form of matter, along with its counterpart, dark energy, is predicted to make up over 90% of the universe. The high energies of the LHC may also make it possible to confirm the existence of extra dimensions.
These extra dimensions are predicted by and necessary for string theory, which has the potential to reconcile quantum mechanics and general relativity, two major branches of physics that don't always agree.

The six detectors attached to the LHC are the measuring devices needed to make these discoveries. When the proton beams collide at such high energies, the protons either combine or decay into entirely new particles. These new particles continue to decay until they form a set of stable particles. There are many different combinations of these particle decays, which are known as decay modes. Each of the major parts of the detectors works to find the identity, trajectory, and speed of the particles. Computer programs then work backward to determine each particular decay mode. For example, since the Higgs Boson will exist for such a short time before decaying, it can't be directly detected. Therefore its existence will be proved by identifying decay modes that are consistent with having originated from a Higgs Boson. Because the probability of producing a Higgs Boson is very small, the physicists at the LHC will analyze massive amounts of collisions in hopes of finding these characteristic decay modes.
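The "working backward" step is relativistic kinematics. As an illustration (a textbook relation, not a description of the actual ATLAS software), if a candidate particle decays into two photons whose energies and momenta are measured, the parent's mass is reconstructed from the invariant-mass formula
\[
m\,c^2 = \sqrt{\,(E_1 + E_2)^2 - \left|\vec{p}_1 c + \vec{p}_2 c\right|^2\,},
\]
and a new particle would show up as a peak in the distribution of this reconstructed mass over many collisions, sitting on top of a smooth background from ordinary processes.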
The ATLAS detector while under construction
Further analysis of this data may confirm or deny the existence of dark matter and extra dimensions. One of the possible consequences of these extra dimensions could be the formation of microscopic black holes. This possibility has raised a considerable amount of concern about the safety of the LHC and has led to multiple lawsuits hoping to halt its operation. However, these black holes would most likely decay almost instantaneously and thus pose no actual threat to mankind. If they posed any real danger, we would have already felt their effects, since cosmic rays that constantly bombard our atmosphere create conditions similar to those in the LHC and would produce these black holes.

Despite the excitement generated in the science community by this massive experiment, nobody will be able to make groundbreaking discoveries in the near future. Although the first beams were circulated through the collider on September 10th, overheating in superconducting magnets caused a serious malfunction in the collider. The cause of the problem has since been diagnosed and is currently being fixed. Because of this malfunction, we will likely have to wait until the summer of 2009 before the collider is working again.
GRAVITATIONAL WAVE ASTRONOMY
Making Waves at UMass
Dr. Laura Cadonati brings a revolutionary perspective on the cosmos to Amherst
Laura Cadonati
Paul Hughes & Daniel Rogers, Amherst
Albert Einstein is a household name. Most people know that E = mc², and that funny things happen to space and time at speeds near that of light. A less commonly known prediction of relativity, however, is the existence of gravitational waves, which have yet to be directly observed. Since the cornerstone of scientific progress is the verification of such predictions, this is a problem for Einstein and his theory. Fortunately for him, there is LIGO (the Laser Interferometer Gravitational-Wave Observatory), which hopes to finally detect gravitational waves. The project is a collaborative effort of about 500 scientists from across the country. One of them, Dr. Laura Cadonati, has helped to start a gravitational waves research group in the UMass Amherst Physics department.

"In high school," admits Cadonati, "I hated physics. I did. I was very good at math, I loved science of all types, and I hated physics. I just didn't understand it. When we were first exposed to it in high school, we didn't have calculus." In her senior year, however, calculus shed some light on physics. She also realized that she needed something more concrete than a degree in mathematics. "I'm not going to be doing calculus only, or differential equations, for the rest of my career."

Dr. Cadonati went on to be trained in experimental particle and nuclear physics in Milan, Italy. When she came to the United States to undertake her graduate studies at Princeton, she was involved in Borexino, a project studying solar neutrinos. On moving to MIT for her postdoctoral work in 2002, her interest shifted to gravitational waves and she became involved in LIGO.
"In my previous work," she said, "I was doing a lot more hands-on lab work—hardware design, installation—and I wanted to... pull science out of data, kind of the opposite." When she was looking for a professorship in 2005, the opportunity to start a new gravitational wave research group was a factor in her decision to come to UMass.

Gravitational waves are theorized to be propagating ripples in space-time that cause relative distances to change. For example, a gravitational wave passing through a circular section of space-time would warp it into an oval as it went by. This effect is predicted to be extremely small: a circle with a diameter of 27 trillion miles would only see a compression of about the width of a human hair. Detecting the small effects of these waves on a terrestrial scale raises monumental challenges in both engineering and data analysis.
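The numbers in that illustration translate into the dimensionless "strain" that LIGO actually measures. As a rough order-of-magnitude check (our estimate, not a figure from the interview), a hair's width of compression over a 27-trillion-mile circle corresponds to
\[
h \sim \frac{\Delta L}{L} \sim \frac{10^{-4}\ \mathrm{m}}{4\times 10^{16}\ \mathrm{m}} \sim 10^{-21},
\]
so over a 4-kilometer interferometer arm the same wave changes the length by only ΔL = hL ~ 10⁻¹⁸ m, far smaller than a proton. That is the scale of the measurement problem described below.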
"If you think about your perception of reality, you see and you hear. We've got electromagnetic waves which we've been seeing, and then we're going to be listening using gravitational waves."

Diagram of a simple interferometer: a laser source, a half-silvered mirror, two end mirrors, and a detector.

LIGO overcomes these challenges by using very sensitive laser interferometers. An interferometer splits an infrared laser beam down two perpendicular tubes of equal length; the two beams are then reflected by mirrors at the ends of the tubes, and are recombined and directed into a detector. If there is no distortion by a gravitational wave, then both beams should be "in phase", which means that the peaks and troughs of the waves match up. If a gravitational wave happens to be passing through, however, then one tube stretches while the other contracts; the tubes are no longer of equal length, and the beams are no longer in phase. Interferometers with tube lengths on the scale of a few kilometers would only see distortions smaller than the radius of a proton—about a billionth of the width of a human hair. LIGO uses one four-kilometer interferometer in Louisiana, and a second installation in Washington state that combines one four-kilometer and one two-kilometer interferometer. This cross-country separation allows LIGO to tell whether a signal is coherent—that is, whether both sites are detecting the same signal from an individual source. This helps to distinguish gravitational waves from background noise.

Major intrinsic sources of noise (the scientific equivalent of static on a radio) include stray gas particles inside the vacuum tubes, thermal vibrations in the mirrors and their supports, and the "shot noise" from individual photons (small packets of light) slipping through to the detector. The experiment has to deal with a range of external noise sources, as well. In the initial stages of LIGO, there was a logging operation taking place just outside the Louisiana installation, which made it practically impossible to take useful data during the day. "If a tree fell in the forest," Dr. Cadonati said, "LIGO heard it."
"But not anymore!" she clarified. Since the early days of the project, LIGO has made a wide range of improvements in vibration isolation, vacuum technology, and laser power output to drown out troublesome shot noise. Over the last six years, the system's noise reduction has been upgraded to meet and even exceed the projected design specifications. More improvements are yet to come in the next stage of LIGO's development: new interferometers with ten times the sensitivity of the current ones, planned to become operational in 2014.
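That planned factor of ten in sensitivity is what drives the jump in reach described next; the scaling is simple geometry (a standard rule of thumb, not a figure from Dr. Cadonati):
\[
d_{\max} \;\rightarrow\; 10\, d_{\max} \quad\Longrightarrow\quad V \propto d_{\max}^{3} \;\rightarrow\; 1000\, V,
\]
so a tenfold improvement in strain sensitivity lets the detectors register the same kind of source ten times farther away, and therefore survey roughly a thousand times the volume of space.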
At present, LIGO should be able to detect gravitational waves emitted from two neutron stars (with a combined mass of about three times our Sun's) coalescing in the Virgo cluster; that's about 60 million light years away. In 2014, LIGO scientists can expect to pick up signals from an additional 1,000 galaxies.

Dr. Cadonati's particular role in the collaboration is in the search for "burst"-type gravitational wave sources, such as supernovae (the explosive deaths of stars much larger than our Sun) and other such cataclysmic astronomical events. As such, she also does a great deal of work—especially in the present, developmental phase of LIGO—in tracking down unwanted noise sources and working with the engineers and commissioners to find ways to eliminate them. Part of what makes this search for burst signals so interesting is that so little is known about the gravitational waves made by supernovae. "Based on the current understanding," Cadonati explained, "we could see a supernova in our galaxy; it's not clear we could see supernovae in Andromeda," our closest neighboring galaxy at two and a half million light years' distance.
"But that's based on simulations based on 'axis-symmetric' evolution," meaning they assume that the supernova distorts space-time equally in all directions. "But if it's not axis-symmetric we could see farther out." This means that by combining LIGO's data with more traditional astronomical observation, we may be able to learn much more about what goes on in very massive stars as they die.

"If a tree fell in the forest, LIGO heard it... but not anymore!"

LIGO scientists hope not only to look further out into space, but also farther back in time. In 1964, radio astronomers discovered the cosmic microwave background radiation: the leftover electromagnetic energy from the big bang. If LIGO can achieve great enough sensitivity, it may be possible to measure the cosmic gravitational background radiation.
The LIGO facility in Livingston, Louisiana
forces within a tiny fraction of a second after the big bang; by contrast, it took almost another half a million years before the universe became transparent to electromagnetic radiation—the radiation astronomers currently rely on for observations. This means that gravitational wave astronomy may give us a much earlier picture of the structure of our universe, answering many questions about why it took on the shape it has today. All in all, LIGO promises to open “a new window on the universe” by providing a completely new way of observing cosmic events. “If you think about your perception of reality,” says Cadonati, “you see and you hear. They are very different. One is pressure waves, the other is electromagnetic waves. And the two combined gives you a better perception of the reality around you. This is kind of the same: we’ve got electromagnetic waves which we’ve been seeing, and then we’re going to be listening using gravitational waves... it’s really a new way to explore the universe.”
NEUTRINO PHYSICS
Searching for Ghosts The Borexino Solar Neutrino Detector
An inside view of the Borexino detector
Keith Fratus and Amanda Lund, Amherst
How do you catch a practically massless, nearly lightspeed, charge-free particle that flies straight through the earth and across entire galaxies without interacting with a single thing? Physicists have been trying to accomplish this tricky feat since the elusive neutrino was first hypothesized in 1930, building cunningly elaborate detection facilities to trap the “ghost particle.” The Borexino solar neutrino detector is a large spherical tank located in a tunnel below Gran Sasso mountain in the Abruzzo region of Italy. It is one of the most recent of these experiments, having been collecting data for only a year and a half. Borexino is the first detector to measure in real time the low-energy neutrinos that make up 99% of those the Sun emits as a by-product of nuclear fusion. Laura Cadonati, an assistant professor of physics at the University of Massachusetts Amherst, has been with Borexino for its entire lifetime, since she was an undergraduate in Milan. In an interview last week she spoke about her research on Borexino and what to expect from the experiment. “I think the good results will come in a year, that’s my guess,” she said. The detector tracks down neutrinos using 300 tons of liquid scintillator (a material that emits light when struck by a particle) contained in a thin, spherical nylon vessel that could hold about four SUVs. A layer of buffer fluid and a layer of water also encompass the sphere and help reduce background noise, and the entire detector is enclosed in a steel water tank. When neutrinos interact with electrons in the scintillator, a set of 2000 light detectors surrounding the vessel record the flashes of light they produce. But what exactly is a neutrino, and what is the point of studying a particle that seems so indifferent to the rest of the universe? Nuclear fusion inside the Sun produces huge quantities of neutrinos, which, because of their low interaction rate, are relatively unchanged by the time they reach Earth. Studying these neutrinos could provide us with an inside look at the Sun and how it works. It turns out that, to the best of our knowledge, the world as we know it can be described in terms of a few fundamental par-
ticles, grouped together into three “families” in something referred to as the Standard Model. The first family contains four particles: two quarks (which can combine to form protons and neutrons, which in turn make up atomic nuclei), the electron, and the neutrino. The next two families of particles are almost exactly the same as the first; in fact, everything about them is identical except for their mass. Each family is composed of heavier particles than the one before it. Neutrinos from different families are said to be different “flavors.” Each of the particles in the standard model interacts with the others through one or more of the four fundamental forces. Gravity (which affects all matter, and actually even light itself to a small extent) and electromagnetism (which mediates the interactions between particles with electric charge) are the most familiar of these. The remaining two forces are named the “strong” and “weak” forces–the strong force helps hold together the nuclei of atoms, while the weak force is responsible for radioactive decay. What is challenging about neutrino detection is that the only forces that neutrinos are capable of interacting with are gravity and the weak force. Contradictory to our everyday experience, on the scale of fundamental physics, gravity is so weak that it has almost no effect on particle interactions. While the weak force is many times stronger than gravity, compared to the electromagnetic and strong forces it is still quite feeble. Saying a force is “weak” implies that interactions involving that force happen very rarely; so neutrinos interact with other particles incredibly infrequently, and as a result studying them becomes a very complicated process, requiring the use of large underground detectors. The Sun gives scientists a free source of neutrinos to study, which helps them refine their theories about the standard model and its various components. As the Standard Model is currently the best theory we have for explaining the world, studying solar neutrinos is a very real way to expand our knowledge of the fundamental laws of the universe. When Austrian physicist Wolfgang Pauli first postulated the existence of the neutrino, he feared he had cooked up a particle that could never be detected. Every second, 100 billion neutrinos born from nuclear reactions in the Sun pass through your thumbnail, but they interact with matter so rarely you would never know it. The first successful detection of solar neutrinos was in 1960 at the Homestake mine in South Dakota, where Ray Davis, a chemist at the Brookhaven National Laboratory, developed an innovative detector using a 100,000 gallon underground tank filled with cleaning fluid. Despite the trillions of neutrinos constantly bombarding the detector, Homestake was predicted to detect only 10 each week. In fact, Davis found even fewer. Though the experiment succeeded in observing the “ghost particle,” it detected only a third of the anticipated number, generating a new problem: where were the missing solar neutrinos? The answer lay in neutrino oscillations, a theory proposed by Italian-born atomic physicist Bruno Pontecorvo within a year of Homestake’s ini-
tial results but not proven for another 30. Homestake could only trap electron neutrinos (from the first family of the Standard Model), letting the other two flavors sneak through unrecognized. Following Homestake’s neutrino deficiency, it was proposed that on their journey from the Sun to Earth, neutrinos could oscillate, or change flavor; so Davis’ missing neutrinos had not vanished, but switched from electron to a flavor that was not observable. That is, a neutrino from one family of matter had turned into a neutrino from another family of matter. The Sudbury Neutrino Observatory (SNO), another detector located in the deepest part of Canada’s Creighton mine, verified neutrino oscillation in 2001 with measurements of all neutrino flavors relatively close to the predicted solar neutrino level. The neutrino’s ability to change flavor had enormous implications for physics. The existence of neutrino oscillations proves that neutrinos have mass. Though intuitively it might seem like a particle that makes up matter should have mass itself, this was an open question; it could have been possible for neutrinos to make up ordinary matter, interact with gravity, and be massless. Because there are so many neutrinos flying around the universe, how much mass they have has a huge impact on how much “stuff” is in the universe–which affects everything from particle physics to astronomy. It has also been theorized that the sister particles of the neutrinos–the electron, muon, and tau–should be able to transform into each other. However, the predicted rate at which this should happen is remarkably low, and to this day has never been observed (the odds that a muon will decay into an electron are about the same as the odds you have of winning the Mega Millions five years in a row). Because muons and taus are much heavier than electrons, they generally decay into lighter particles. This tendency of muons and taus to decay, combined with the fact that transformations amongst the sister particles are unimaginably rare, means that we see very many electrons and very few taus and muons. Deep in a mine or under a mountain may sound like a strange place to search for particles coming from the Sun, but the layers of rock and earth actually insulate the detector from distracting background radiation and cosmogenic events, which cannot penetrate as much material as the slippery neutrinos. To minimize this noise, almost all neutrino detectors are located in mines, under the ocean, or beneath mountain ranges (like Borexino). Almost a mile of rock shields the tunnel containing the Borexino detector from cosmic rays and neutrons, creating a relatively low-background environment. This is especially important for this particular experiment, as the low energy of the neutrinos it is trying to detect imposes strict requirements on the amount of radioactivity around the detector. “The trick was, can we really have purity at this level–it was never measured or achieved before,” Cadonati said. Before Borexino was built, Cadonati worked on a miniBorexino prototype called the Counting Test Facility (CTF), a detector to measure the impurity in the scintillator and provide insight on the best design for Borexino. “There was no proof that this could be done on a ton scale, and so that’s why the counting test facility was required. You
start from a smaller scale, something that’s cheaper...you can try things out to prove that you can actually reach a bigger goal.” The CTF was one meter in radius, had only water for shielding, and had a thicker membrane than Borexino, so it was not an ideal design for finding neutrinos. “It’s a different set of problems,” said Cadonati. “It’s really designed to measure the possible radiopurity of the scintillator itself.” As an undergraduate working on the CTF, Cadonati studied the water purification system; from this her research morphed into analyzing radon diffusion and radon propagation in the water. She was there when the CTF was filled and began collecting data, and remembers the excitement of seeing it come to life. “We had this bottle of champagne, we smacked it against the tank,” she said. “When we got the first data, for me it was this new thing of realizing you learn in books...and then if you look for events with given cuts you actually find what the books say. It was like, ‘Oh wow, that’s real!’” After the CTF came online, Cadonati took a short break from Borexino, working temporary jobs, teaching in high school, and applying to grad school. In 1996 she came to the U.S. to start graduate school at Princeton. She continued working for Borexino intermittently until 2002, when the experiment experienced a major setback. “[There] was this spill of liquid scintillator. That was in the summer of 2002,” said Cadonati. “The hope was to come out at a comparable time to SNO, KamLAND. The leak put in a setback of 3 or 4 years.” The delay from the accident was long not just because of problems with the detector, but due to the political, legal, and security issues that it sparked. “That accident was like a Pandora’s box,” Cadonati said. Other experiments have faced hurdles too–in 2001, a spark and a chain reaction caused most of the light detectors in the Japanese neutrino detector Super-Kamiokande to explode. Fortunately, Super-K had already turned out its important results; Borexino was not as lucky with its own glitch, as it had not even gone live when the spill halted progress. However, thanks to support from funding agencies and dedication from its members, Borexino got back on track and began taking data in May 2007. The first results came out after only 100 days, in the ballpark of what was expected. “The real job to do now is to reduce the systematic [error],” said Cadonati. “We need to get rid of the noise and track the stability of the detector...the lower energy threshold, that’s also going to be an important result once we understand the systematic.” The detector’s ability to look at low energies means it could be useful for other things such as searching for geophysical neutrinos (produced in radioactive decay), for which its location is particularly good. Cadonati believes Borexino can produce significant data within two years. But what will become of Borexino after that? “There are now talks to morph the neutrino detector into a new kind of experiment,” she said. “You could do beta decay measurements with it, or now there are some talks to make it into a dark matter experiment.” So after more than a decade, it seems that Borexino is just reaching the peak of its career–and that its future may even hold more than solar neutrinos.
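The flavor changing described in this article can be put into numbers. For two flavors in vacuum, the survival probability of an electron neutrino follows a standard textbook formula; the Python sketch below evaluates it with round illustrative parameter values, and it ignores the matter effects inside the Sun that a full treatment of solar neutrinos requires.

import numpy as np

# Standard two-flavor vacuum oscillation formula:
#   P(nu_e -> nu_e) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)
# with dm2 in eV^2, L in km and E in GeV. The parameter values are round
# illustrative numbers, not a fit to Borexino data.
def survival_probability(L_km, E_GeV, dm2_eV2=7.5e-5, sin2_2theta=0.85):
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

E = 1e-3                                  # a 1 MeV neutrino, expressed in GeV
for L in [1e2, 1e4, 1e6, 1.5e8]:          # distances in km; the last is about the Sun-Earth distance
    print(L, survival_probability(L, E))

Over long distances the probability swings well below one, which is the sense in which Davis’ “missing” electron neutrinos had simply changed flavor in flight.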
STATISTICAL PHYSICS
A New Definition of Complexity Just how much can we know?
Professor Machta shows off some of his work
Matthew Drake, Amherst
Since the beginning of the universe, many systems that can be analyzed have had an ever-increasing amount of complexity. From biology to astronomy, the world as we know it constantly changes, generating more and more complex systems. What do I mean by “complex?” Professor Jon Machta of the University of Massachusetts Amherst says that it’s related to what we can simulate. “The evolution of life is often considered the most complex of systems. Even using a computer with millions of processors, I believe it’d still take hundreds of millions of years to simulate the evolution of life on Earth.” Machta has been researching how to define the concept of complexity and its implications for some time now. The most important implication that comes from Machta’s definition of complexity is that it will limit which problems we can reasonably understand through computer simulations. Computer simulations are very useful for many physics problems, but some systems can be too complex for a computer to handle. A definition of complexity based around computation can set limits on what simulations can appropriately be used for, and can also guide physicists in how to study various physical systems. Although Machta shies away from fully defining complexity itself, he argues that we can make some requirements for complexity that will fit our intuition. “The emergence of complexity requires a long history,” argues Machta. It seems like a natural requirement if we think that the passage of time will make a system more complex. However, the common conception of time is insufficient. Instead, we want to think about time in terms of depth, or how many computational steps a simulation on a parallel computer would need. A parallel computer is a specific type of computer with multiple processing units, meaning it can do multiple mathematical operations in one step. Ordinary time causes trouble for complexity if we think about systems that seem comparably complex but occur over very different time periods. Think about comparing a hurricane and a spiral galaxy. Both of these systems are self-organizing rotating structures with about the same difficulty in terms of mathemat-
ics, but it takes millions of years longer to make a galaxy than a hurricane. The amount of wall-clock time a system takes to go from “point a” to “point b” is irrelevant when we simulate it, so it would make sense to call these two systems similarly complex. If we used the common conception of time, however, a galaxy would count as millions of times more complex than the hurricane. As an example, the depth of a galaxy would be the number of steps between a beginning where stardust is floating around and an end where the galaxy has formed. Since computation is the more useful measure, it is important to simulate systems appropriately. Because we are using a parallel computer, we can give it as many processing units as we want. We can do as many mathematical operations at one time as we have processors, so we want to know how many parts our system has; then we can give our computer a number of processors proportional to the number of parts. Essentially, we want to have enough processors to reach the next state of the system in one step. “Just because a system is big and has many parts doesn’t mean it’s complex,” says Machta. “It’s the evolution of the system that causes a system to have depth.” The actual method that we use to simulate the system is also very important. We wouldn’t want a system to seem any more complex than it needs to be, so it is important that we use the fastest method possible. The method must be not only fast but also accurate and understandable, defining the properties that we want to know about the system. Without an accurate and understandable answer, the simulation becomes useless anyway. The best possible method takes us from the beginning of the system to the end as fast as possible while also accurately tracking the properties we need to know along the way. It can be interesting to think about systems which are not complex according to this definition. A system with high entropy might seem like an ideal candidate for a complex system, but that is not the case. Entropy is often used as a measure of the chaos and randomness of a system. Picture a gas at a constant high temperature. We can think of it as a large number of particles moving around very fast and in random directions. There would be little to no order to the system, so it would have a lot of entropy, but it would actually be a very low-complexity system. Statistical physicists have been able to solve this problem for a long time. In fact, Albert Einstein won the 1921 Nobel Prize in part because of a 1905 paper that he wrote describing exactly this system, proving that it is simple enough to treat properly without a computer. So what is complexity exactly? Well, there is no accepted definition of complexity yet, so Justice Potter Stewart’s famous definition of pornography may be applicable here: “I shall not today attempt to define the kinds of material I understand to be embraced within that shorthand description...But I know it when I see it.” However, Professor Machta’s concept of a complex system having substantial depth may well prove to be a very helpful guide on how to simulate complex systems appropriately.
Hurricane Andrew (left) and the spiral galaxy M51 (right). Courtesy of NOAA and NASA, respectively
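A rough way to see the difference between the amount of work a simulation does and its depth, in the sense Machta uses, is to add up a million numbers: the total work is about a million additions, but if every pair can be added at the same time, the whole sum takes only about twenty rounds. The Python sketch below is purely illustrative; it only counts the rounds a parallel machine would need, it does not actually run them in parallel.

import math

# Count the "depth" (number of parallel rounds) of a pairwise sum.
# Each round adds all adjacent pairs at once, halving the list,
# so the depth grows like log2(n) even though the work grows like n.
def parallel_sum_depth(values):
    depth = 0
    while len(values) > 1:
        pairs = [values[i] + values[i + 1] for i in range(0, len(values) - 1, 2)]
        if len(values) % 2 == 1:          # carry a leftover element into the next round
            pairs.append(values[-1])
        values = pairs
        depth += 1
    return values[0], depth

total, depth = parallel_sum_depth(list(range(1_000_000)))
print(total, depth, math.ceil(math.log2(1_000_000)))   # about a million additions, but only ~20 rounds

In Machta’s language, a system whose evolution can be compressed into a few such rounds has little depth, no matter how big it is.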
CHEMISTRY
UMass makes big discoveries with small science Nanotechnology in food science
Christopher Emma, Amherst
If someone were to throw out the word nanotechnology, it would instantly evoke futuristic themes of technology-based self-improvement: tiny robots, invisible to the human eye, swimming around in our bodies and acting as white blood cells or artery de-cloggers. Professor D. Julian McClements, an associate professor studying food physico-chemistry at UMass Amherst, would be the first to debunk that thought process. Through an interview with Professor McClements, we are able to shed some light on the concept of food-based nanotechnology, the progress made so far, and how it could come to affect the everyday American. Research teams are currently working on issues pertaining to microencapsulation, emulsion, and ultrasonic sensing. The end goal of all of this research is to find better ways of distributing medicines and nutrients into the human body. McClements’ first point of interest was microencapsulation: in essence, the storing of medicines and nutrients in capsules made to be taken orally, and the ways in which they are processed in the body on a molecular level. Functional agents such as vitamins and antioxidants tend to have poor water solubility and to interact poorly with food matrices. As such, it is important to develop new delivery systems that make the agents more soluble and stable, with the end goal of maximizing the intake of said agents. In order to do this, the team at
UMass has focused its research on different food-grade ingredients as substitutes for traditional means of distribution. Proteins and lipids, or fats, have been the center of attention thus far. Foods in the aqueous, or water-soluble, phase contain these proteins and lipids, both biopolymers, which can make them viscous and more rigid. He also applies this thought process to micellar technology. Among the many things found inside a cell are vesicles. These small fluid-filled sacs act as the cell’s way of transporting different substances to different parts of the cell, much like a capsule is used to transport vitamins or medicine to the body. Even smaller than vesicles are micelles, which are currently being researched for their ability to absorb and move non-polar molecules in water. Theoretically, this could be used to control such things as flavor release. The research team at UMass is currently working on absorption rates and solubility for micelles. By studying the molecular interactions of biopolymers and their solubility in the human body, McClements hopes to improve the distribution methods of medicine and nutrients. One of McClements’ less relevant but slightly more interesting projects is food emulsion. An emulsion is a mixture of liquids that do not normally mix; milk, butter, ice cream, and salad dressing are everyday examples. Again, his research works at the molecular level and focuses on how unmixable ingredients, such as oil and water, blend in things like salad dressing.
Research is currently being done on what machines can be used to create this faux bonding and how it can be kept stable. The purpose of all of this is to one day be able to better blend these currently resistant components. McClements currently does consultation work for outside research groups on the subject, while the university research team focuses on microencapsulation and another interesting research subject, ultrasonic sensing. While ultrasonic sensing may sound complex, it is actually quite self-explanatory. By firing sound waves into various types of food, scientists are able to sense the different components that make up the food. Though such a concept seems futuristic, it has already been developed and is being applied by scientists today. The purpose of such sensing is to create a fast, non-invasive detector for various types of food. McClements and his team have used this process on things such as dairy-based creamers. It allows them to test the fat percentage of the mixture as well as find the most efficient mixture (which can be applied to the microencapsulation research). They are also currently working on a handheld sensor for use with live animals, most notably fish. This would allow them to describe the full composition of a live fish, fat content included, without ever causing harm to the fish. While the principal research on this has been completed, the technology needs commercial development.
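The basic measurement behind ultrasonic sensing is simple enough to sketch: time how long a pulse takes to cross the sample and back, convert that to a speed of sound, and read composition off a calibration curve. The Python sketch below shows the idea only; the calibration points are invented placeholders, not values from McClements’ lab.

def sound_speed(path_length_m, round_trip_time_s):
    # Pulse-echo: the sound crosses the sample twice.
    return 2.0 * path_length_m / round_trip_time_s

# Hypothetical calibration: speed of sound (m/s) versus fat fraction of a creamer.
calibration = [(1520.0, 0.05), (1490.0, 0.15), (1460.0, 0.25)]

def estimate_fat_fraction(speed):
    # Linear interpolation between the two nearest calibration points.
    pts = sorted(calibration)
    for (s1, f1), (s2, f2) in zip(pts, pts[1:]):
        if s1 <= speed <= s2:
            return f1 + (f2 - f1) * (speed - s1) / (s2 - s1)
    return None   # outside the calibrated range

c = sound_speed(0.02, 2.7e-5)     # a 2 cm sample and a ~27 microsecond round trip
print(c, estimate_fat_fraction(c))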
BIOPHYSICS
When Physics and Biology Collide UMass Professor Plans to Improve Cell Imaging
Photo taken by Ross using TIRF.
Morgan-Elise Cervo, Amherst
Imagine if you had the ability to give an MRI or PET scan to the smallest component of our bodies: cells. Being able to take a closer look at our bodies’ cells could greatly improve our understanding of disease prevention and even bring us closer to finding a cure for HIV. A recent addition to the UMass Physics department, Jennifer Ross, is working on a new method of microscopic imaging that brings us a great deal closer to understanding the inner workings of cells. Ross, who completed her postdoctoral work at UPenn in Philadelphia, has brought a great opportunity for expansion to the newly developing biophysics department at UMass Amherst. She began her postdoctoral work studying the movement of dynein, a motor protein that can be found in eukaryotic cells (cells found in animals). Dynein’s job is to transport cargo necessary for cells to function. Dynein is able to transport things by converting chemical energy into mechanical energy and moving along the cell’s microtubules towards the negative end. Kinesin, a more studied and well understood motor protein, moves towards the positive end of the microtubules. Microtubules are important to study because they are involved in many of the processes that take place within cells. For example, certain drugs taken by cancer patients are used for stabilizing GDP-bound tubulin in microtubules. Until recently, biologists believed that both kinesin and dynein could only move in one direction along the microtubule. Therefore, if they hit an obstruction, such as a microtubule-associated protein (MAP), while carrying the cell’s cargo, they would either become permanently stuck in one place or simply fall off the microtubule. However, Ross created a new technique for observing the movement of kinesin and dynein by placing two microtubules in such a way that they are crossed one over the other. This setup creates a known obstruction for her to observe how dynein and kinesin react to MAPs. By running experiments in which the progress of dynein is tracked
along the microtubule, Ross observed that dynein had a greater success rate for passing the obstruction than kinesin. She discovered that when dynein bumps into a MAP it is able to either side-step or back-step to get around it. Kinesin, however, continues to bump into the MAP and either remains there forever or falls off of the microtubule. Why is it useful to have biophysicists studying cells, as opposed to biologists who have a far greater knowledge of cells and other living things? Ross describes biologists’ study of cells as a “top-down” method. “[Biologists] are studying and making observations of an entire cell,” says Ross. Ross and other biophysicists tend to have a “bottom-up” approach. For example, with microtubules, Ross’s team of researchers goes through strenuous laboratory procedures in order to eliminate parts of the cell that they are not interested in looking at. In other words, she studies specific parts of the cell in specific conditions. Ross’s hope is that biologists and physicists will be able to meet somewhere in the middle and produce an effective understanding of what is happening inside cells. Physicists who concentrate on biological studies can also be useful in improving the means by which we look at cells. More recently Ross has been moving in the direction of creating a new method for looking at cells on microscopic levels. Currently the best available method of observing cells for biologists and biophysicists is Total Internal Reflection Fluorescence (TIRF) microscopy. TIRF imaging allows Ross to view the movement of microtubules in multiple directions. TIRF microscopy works by using an evanescent wave to excite and illuminate fluorophores bound to a specimen. The evanescent wave is generated by having the incident light (usually a laser) pass through a series of lenses to achieve total internal reflection. In the picture, the squiggle on the right side is the microtubule (microtubules on average have a diameter of only twenty-five nanometers). The brighter areas represent movement towards and away from the microscope. Microscopic imaging without TIRF would lack these lighter areas because only side-to-side movement (wiggling) can be detected. Ross is working with others on a new quantum mechanical form of microscopic imaging. The idea is to fire a small number of photons at a cell sample. Statistically, some photons will collide and interact with the nucleus of various atoms. When this happens the energy of the photon is converted and an electron and positron (similar to an electron but with a positive charge) are formed. Eventually, when an electron and positron come into close enough proximity of one another they annihilate each other and two photons are formed (radiation!). Radiation can easily be detected by various equipment in a process similar to “positron emission tomography” (PET), which is used in hospitals everywhere for medical imaging. Imaging of cells is extremely important because a better understanding of cells could help us learn how cells spread disease. Once we know more about how disease spreads it will be easier to develop appropriate treatments. The future of science and medicine lies greatly in the integration of biological and physical sciences.
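The optics behind TIRF comes down to two standard formulas: the critical angle for total internal reflection and the penetration depth of the evanescent field. The Python sketch below evaluates them with typical, illustrative values (a glass/water interface and a common laser line); it is not a description of Ross’s specific instrument.

import math

n_glass, n_water = 1.52, 1.33
wavelength = 488e-9                      # meters; a common laser line for fluorescence

theta_c = math.asin(n_water / n_glass)   # critical angle for total internal reflection
theta = math.radians(68.0)               # an incidence angle beyond the critical angle

# Penetration depth of the evanescent intensity:
#   d = wavelength / (4*pi*sqrt(n1^2 * sin^2(theta) - n2^2))
d = wavelength / (4 * math.pi * math.sqrt(n_glass**2 * math.sin(theta)**2 - n_water**2))

print(math.degrees(theta_c))   # about 61 degrees
print(d * 1e9)                 # on the order of 100 nanometers

Because only a layer roughly a hundred nanometers thick is illuminated, fluorophores elsewhere in the sample stay dark, which is what makes the technique so clean for watching single microtubules.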
BIOPHYSICS
Single Molecule Biophysics Exploring the Biological Universe of Proteins
Alex Kiriakopoulos, Massachusetts
Most people, when they hear of protein, think of their favorite rib-eye steak or chicken dish. At the University of Massachusetts Amherst, Professor Lori Goldner thinks only of the scientific experiments that can be carried out on them. Professor Goldner is a recent faculty addition to the physics department and comes from the National Institute of Standards and Technology. She brings with her her work on single-molecule measurements in biophysics. Biophysics is “using the tools of physics and the models of physics to try to understand biological systems,” said Lori Goldner. “I use optical microscopy techniques and physical techniques to pull on very small objects, to watch them move, and maybe poke them.” Proteins, from the Greek πρῶτος meaning “primary,” are large organic molecules made of amino acids. They are the building blocks of life, whose functions make life possible, and they can be described as the chief actors within the cell. They have the ability to bind other molecules very specifically and tightly, like a lock-and-key mechanism. What first drew Professor Goldner to this burgeoning field was “the idea about being able to mess [with] and see single molecules. I got interested in it first through optical tweezers. And the idea that you could manipulate the little objects with light really intrigued me. I was looking for a way to make more of a connection with important applications. When you’re doing this you find the molecules you work on have some implication in disease or in health or ecology; they have interesting behaviors that actually affect people in real life,” said Goldner. Professor Goldner’s and others’ work in this field has wide implications for all of us. Anytime we go to the drug store for allergy medications or a diabetic receives
insulin, we are taking advantage of what biophysics has accomplished or helped to develop. Biophysics can help to synthesize proteins like insulin and to develop drugs that will bind to proteins. One of the proteins Professor Goldner’s research has brought her across was “the HIV binding site, which is a drug target, and if you can understand how it functions better you might be able to design better drugs to inhibit the HIV virus,” said Professor Goldner. With the aid of a confocal microscope, Goldner explores the shapes and geometry of these molecules. “Structure is very important because it often informs or determines function; so a lot of what I do is looking at structure or structural changes over time that you could not see using X-ray crystallography or magnetic resonance,” Professor Goldner said. Professor Goldner’s confocal microscope is designed specially to study these minuscule molecules, and she explains that “if you want to see a single molecule you have to block out all of the light from the rest of the world, because the molecule does not emit a lot of light, typically at most a million photons per second. So how do you keep the rest of the light out? The trick is that you build a microscope that can only see into a very small volume, about a femtoliter, and that would be a confocal microscope. All the light is focused into that tiny little area and that area is again imaged onto a pinhole so that any light that is not coming right from that tiny volume is rejected.” The fundamental idea behind this seemingly complex scheme is basically a simple “trick... we don’t shine light on everything, we don’t get light back from everything, we use really good detectors and really good optics so you don’t lose any photons,” Professor Goldner said. Her objectives for her confocal microscope and single-molecule experiments will be “to develop, test and use
new techniques that will allow [us to see what we] could not see before, study things you couldn’t study before and understand things we couldn’t understand.” She explains that in her work, “The techniques that I use are good for looking for single molecules that are sticky, go boom, and fall apart. So the sorts of techniques that I am developing will hopefully allow for people to look at molecules... that you couldn’t otherwise look at because around surfaces they denature, molecular complexes that fall apart, and things that are out of equilibrium.” “There are a lot of reactions in nature where a protein meets a piece of RNA or two proteins come together and some big conformational change happens to one of the proteins which is irreversible or makes something else happen; these transient interactions and out-of-equilibrium dynamics are not observable in an easy way using single molecule techniques, but using the techniques I am developing they might be.” Professor Goldner’s work represents a relatively new field of science that is currently burgeoning. Single-molecule techniques have only been usable and popular since the 1990s. In past research, “measurements were done in bulk, which means you make certain assumptions... by looking at the average.” But as Goldner explains, “if you have a bimodal distribution... you miss the fact that the distribution is meaningful.” The benefit of the single-molecule measurement techniques that Professor Goldner’s work is focused on is, as Goldner explains, that “you are measuring the property of each molecule individually. The really big advantage to single molecule measurements [is] you get the details of the distribution and you can watch the dynamics. You can’t do that any other way.”
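Goldner’s point about bulk averages hiding a bimodal distribution is easy to demonstrate with a made-up population. In the Python sketch below, half the molecules are “slow” and half are “fast”; the invented numbers are only there to show why the ensemble average can describe almost no individual molecule.

import random

random.seed(0)
# A made-up population: two subpopulations with very different rates (arbitrary units).
slow = [random.gauss(1.0, 0.1) for _ in range(5000)]
fast = [random.gauss(5.0, 0.1) for _ in range(5000)]
population = slow + fast

# The bulk (ensemble) measurement sees only the average, which lands between the peaks.
bulk_average = sum(population) / len(population)
print(bulk_average)                              # about 3.0, a value almost no molecule actually has

# A single-molecule measurement samples individuals and sees both peaks directly.
sample = random.sample(population, 10)
print(sorted(round(x, 2) for x in sample))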
PHYSICAL CHEMISTRY
Single Molecule Spectroscopy Looking at a Molecule with a Flashlight
Each bright circle is light from a single fluorescent molecule
Tim Mortsolf, Amherst
Almost everything we learn about our environment comes from how electromagnetic radiation interacts with matter. Spectroscopy is the branch of science that describes how light interacts with atoms and chemical molecules. Astronomers have been using the theories of spectroscopy for almost one hundred years to determine the chemical makeup of stars. Perhaps one of the most important discoveries of quantum mechanics was that atoms emit only certain wavelengths of light, and when we measure these wavelengths, we can use them as a sort of atomic fingerprint that tells us exactly what element or chemical compound they came from. For example, astronomers recently analyzed the light measured from a planet in another solar system and were able to deduce the presence of carbon dioxide from its signature in the light spectrum. However, instead of looking at the light coming from large objects located thousands of light-years distant, single molecule spectroscopy utilizes optics that focus light from close molecular objects — close enough that we can record the light emitted by a single molecule.
When I think about a chemistry laboratory, several images come to mind, most of them having to do with the synthesis of new chemical compounds. However, when you enter Dr. Mike Barnes’ chemistry lab and take a mental inventory of the surroundings, you would very likely think that you are in a physics lab instead. Dr. Barnes’ field is physical chemistry — a branch of chemistry that uses applied physics to study chemical compounds. The benches in the back of his lab are not covered with the bottles and glassware used by synthetic chemists, but instead are shock-absorbing counters, suspended by nitrogen gas, that are adorned with lasers, beam splitters, and very sensitive digital cameras. His specialty as a physical chemist is a recently unfolding area of research called “single molecule spectroscopy”. So what does that mean? Dr. Barnes’ simple explanation is that the researchers on his team are “chemical photographers”. The experiments they perform take pictures of the light transmitted by individual chemical molecules and individual nanostructure composite species. These are not pictures in a conventional sense: the photographs we normally look at have contextual clues that tell us a lot about the scene. For example, when you look at a picture of a famous person in a magazine you can almost always tell who is in the picture and maybe even where and when the picture was taken. But individual molecules cannot be imaged in this way. The information his researchers rely on is not in the picture, but in whatever they can glean from the individual photons (particles of light) that these molecules transmit. Prior to single molecule spectroscopy, spectroscopic measurements could only be made on a large group, or ensemble, of related molecules; they could not easily be performed on individual molecules. These molecules are similar but not identical even though they have the same chemical structure. Differences between molecules arise from physical properties such as their orientation in space or energies of motion. An analogy to this is a device that measures the flow of electrical current through a wire. The current that the device records arises from the flow of a large number of electrons, but the device cannot tell us anything about the behavior of any individual electron. With the advent of single molecule spectroscopy, it has become possible to measure some physical properties of individual molecules, but there are still many significant obstacles that restrict what molecules we can look at and just what information we can learn. According to Dr. Barnes, the ability to analyze the behavior of individual molecules has only become practical during the last 20 years. Prior to this we were able to measure average quantities, but could not easily measure spectroscopic information for single molecules.
Dr. Mike Barnes in his Lab
Flasks of quantum dots that emit light at different wavelengths. Photo courtesy of nanolabweb.com
One of the compounds that Dr. Barnes’ group is interested in is a family of molecules called “quantum dots”. When asked what a quantum dot is, Dr. Barnes replied “A quantum dot is often referred to as an artificial atom. It is the synthetic analogue of a sodium atom.” The sodium atom has two signature wavelengths at which it emits light in the visible spectrum, and sodium is used in light bulbs for places like factories and parking lots. The advantage of using a quantum dot is that, unlike atoms such as sodium, which have energy levels fixed by nature, synthetic chemists can engineer a quantum dot to absorb and emit light at a desired color. The benefits of this research span a broad range of applications. Dr. Barnes noted that some discoveries have already yielded benefits in biochemistry, with applications for DNA sequencing and proteomics. The Nobel prize for chemistry in 2008 was awarded for the development of green fluorescent protein for use in biochemical research. Green fluorescent protein, or GFP, is attached by biological scientists to specific molecules in a cell. Since this protein is fluorescent, any light emitted at its signature wavelength informs the scientist about the presence of the molecule that was tagged with GFP. Dr. Barnes also sees future applications in the subminiaturization of nanoscale optoelectronic devices that could lead to better display technology and optical computing.
Physical chemistry research labs also train future scientists for work in industry. During his four years at the University of Massachusetts in Amherst, Dr. Barnes has had four graduate students, two post-doctoral research assistants, and several undergraduate students assist in his research. When I asked where he thinks this research could lead over the next 50 years, his outlook was very optimistic. When Dr. Barnes worked as a post-doctoral researcher, many of the tools that are used today, such as single-photon counting devices that record individual photons with precise timing information, were not available. The amount of light we have been able to collect from a molecule has increased tremendously, from just a few photons per molecule to more than one million; this is the key driver that has advanced the field of single molecule spectroscopy. One key limitation today is that measurements have been confined to a narrow spectral range of 400-700 nanometers, which corresponds to the visible range of light for humans. In the coming decades, maybe even sooner, he thinks that we will be able to make single molecule measurements at higher frequencies in the X-ray spectrum. This would be an important achievement, because these higher frequencies permit better resolution maps to be constructed for the structure of single molecules.
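The “artificial atom” idea can be illustrated with the simplest quantum model there is, a particle in a box: shrinking the box widens the gap between energy levels, so smaller dots emit bluer light. The Python sketch below shows only that trend; real quantum dots need more careful models (and an effective electron mass rather than the free mass used here), so the wavelengths it prints should not be read as real dot colors.

import math

h = 6.626e-34          # Planck constant, J*s
m = 9.109e-31          # free electron mass, kg (a real dot calls for an effective mass)
c = 2.998e8            # speed of light, m/s

def emission_wavelength(box_size_m):
    # Particle-in-a-box levels: E_n = n^2 h^2 / (8 m L^2).
    # Take the photon energy to be the gap between the two lowest levels.
    gap = 3 * h**2 / (8 * m * box_size_m**2)
    return h * c / gap

for L_nm in (2.0, 4.0, 6.0):
    print(L_nm, emission_wavelength(L_nm * 1e-9) * 1e9)   # bigger box, longer (redder) wavelength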
A Bose-Einstein condensate produced at Amherst College
LOW TEMPERATURE PHYSICS
What’s the Matter Researchers are using a new form of matter, made of ultra cold gases, to open up new frontiers in physics. Douglas Herbert, Amherst
In the early 1920s, Satyendra Nath Bose was studying what was then the new idea that light came in little discrete packets that we now call “quanta” or “photons”. Bose assumed a set of rules for deciding when two photons should be considered either identical or different. We now call these rules “Bose statistics” or sometimes “Bose-Einstein statistics”. Einstein read and liked Bose’s ideas, and used his influence to help get them published. Einstein thought that Bose’s rules might also apply to atoms, and he worked out a theory for how atoms would behave in a gas if these new rules were applied. Einstein’s equations said that there would not be much difference, except at extremely low temperatures. If the atoms were cold enough, something very unusual was supposed to happen, so unusual that he was not sure that it was correct. Under the direction of William Mullin, the Laboratory for Low Temperature Physics at the University of Massachusetts at Amherst is among those studying such systems. In quantum mechanics, particles can be represented as waves, and when you sufficiently cool gaseous particles “a large percentage of the particles [...] fall into the lowest [energy] state, and they’re
coherent in the sense of quantum mechanics [...] they all have the same phase,” says Professor Mullin. When all the atoms fall into a single energy state, their waveforms overlap so that the peaks and troughs match up (creating the “phase”), and they all behave in exactly the same way (they’re coherent). “In quantum mechanics we have these non-commuting variables, momentum and position; if you specify the position, you can’t know the momentum [the Heisenberg uncertainty principle],” explains Mullin. At absolute zero atomic motion ceases, so the colder an atom gets the slower it travels, and the better we know the velocity of the atom. However, the closer we are to knowing the velocity of the atom, the more uncertain we are of exactly where it is; its location literally gets fuzzy. When you collect a lot of atoms in the same ultra-low energy state, all of those fuzzy atoms lie on top of each other and merge together; any one atom could be in the position of every other atom at any given time. All of the atoms actually occupy the same space; they coalesce into a single blob called a Bose-Einstein condensate (BEC), and they all behave in a synchronized fashion (some researchers refer to a BEC as a giant “super atom”). The first BEC was created in 1995 by Carl
Wieman, Eric Cornell and their colleagues at the University of Colorado at Boulder, and is an entirely new state of matter: the fifth known state of matter after gas, liquid, solid, and plasma. An atom’s energy is quantized, which is to say that its energy is limited to a series of discrete values called energy levels. The key to forming a BEC is to cool gaseous atoms to very low temperatures. This is done inside a vacuum chamber because any random air molecules would bump the gaseous atoms, transferring kinetic energy (energy of motion), and heating the atoms. Laser cooling makes use of the force exerted by repeated photon impacts to slow the atoms down and push them into the middle of the chamber. Photons are extremely light compared to atoms, but if you fire a large enough stream of ping pong balls at a bowling ball you can move the bowling ball. Laser cooling works the same way, battering atoms with photons. The lasers are aimed from six different directions to keep any atoms from wandering off. A magnetic field is used to form a trap to contain the slow-moving atoms; this is done by allowing a small area within the magnetic field to have no field. The result is a magnetic field shaped like a bowl which holds the slowest (coolest) atoms. Evaporative cooling makes use of specially tuned radio waves which pass over the magnetic trap and kick the most energetic (hottest) atoms away as they jump above the rim of the “bowl”. Comparable to blowing steam off a cup of coffee, this leaves the cooler atoms behind. As the remaining atoms reach very low temperatures, they slot themselves into the few remaining energy levels above absolute zero. Within a billionth of a degree above absolute zero, the BEC forms. “One of the founders of this whole ultra cold stuff was Steve Chu, who was one of the inventors of laser cooling, was made energy secretary (by Barack Obama),” says Professor Mullin. In an experiment at MIT in 1997, Wolfgang Ketterle and MIT’s Atomic Physics Group showed that a laser-like beam could be formed from a BEC of sodium atoms. This beam is similar to a laser beam, but it uses matter rather than photons, a “matter laser” as Mullin puts it. This atom beam can be controlled by two other lasers, which push the matter and aim the beam: “if you have a coherent beam you can make it interfere”. Mullin and his team are conducting research in interferometry with BECs. An interferometer is an instrument that uses interference phenomena between waves to determine wavelengths, wave velocities, small distances, and thicknesses. An atomic interferometer would be 100 times more sensitive than a laser interferometer. Since atoms in a BEC move much more
slowly than photons of light, they can be more precisely controlled. An atomic interferometer could be used for submarine navigation systems (GPS cannot pinpoint locations underwater), auto-pilot systems for airplanes, or for countless other applications in which extremely precise displacement measurements are necessary. A plan has been developed for the International Space Station (ISS) to be equipped with a laser-cooled cesium atomic clock. Cesium atoms move more slowly in the microgravity of space, which would allow for a more precise measurement of the second, 20 times better than anything achieved on Earth. This type of clock on the ISS would make the more accurate second measurement available on Earth, as well as for clocking GPS satellites and testing gravitational theory. Quantum computing is another promising way to use ultracold atoms. “What we’re thinking about now is using these interferometers for making ‘qubits’ for quantum computers,” says Mullin. In classical computing, data is represented and stored as “bits”, the binary digits 1 and 0. In quantum computing, an atom can act as a quantum bit, or qubit, with its internal sub-state called “spin” (imagine a marble spinning on a table) functioning as the 0 or 1 of classical computing. It can also exist as 0 and 1 simultaneously. That last state of simultaneity is due to quantum superposition, which allows an atom to exist in two spin states at the same time, another oddity of quantum mechanics. If a classical computer contains three registers (storage spaces), it can only store one of eight binary numbers at a time, 000 through 111. A quantum computer could store all eight numbers in a superposition state at the same time, and while they are in that superposition, operations can be performed on all eight numbers in a single computational step. If both computers performed the same operation (using three registers), the classical computer would require eight computations, while the quantum computer would require only one. The classical computer would require a (literally) exponential increase in either time or memory in order to match the computational power of the quantum computer. If quantum computers can be realized, the computing world as we know it will be revolutionized. It will be years before BECs see much use outside the laboratory; in the meantime, “Just studying Bose condensates themselves is kind of fun,” says Mullin. Perhaps current research will serve to aid in the application of the laws of quantum physics to the world of technology. “There are so many people that make condensates now that I’m sure something will come out of it,” notes Mullin.
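The three-register comparison above can be made concrete with a tiny state-vector simulation. The Python sketch below is only an illustration of the bookkeeping (a list of eight amplitudes, acted on all at once); it is not how a real atom-based quantum computer would be programmed.

import numpy as np

# Three qubits -> a state vector of 2**3 = 8 complex amplitudes.
# Index 0 is |000>, index 7 is |111>.
state = np.zeros(8, dtype=complex)
state[0] = 1.0                                  # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate for a single qubit
H3 = np.kron(np.kron(H, H), H)                  # the same gate applied to all three qubits
state = H3 @ state                              # equal superposition of all eight numbers

# One operation (here, "add 1 modulo 8") acts on every stored number in a
# single step, because it multiplies the whole vector at once.
add_one = np.roll(np.eye(8), 1, axis=0)         # maps |x> to |x+1 mod 8>
state = add_one @ state

print(np.round(np.abs(state) ** 2, 3))          # each of the eight outcomes has probability 1/8

A measurement still returns only one of the eight outcomes, which is why useful quantum algorithms rely on interference to concentrate probability on the answer; the sketch shows only the storage side of the argument.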
SOFT CONDENSED MATTER
Thin Films, Geometry, and 3-D Printers A doily illustrates an exciting phenomenon
Collin Lally, Amherst
Condensed matter physics deals with things that, so says Professor Christian Santangelo, “you can hold in your hand.” He can, indeed, hold an analogue of his latest research project in the palm of his hand; a crocheted doily is not, after all, very big. This doily is not particularly unusual, but it does illustrate an interesting phenomenon: the number of stitches is not constant along its radius (Prof. Santangelo refers to this as the doily being “radially inhomogeneous”), leading to a ruffled (or more technically, buckled) shape. There is more “stuff” (yarn) on the outer edge of the doily than in the middle, which causes it to crumple slightly. The question related to this phenomenon is whether the radial inhomogeneity can be controlled in such a manner that the doily buckles into a predefined shape. Basically, what Prof. Santangelo is working on is how to determine when this is possible, and (when it is possible) how to make the doily buckle in the desired manner. While this is interesting from a mathematical point of view (it’s an open problem in geometry), there is also a relevant connection to soft condensed matter physics. The phenomenon in question is found in the case of polymer thin films, under certain circumstances. Before I say what, exactly, these circumstances are, I should define what a polymer thin film is. I’ll do this in two parts. A polymer is a material composed of large molecules (the building blocks of everyday matter) that are, in turn, built up from some repeating internal structure. The most common example of a polymer would be plastic, but there are many other examples as well. Polymers have a specific property that renders them very useful: their molecules stick together very well, forming chains. These chains are the defining characteristic of a polymer, and can give such a material great strength on a bulk scale, or flexibility, stickiness (adhesivity), or other useful properties. A thin film is exactly what it sounds like: a very thin (nearly two-dimensional) layer of the material in question. Imagine a slick of oil floating on top of a pool of water and you have a good picture of a thin film.
Now back to the physics at hand. I had mentioned buckling; it turns out that this phenomenon happens in thin films when they are made of certain types of polymers, and exposed to some agent that causes the material to swell. Suppose you have a polymer thin-film disk, just a round, flat piece of material made of an appropriate polymer. Now add a chemical agent, in reaction to which the polymer expands in the plane. So, the disk swells horizontally, but not vertically, in an inhomogeneous manner. Since the disk wants to stay in one piece, it buckles rather than fragments. How the disk swells can be controlled through the application of the “swelling agent” (the chemical introduced above). But, as in the case of the doily, can the disk be made to buckle into some final shape? Sometimes, not always, it can. If the shape of the buckled disk can be easily and reliably controlled, it might mean the ready commercial availability of so-called 3-D printers. A 3-D printer is not a printer in the traditional sense of a computer peripheral that transfers ink or toner onto a piece of paper. Rather, it is a device for fabricating three-dimensional objects. Usually used in computer-aided manufacturing for building product prototypes, current 3-D printers “print” an object by building it up, one layer of material at a time. This technology is expensive and relatively slow. But with the application of Prof. Santangelo’s research, 3-D printers could be brought to the masses. The combination of an inexpensive base polymer and the use of existing ink-jet technology to apply the swelling agent could easily make a 3-D printer affordable for the average person. Why someone would need such a device is not evident now, but that is almost certainly because the technology is not currently accessible on a large scale. Take the example of the personal computer: before its commercial advent, no-one knew the impact or use it would have. However, one thing can be said for certain: sufficiently advanced 3-D printing technology coupled with user-friendly computer-aided design software could be the real-world equivalent of the Star Trek replicator (minus its food-production capability; no-one would want to eat plastic!). One could simply
design a widget, and, with the press of a button, have it built immediately. But that is quite enough futurology. An advanced 3-D printer based on the controlled buckling of a polymer substrate is not anywhere close to reality. Happily, it is not the potential application that drives Prof. Santangelo’s interest in this problem. Prof. Santangelo is a theoretical physicist. He deals with predictions of new phenomena and explanations of known phenomena. His interest in physics is not necessarily motivated by dreams of his research leading directly to a science fiction future. Rather, it is his enduring fascination with mathematics that has led him to this problem of buckling thin films. Initially, Prof. Santangelo went to graduate school at the University of California, Santa Barbara with the intention of becoming a string theorist. When he arrived and went to the string theory seminars, however, he found the field not to be to his liking. “The seminars were not very interesting [to me],” he says. He had been drawn to string theory by his strong interest in geometry and topology, so he looked for another field in which these tools were heavily used. What he found was soft condensed matter theory. Soft condensed matter is essentially the stuff that is not hard condensed matter; the field deals with things like liquid crystals or polymers (but not exclusively) rather than with solids and similar hard things. Happily for Prof. Santangelo, UCSB is renowned for its condensed matter theory programs, and geometry is a good tool for soft condensed matter theory; he easily found research that meshed with his mathematical interests. This was only the beginning: this fascination with arcane mathematics has since become a driving principle of his research. The theme, as it were, of his overall research program is the question “how do geometry and physics interact?” In this way, Prof. Santangelo frequently tackles problems that might seem more common in a mathematics department, the doily problem among them. They all, however, have some direct relation to the physical world. “There are deep principles hidden in mundane things,” he says. The doily proves it.
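The doily argument can be put into a few lines of arithmetic. A flat disk can only offer a circumference of 2πr at radius r; if swelling (or extra stitches) makes the material want a longer circumference than that, the excess length has nowhere to go in the plane and the sheet buckles. The Python sketch below uses an arbitrary swelling profile for illustration; it is not one of Prof. Santangelo’s target shapes.

import math

def preferred_circumference(r, swelling=0.2):
    # Material at radius r has swollen by a factor (1 + swelling * r), so it
    # "wants" a circumference longer than the flat value 2*pi*r.
    return 2 * math.pi * r * (1 + swelling * r)

for r in (0.2, 0.5, 1.0):
    flat = 2 * math.pi * r
    wanted = preferred_circumference(r)
    excess = (wanted - flat) / flat
    print(r, round(100 * excess, 1), "% extra circumference to absorb by buckling out of the plane")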