2010/11
Dear Reader,

It seems science is not just integral to our understanding of the external world; it is our understanding of the external world. Having spent seven years at one of the best academic schools in the country, if we have learnt one thing, it is that science is not, as a worrying number of people commonly believe, 'boring.' Over a human lifespan your head ends up older than the rest of your body by around three hundred nanoseconds, a bacterium originally tested as a cancer treatment can make you feel happy, and wind turbines are now being built in the middle of the ocean. Innovation and science are like H2O and water: arguably one and the same. Science is the architect that carves the way for progress. When you stare at the computer and type, you are only able to see the words clearly because empirical research showed people the mechanics of the eye, its ability to manipulate sensory inputs to produce an image, and how to correct it where its normal function is disrupted, using carefully moulded materials that fit perfectly on the nose.

Science is innately pioneering, and this is what makes a magazine like SCOPE so exciting. The next generation is to be sustained on the intellectual crop produced by the student seedlings in education now. To have students so engaged in the world of science, with such exuberant enthusiasm, is a truly wonderful thing. We both like to think that all humans are born as scientists, that is, curious. Children explore the world around them with their senses, trying to take in as much of the information the vibrant world around us has to give. Somewhere along the winding road of life many people lose that curiosity. No longer do they wonder why women get morning sickness, why people die of old age, or why, if the Sun stopped emitting light, it would take us eight minutes to realise. So maybe, as seventeen-year-olds, our following wish is naive and idealistic: we hope that whoever reads the articles in this magazine is either still full of their natural human curiosity, or, if they have no time for such idle questions, has that curiosity rekindled, or indeed asks why humans are curious in the first place.

SCOPE is written entirely by a team of bright students whom we must thank deeply. Their hard work and intelligence will be evident throughout the magazine, and we are sure that anyone who cares to read this year's SCOPE will be in for an intellectual high. The eclectic, seemingly ubiquitous knowledge of Mr Delpech has meant we have been extremely well supported in our efforts to make a top-quality production, both in terms of appearance and content. We hope that you enjoy SCOPE 2011!
Aadarsh Gautam and Nicholas Parker
Chief Editors of SCOPE
Contents

Physical Sciences
Thermodynamics – Zachary Spiro .................................. 5
Schrödinger's Cat – Ray Otsuki .................................. 7
States of Matter – Nicholas Parker .............................. 9
String Theory – James Zhao ...................................... 11
P vs NP – Matthew Earnshaw ...................................... 14
Particles and Sparticles – Andrew Yiu ........................... 18
Biomimetics – Jamie Ough ........................................ 22

Review
The History of Science – Salil Patel ............................ 26
CERN Trip review – Andrew Yiu ................................... 30
Rational Optimist book review – Mr. Hall ........................ 32
Could Science go too far? – Akshay Kishan-Karia ................. 33

Biological Sciences
Cocaine Addiction – Keyur Gudka ................................. 36
Do Aliens Exist? – Josh Goodman ................................. 38
Feeling Down – Aadarsh Gautam ................................... 40
Gestational Diabetes – Nader Baydoun ............................ 42
Acne – Javi Farrukh ............................................. 45
Theories of General Anaesthetic Action – Elliot Brown ........... 47
Invertebrate Intelligence – Richard Breslin ..................... 49
Bibliography .................................................... 50
Physical Sciences
Demons, Chaos and Japanese Robots; and you thought Thermodynamics was boring?

Many of you will have little to no idea what the field of thermodynamics actually is. However, thermodynamics is one of the simplest fields of science to understand, but also one of the most important to pretty much everything in the Universe. Let's start off with some basic Laws. The First Law of Thermodynamics, which pretty much everyone should be aware of, at least at an intuitive level, is that energy cannot be created or destroyed; it can only be moved around. That is to say: if I have a hot cup of water (heat is a form of energy, namely kinetic), and this water then cools down, the energy lost by the water must be gained by the surroundings, i.e. the air.

Now, on to the Second Law of Thermodynamics. Here it starts to get more interesting, as this isn't strictly a 'Law', more of a statistical observation. Before we begin, though, it is important to understand the meaning of the term 'entropy'. Entropy is a measure of the chaos, or disorder, in a system, and it can be quantified (i.e. exactly calculated). The higher the entropy, the more chaotic or random the system is. So gases will have higher entropies than liquids, and larger molecules will have higher entropies than smaller molecules. This isn't so important, though, as the only fact that you really have to understand is that when I write 'the entropy increases', what I really mean is that the level of disorder in the system increases. The Second Law states that: within a closed system, energy will only change form or position in such a way that the total entropy of the system increases. (An older way of writing it is "No process is possible whose sole result is the transfer of heat from a body of lower temperature to a body of higher temperature.") The reason that this Law is so important is that it dictates what will be possible and feasible to achieve. The reason that this is a statistical law is that not every change that happens in a
system increases entropy; in fact a good number will actually decrease it. Take, for example, a dynamic equilibrium A + B ⇌ C + D. (For those who don't know, a dynamic equilibrium is one where the reaction is reversible and a state of equilibrium is reached in which the forward and backward reactions occur at the same rate, so that the net change to the system is zero.) The forward reaction (A + B → C + D) has a certain entropy change associated with it. The reverse reaction (C + D → A + B) therefore has the same entropy change associated with it, but with the sign reversed (i.e. positive becomes negative and so on). As the equilibrium is dynamic, both reactions must occur at the same time, and therefore a significant number of reactions are occurring that generate negative entropy. However, the net change for the system will always be an increase in entropy, even though reductions in entropy are going on alongside the increases. Hence, the Second Law of Thermodynamics is not really absolute, but statistical, as it denotes a trend for reactions to go in a certain direction. This Law is perhaps more intuitively obvious than the First: how often do you mix things together in a bag and find that they spontaneously unmix? Gases will spread to occupy as much space as possible, broken things do not spontaneously repair themselves and, most of all, heat lost to the outside world is pretty much unrecoverable. In fact, the Second Law of Thermodynamics is often considered to be the most fundamental Law in all of science, as has been expressed quite nicely by Sir Arthur Stanley Eddington:
“If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.”

And so we've managed to cover the first key word of the title: Chaos. Now we have to move on to the slightly cooler word: Demons. As you may have already inferred from the mention of "Maxwell's equations", there was indeed a scientist named Maxwell. And he did write some equations. We're not interested in those, though, but rather in a thought experiment that he devised in 1870, named Maxwell's Demon. The experiment goes like this: imagine you have two rooms, filled with particles of mixed kinetic energies, and that these rooms are separated by a wall with a frictionless door in it. Now imagine that a being (a demon) operates the door, and can observe whether a particle coming towards the door has a higher or a lower energy relative to the others in the room. He then chooses to allow only particles of higher energy through the door, whilst keeping those of lower energy outside.
A diagrammatic representation of Maxwell's Demon, where the faster molecules are represented by squares and the slower ones by circles.
This would have the effect of raising the temperature on one side of the wall, whilst simultaneously reducing the temperature on the other. However, this door is frictionless, and requires no energy to open or close. Therefore, no energy has been expended in spontaneously generating this temperature difference, and this is in clear violation of the Second Law of Thermodynamics. Of course, it's easy to see the applications of such a technology. One could have a machine powered by heat, such as a steam engine (powered by the pressure of expanding steam), and once the heat has left the steam, the demon could be used to regenerate the heat, recreating the steam and repowering the engine. This is, as anyone can see, perpetual energy generation. (However, it is worth considering that in reality the demon itself would increase the entropy of the system by observing and sorting the particles, and this increase would far outweigh the decrease it achieves, hence keeping the experiment free from violation of the Law.) We can therefore deduce that perpetual energy generation (or perpetual motion, or whatever other name you may have for it) is strictly relegated to the realm of the impossible by the Second Law of Thermodynamics.

So, now we've covered demons. All that remains, therefore, are the Japanese robots, which you may or may not find cooler than demons; if you happen to have seen the Transformers films, it is likely that you do. Anyway, these robots are of interest to a thermodynamicist (if such a profession exists) because of a recent experiment conducted at the University of Tokyo. The experiment is related to Maxwell's Demon, in that it attempts to cause a thermodynamically improbable event repeatedly. The experiment can be thought of in this way: a vibrating ball is placed on a spiral staircase, and the ball's vibrations can cause
it to hop either up or down a step on the stairs. Of course, due to gravity, a hop down the stairs is much more likely, as the ball loses gravitational potential energy by moving downwards. Since the vibrations are random, it follows that the ball will move down the stairs more often than it moves up.
Now imagine that it is possible to place a wall on the edge of the stairs, as shown in the image to the right. This wall has the effect of preventing the ball from moving down the stairs, and can be considered to be controlled by a form of Maxwell's Demon. Therefore, if the wall is placed on the stair directly below the ball, it can only move up the stairs.

The actual experiment works by having two small charged beads on a glass surface, where one is pinned to the surface, leaving the other free to rotate around it. The entire apparatus is submerged in a fluid and, due to Brownian motion, the rotation can essentially be considered random. Rotation of the bead in one direction will favour the bead moving down the ramp, whereas rotation in the other will cause it to move upwards. Through the application of a gentle electrical field, the rotation of the bead can be controlled to an extent. A computer program observes the rotation of the bead, and the field is altered to prevent downward movement.

On the surface, it may seem that this is a violation of the Second Law, as the bead will only move up the ramp. However, it isn't really, as the computer requires power to run, as does the creation of the electrical field. The experiment is interesting, though, because of the potential it has for computers: it can be considered to have essentially converted information into energy. It doesn't pose any possibility of solving an energy crisis within the reasonable future, but it could conceivably be used to predict the potential limits to information storage in super-computers, as the relationship between information and energy is explored more completely.
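As a brief aside (a standard result quoted here for context, not a claim from the original article), the quantity that links information to energy in experiments like this is the Szilard/Landauer bound: the maximum work extractable from one bit of information held by a demon-like controller at absolute temperature T is

    W_max = k_B T ln 2 ≈ 3 × 10^-21 J per bit at room temperature (T ≈ 300 K),

where k_B is Boltzmann's constant. The controller must eventually pay at least the same amount to erase the bit it has recorded, which is, loosely speaking, why information-to-energy conversion of this kind never violates the Second Law.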
Schrödinger’s cat

“A cat is penned up in a steel chamber, along with the following device (which must be secured against direct interference by the cat): in a Geiger counter, there is a tiny bit of radioactive substance, so small that perhaps in the course of the hour, one of the atoms decays, but also, with equal probability, perhaps none; if it happens, the counter tube discharges, and through a relay releases a hammer that shatters a small flask of hydrocyanic acid. If one has left this entire system to itself for an hour, one would say that the cat still lives if meanwhile no atom has decayed and the psi-function of the entire system would express this by having in it the living and dead cat (pardon the expression) mixed or smeared out in equal parts.”
The above text is a partial translation of a thought experiment which the Austrian physicist Erwin Schrödinger published in the German journal Naturwissenschaften in 1935, to demonstrate an apparent discrepancy between what quantum theory would lead one to believe and what nature actually shows. The cat being "smeared out in equal parts" is an example of a superposition, a concept in quantum theory which states that one can never know what state an object is in if no observations or measurements are made. While no measurement is made, the object is considered to be simultaneously in every single possible state. When observations or measurements are taken, all but one of the possible states collapse and the superposition falls apart. In the case of Schrödinger's paradox, the cat is in a superposition in which it is simultaneously dead and alive until the steel chamber is opened. Upon opening the chamber, it becomes evident whether the cat has died or not: if the atom has decayed, the hammer has fallen and the vial has shattered, killing the cat. If it has
not, the hammer will not have fallen, the vial will not have shattered and the cat will still be alive. The opening of the steel chamber is taken to be an observation, and so all but one of the possible states collapse when the chamber is opened. The superposition is lost and the fate of the cat is sealed. Using this reductio ad absurdum, Schrödinger only demonstrated the limitations of certain interpretations of quantum theory (in particular, the Copenhagen interpretation); he himself did not propose a full solution to the paradox, rightly considering the prospect of a cat being simultaneously alive and dead as ridiculous. However, the implications of this thought experiment were so great that it quickly became almost fundamental in comparing the strengths and weaknesses of the various proposed interpretations of quantum theory, based on how they tackled the Schrödinger's cat paradox. In fact, Schrödinger himself is rumoured to have regretted proposing the experiment at all because of the implications it brought with it. It must be noted, however, that quantum theory remained at the forefront of physics; its other predictions were just too accurate and valuable for it to be written off as mistaken. The significance of Schrödinger's experiment was that superposition, which was known to occur at subatomic levels, was brought into an observable, macroscopic environment in which the superposition of the decaying atom dictates the fate of something observable with the naked eye, thereby amplifying the absurdity of the whole concept. More importantly, it demanded a description of the mechanism by which a superposition of states collapses, leaving only one observable state. Since the experiment was first proposed, different interpretations of quantum mechanics have offered different solutions to the problem, and many of these are in disagreement with one another as to when the superposition of states collapses. Quantum theory holds central the idea that the state of every particle can be described by
a mathematical tool called a wave function (a variable quantity which fully describes the characteristics of a particle mathematically). The aforementioned Copenhagen interpretation furthers this by postulating that, by making observations and measurements, the wave function (all the various different configurations of succeeding events) collapses onto itself, leaving only the single state which reflects the measured values. By following the Copenhagen interpretation, Schrödinger's experiment would suggest that the wave function allows the possibility that the cat is simultaneously alive and dead (the central point of Schrödinger's ridicule), and this cannot be refuted because no direct observation can be made without directly interfering with the experiment and opening the steel chamber. However, by opening the steel chamber, one of the two possible outcomes, [decayed substance = dead cat] and [undecayed substance = live cat], collapses, leaving the other intact and visible. This is often considered an "observer's paradox", in which the presence of the observer (whoever is responsible for the running of the experiment) affects the outcome of the experiment, and so no true result, as such, can be observed. Though it would be reasonable to suggest that the cat's fate is only determined when an observer opens the steel chamber, this is not necessarily true. Since simple measurements with a second Geiger counter on the outside of the steel chamber would suffice to determine whether the cat has died (if the second Geiger counter detects a count, then the vial has broken and the cat has died, and vice versa if no count has been detected), one would be led into thinking that the wave function collapsed precisely an hour after the start of the experiment. The second Geiger counter is powerful enough to collapse the wave function before any conscious observations are made; it has the power to state explicitly whether the cat is dead or not,
regardless of whether anyone observes this fact, thereby ruling out one of the two possibilities by causing the corresponding wave function to collapse. In essence, this means that if the observer does not look at the second Geiger counter, it remains impossible for him or her to know whether the cat is alive, despite the fact that the fate of the cat has already been sealed by the collapse of one wave function (brought about by the second Geiger counter). This is because the wave function, from the observer's point of view, is still fully standing, and so the point in time at which the substance decays is not necessarily the point at which the wave function has observably collapsed; the collapse of the superposition is independent of the observer. On the other hand, if the observer does look at the second Geiger counter, they will have gained more knowledge concerning the parameters of the experiment and would therefore be better placed to answer the question of whether the cat is alive or not. The Copenhagen interpretation concludes that the superposition collapses after precisely one hour, and that the presence or absence of an observer (or a measuring implement) is irrelevant to the timing of the collapse.

Contrary to the Copenhagen interpretation, the Ensemble interpretation barely considers Schrödinger's paradox a problem. Proponents of this interpretation argue that just because a system can be in multiple states, it does not necessarily mean that it is in every one of those states at the same time (as the Copenhagen interpretation suggests), but rather that it is in only one of them. The wave function is taken to be statistical (as opposed to observable), with theoretical results taken from numerous (hypothetical) experiments prepared in exactly the same way as the experiment in question. In this way, wave functions do not describe single systems, but rather ensembles of systems; single experiments are mere
members of a greater ensemble. When applied to Schrödinger's cat, one comes up with the following response: the cat is not dead and alive at the same time; it is either entirely dead or entirely alive. A superposition of states never actually occurs, because the cat is only ever in one state; a wave function showing the cat as alive as well as dead merely reflects the statistical probability of each outcome over many identically prepared experiments. It does not mean that the cat is in both states at the same time.
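To make the contrast concrete (a standard textbook sketch, not something from the original article), the state of the atom-plus-cat system before the box is opened is usually written as an equal superposition:

    |psi> = (1/√2) ( |undecayed>|alive> + |decayed>|dead> )

The Copenhagen reading treats this as a literal description of a single boxed cat until a measurement collapses it to one term; the Ensemble reading treats the squared coefficients (1/2 and 1/2) merely as the frequencies with which 'alive' and 'dead' would be found over many identically prepared runs of the experiment.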
Einstein was one of a few notable proponents of the Ensemble interpretation. He wrote:

“The attempt to conceive the quantum-theoretical description as the complete description of the individual systems leads to unnatural theoretical interpretations, which become immediately unnecessary if one accepts the interpretation that the description refers to ensembles of systems and not to individual systems.”

The many-worlds interpretations are a collection of subtly varying interpretations which claim that there are an infinite number of outcomes, resulting from splits at branch points. A branch point splits the present world into two distinct (but equally real) worlds, neither of which can interact with the other, and each of which exhibits only one of the possible outcomes. With Schrödinger's cat, the 'original' single world splits once, after exactly one hour, to give one world in which the cat is still alive and one world in which the cat is dead. These two worlds will continue on in time and will also split at each branch point they come across. The two separate worlds are said to be in quantum decoherence, and this ensures that the two outcomes cannot interact with each other in any way. Since the two worlds are still
continuing, the two wave functions (the cat being alive and the cat being dead) are still both standing, and so a superposition of states has, in a sense, occurred: although the two wave functions are in separate worlds, since those worlds are equally real and equally valid, the cat is still considered to be in a superposition. Because the two worlds are in decoherence, however, neither of the wave functions will collapse (each is stable, free from any interference from the other), and so the superposition effectively remains whole.

In conclusion, the various interpretations of quantum mechanics give rise to odd answers. The 'standard' Copenhagen interpretation (the most widely accepted interpretation) dictates that the superposition collapses with the decay of the substance, regardless of the presence or absence of an observer, whilst the Ensemble interpretation lies at the opposite end of the spectrum: it denies that the cat ever even enters a superposition. Positioned between the two are the many-worlds interpretations (amongst others), in which the superposition is formed but never collapses.
States of Matter

Most people think that there are only three states of matter - solid, liquid and gas. In fact, there are potentially fifteen, some of which have weird and mysterious properties, such as zero viscosity (causing an object to flow spontaneously). In this article I will explore the various states of matter, starting with the familiar three states, moving on to more exotic states and finally exploring theoretical states that have not yet been experimentally confirmed.
Solids, Liquids and Gases
Most matter encountered on a daily basis is in one of the three "normal" states of matter - solid, liquid or gas. Solids come in two forms - crystalline and amorphous - which differ in their internal structure (a regular and an irregular arrangement of atoms respectively). Most of the objects we think of as solids are in fact crystalline solids, which have a regular ordering of atoms, though glass (an amorphous solid) is a notable exception. Crystalline solids are formed when a liquid is cooled slowly, whilst amorphous solids are formed when a liquid is cooled very quickly, leaving the atoms "frozen" in their positions.
Both liquids and gases are classed as fluids as, unlike solids, they are able to flow and take the shape of their container. Liquids have a fixed volume (like solids), and hydraulics (e.g. in car brakes) make use of a liquid's fixed volume: when the brake pedal is depressed, the brake pads are pushed by the incompressible hydraulic fluid against the wheels, converting kinetic energy into heat via friction and slowing the car down. Liquids are also useful as solvents, lubricants and coolants, primarily due to the liquid particles' ability to move around whilst remaining bound as a single entity. Gases have neither a fixed volume nor a fixed shape; however, a fixed number of gas particles occupies a fixed volume at a given temperature and pressure. This is called Avogadro's Law, after its discoverer. More generally, gases can be described macroscopically by the ideal gas laws, which relate the pressure, absolute temperature, volume and number of particles of a gas. These equations can all be derived from kinetic theory, which describes matter microscopically at the level of atoms.
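For reference (a standard relation rather than anything specific to this article), the ideal gas law mentioned above can be written as

    pV = nRT,

where p is pressure, V volume, n the number of moles, T the absolute temperature and R ≈ 8.31 J/(mol·K) the gas constant. Avogadro's Law follows directly: at fixed pressure and temperature, the volume depends only on the number of particles, whatever the gas.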
Low temperature states

At very low temperatures (approaching absolute zero), some liquids form another state of matter, called a superfluid. At 2.17 K (-270.98°C), liquid helium turns into a superfluid, and thus has zero viscosity and no thermal resistivity (all the superfluid is at exactly the same temperature). These properties lead to bizarre phenomena, such as the superfluid's ability to spontaneously climb up the sides of vessels and flow over the edge. The superfluid will also form a fountain if a capillary tube is placed into the liquid and heated (even by illumination with light).
When a gas or vapour is cooled to similarly low temperatures, a Bose-Einstein condensate can be formed. In a Bose-Einstein condensate, a large number of the particles occupy the lowest quantum state and have the smallest amount of energy possible, whilst a small number gain a large proportion of the energy and evaporate from the condensate. This leads to quantum effects that are usually negligible becoming apparent on a macroscopic scale.
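As an indication of just how cold "similarly low" means (a standard result, included here for context rather than taken from the article): for an ideal gas of bosons of mass m at number density n, condensation is expected below a critical temperature of roughly

    T_c ≈ (2π ħ² / (m k_B)) · (n / 2.612)^(2/3),

which for the dilute atomic gases used in laboratory experiments works out at of the order of a millionth of a degree above absolute zero.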
High temperature states

A plasma is an electrically neutral medium where electrons are ripped away from their parent atoms, usually due to high temperatures, leaving a mixture of free electrons and positive ions. These separated electrical charges allow plasmas to conduct electricity, and cause their behaviours to be markedly different from gases. Plasmas will also respond to magnetic fields,
due to the presence of individual electrical charges within the plasma. The equations that describe plasmas are relatively simple; however, their behaviour is exceptionally subtle and varied. The emergence of unexpected behaviour from a simple model is a typical feature of a complex system. Plasma complexity can manifest itself in the formation of features on a variety of length scales, and these features are often very sharp and can have a fractal form. Many of these features were first studied in the laboratory, and have subsequently been recognised throughout the universe. Far from being an abnormal, exotic form of matter, almost 99% of all matter in the universe (including the Sun) is plasma, and it occupies what we usually consider "empty space". Plasmas are also used in a variety of applications, ranging from recreational plasma globes to plasma cutters in industry. Another, more extreme, high temperature state is a quark-gluon plasma, a state in which all matter is thought to have existed for a few millionths of a second after the Big Bang. Usually quarks and gluons exist in a bound
state, forming protons, neutrons and other short-lived particles; however, in a quark-gluon plasma the constituent quarks and gluons are unbound, mainly due to the high energy density. In contrast to a normal plasma, which screens (dampens) electrical charge, a quark-gluon plasma screens colour charge, which is a conserved property of quarks that allows them to interact with the strong nuclear force. Understanding the properties of a quark-gluon plasma would help us understand the fundamental laws of physics better, as well as providing new insights into the very early Universe. In fact, one of the experiments at the Large Hadron Collider at CERN, dedicated to trying to understand more about quark-gluon plasmas, has recently managed to detect a signal that a quark-gluon plasma was formed in a collision.
Theoretical states of matter

There are also several unconfirmed theoretical states of matter, many of which could have useful properties. For example, the string-net liquid state of matter has potential applications in quantum computing. In normal solid matter, atoms align themselves so that their spins (a quantum mechanical property) form an alternating 3-D lattice. However, in a string-net liquid the atoms are arranged in such a way that two neighbouring atoms would have to have the same spin, giving the matter some odd properties, in addition to supporting some unusual proposals about the fundamental nature of the Universe. In fact, a mineral called Herbertsmithite is suspected to display some of the properties of a string-net liquid, and could in the future be used in quantum computers. So maybe one day, the material this lump of rock is made of could form the world's most powerful computer - maybe one powerful enough to be sentient.
We’re Just … Pieces Of String

Clash of the Titans
For over 50 years, physicists from across the globe have been carefully and quietly avoiding an ominous problem galloping over the horizon. The problem is this: modern physics, as we know it, is based upon two great theories, quantum mechanics and Einstein's general relativity, and the two refuse to work together. In order to understand how string theory came about, we must investigate this violent antipathy. Quantum mechanics is the study of our universe on the smallest of scales. It provides a theoretical foundation with which physicists can begin to understand the behaviour and interactions of matter on an atomic and subatomic scale, from molecules to quarks. Over many years, physicists have managed to confirm experimentally that most of the predictions of these theories are indeed correct.
By contrast, Albert Einstein's general relativity is the study of all things big: galaxies, stars and so on. It tells us that the gravitational attraction between any two masses is due to their individual warping of space and time. Although Newton's Law of Universal Gravitation had already described this attraction, Einstein's general relativity was able to resolve numerous problems with Newton's law, mainly unexplained glitches in the orbits of some of the planets, including Mercury, as well as correctly predicting many effects of gravity, like gravitational time dilation, where gravity affects time itself. Now onto the problem: for a long time, most physicists have studied either things big and heavy (general relativity) or things light and minuscule (quantum mechanics), ignoring the other as if it didn't exist. However, when the universe
decides to be mischievous, it throws some extreme things at us. At the exact moment of the Big Bang, when the universe was supposedly created, it exploded from a particle so small it makes dust seem gargantuan. Inside a black hole, in its deep and dark depths, a colossal mass is crushed to an infinitesimal size. Being both "big" and "small", both of these situations require the equations of quantum mechanics and general relativity at the same time, but when the two are combined, it becomes clear that they are incompatible. Without a deeper understanding of the relationship between these two theoretical frameworks, we cannot understand the beginning of the universe or the interior of black holes. Thus the question which string theory tries to solve becomes clear: how can quantum mechanics be merged with general relativity to produce a coherent and consistent theory of quantum gravity?
All Tied Up
Superstring theory, or string theory (as it will be referred to for the rest of this article), is a relative child compared to the elderly pillars of quantum mechanics and general relativity, and it brings the two together, redrawing matter at its most fundamental level. In string theory, quantum mechanics and general relativity require one another to work properly. The union, according to string theory, is unavoidable. As you have probably been taught, atoms are made up of protons, neutrons and electrons, and the protons and neutrons are in turn made up of quarks. String theory suggests that if we examined quarks with a far greater degree of precision, we would find tiny, one-dimensional loops that, with the creative minds that scientists have, have been named "string". Who would have thought that pieces of infinitely thin, dancing rubber bands would solve the greatest problem of contemporary physics?
Brief History
Einstein was the first to realise that a unified theory was needed; he was a forerunner of his time. When he was investigating the forces of gravity and electromagnetism, he realised that an underlying principle was needed to bring the two together. Einstein was unable to answer his own question, but he would have been proud now: string theory promises to bring all forces and all matter together in one unified framework. In 1968, Gabriele Veneziano, a young physicist from Italy, was working as a research fellow at CERN and struggling to make sense of the properties he had observed of the strong nuclear force. He worked on this problem for many years until, one day, he made an astonishing breakthrough. Two hundred years earlier, the Swiss mathematician Euler had created a formula called the Euler beta-function, with no particular application in mind. Veneziano realised that the Euler beta-function managed to explain most of the features of the strong nuclear force, even though, after years of study, no one seemed to know why. In 1970, physicists showed that Euler's function exactly described the nuclear interactions between one-dimensional vibrating "strings". If these "strings" were small enough, it was argued, they would still look like point particles and would fit experimental observations. In the early 70s, experiments probing the subatomic world provided a number of observations that conflicted with what string theory suggested. After string theory had been essentially "disproven", it was confined to the sidelines as new theories were sketched up to match the properties of the strong force. However, there were many physicists who stood by string theory and attempted to figure out why it had fallen at such a hurdle. In 1974, the particle physicists Schwarz and Scherk realised that string theory had failed before because the theory contained an additional particle which did not match experimental observations. By investigating further, the two physicists realised that this extra particle matched the theoretical properties of the graviton, and with it came an astonishing discovery: string theory not only caters for the quantum world, but also includes gravity. In the late 1970s and early 1980s, string theory hit a brick wall when scientists decided to delve deeper and deeper into the equations which linked quantum mechanics and string theory. There were minor conflicts between the pair which threatened the existence of string theory again, which meant that, as before, gravity had resisted being linked up to the minute world of quantum mechanics. Thanks to the work of Schwarz and Green, by 1984 interest in string theory had been re-ignited. They showed that this conflict was minor and could be resolved, as well as demonstrating how, if expanded, string theory could bring all four forces and all matter together in a single framework for future study. When this was finally accepted by the physics community, there was a mad rush by particle physicists to fully discover the potential of string theory.
The years 1984 to 1986 gave birth to what was dubbed the "First Superstring Revolution", during which more than one thousand research papers were written on the subject. However, many of those who had jumped on the bandwagon were soon disheartened when they hit a stone wall yet again: the equations derived from string theory could only be estimated, meaning their solutions were approximate. This brought progress to a halt, as the approximations hindered further development and fundamental questions lay unanswered.
Lots and lots of string
Instead of the conventional point particles commonly believed in before string theory, if each point particle were examined with an extremely powerful microscope beyond current capabilities, we would find that each particle is a minute, vibrating piece of string. Each string loop is about 1.6×10^-35 metres long, roughly a Planck length, which is approximately 10^20 times smaller than an atomic nucleus. To confirm the existence of strings directly through experimental data would take energies many times greater than anything previously used. The main idea behind string theory is that the fundamental particles of the Standard Model (for example, electrons) are all essentially the same thing. Instead of being point particles, it is believed that, under string
theory, each fundamental particle is actually a piece of string. A point can only move, but a string can both move and oscillate in numerous fashions. These different ways of oscillating correspond to different fundamental particles: if it oscillates one way, it is a certain quark; if it oscillates another way, it is a photon. This also ties in with a great advantage of string theory: the spectrum of oscillations includes the graviton (the particle that mediates the force of gravity), which means gravity is built into string theory.
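For context (a standard definition, not something derived in the article), the Planck length quoted above comes from combining the three constants that govern quantum mechanics, gravity and relativity:

    l_P = √(ħG / c³) ≈ 1.6 × 10^-35 m,

where ħ is the reduced Planck constant, G the gravitational constant and c the speed of light. It is the natural scale at which quantum effects and gravity are expected to become comparably important, which is why strings are conventionally taken to be roughly this size.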
Particles are classified according to their spin as bosons (integer spin) or fermions (odd half-integer spin). Bosons carry forces, and include the graviton, which carries the gravitational force. Fermions include the various quarks and electrons. String theory as described so far is called Bosonic String Theory, as it only describes bosons, the particles which carry force, and does not include fermions. When supersymmetry is added to Bosonic String Theory, a new theory is born which describes both bosons and fermions, which means it is capable of explaining both the forces and the matter of which our Universe is made. There are five different theories we can create this way: three are different superstring theories, of which one uses open strings as the fundamental building block, whereas the other two use closed strings as the basic object; mixing the best parts of supersymmetric string theory and bosonic string theory gives rise to two further consistent theories of strings, which are called Heterotic String Theories.

M-Theory is supposedly the theory that brings it all together. By adding an eleventh dimension, the mathematics all falls into place. The extra dimension was just what the doctor ordered, too: it means that a string can expand infinitely into a "floating membrane". Apparently, our universe exists on such a floating membrane, which co-exists alongside an uncountable number of parallel universes. This also explains why gravity is such a weak force: mathematically, it becomes clear that gravity could leak into our membrane from another
nearby membrane. M-Theory brings together all the different superstring theories as different ways of approaching the same ultimate theory. It has become clearer and clearer that this framework succeeds and triumphs where the old Standard Model did not. M-Theory could also hold the key to scientists figuring out how the Big Bang came about: through two membranes colliding? The energy produced in this theoretical collision is consistent with the mathematics. String theory does have its doubters (including Mr McKane). As it predicts phenomena which we do not have the ability to measure, such as an eleventh dimension, an infinite number of parallel universes and tiny one-dimensional strings, it may be a long time before the mathematical elegance of this theory is universally celebrated, but one must expect that one day the brilliance of M-Theory and superstrings will be proven correct. Who knows when that time will be?
The P versus NP Problem

Does P equal NP? This question encapsulates one of the most important open problems in mathematics and theoretical computer science. Such is the problem's importance that the Clay Mathematics Institute has listed it as one of their eight Millennium Prize problems and offers a one million dollar bounty for the first successful resolution of the problem. Before we get to the meaning of the question and its implications, let us first look at the early history of computer science.
Gödel, Turing, Cook
In 1931, the Austrian logician Kurt Gödel published a paper entitled "On formally undecidable propositions of Principia Mathematica and related systems", in which he showed, to the amazement of the mathematical community, that an axiomatic formalisation of arithmetic that is both complete and consistent cannot exist [1], a result known as Gödel's Incompleteness Theorem. This got other mathematicians thinking about formalising methods of "computation". Up until the late 1930s, a computer was understood to be a person tasked with carrying out repetitive calculations. The early founders of computer science were not at all concerned with the development of hardware, but rather with theoretical study to determine the capabilities of mechanical "computers". One of the early pioneers was Alan Turing, a mathematician and cryptanalyst at Bletchley Park during World War II, who is today widely regarded as one of the founders of modern computer science. In 1937, Turing devised a theoretical model of computation (known today as the Turing Machine) to prove that a computational method that can reliably decide whether a mathematical statement is true or not cannot exist. He also settled the so-called "Halting Problem", showing that no computational method could ever determine whether a given program would complete its computation or run forever. In the 1960s, researchers began to investigate more practical problems, such as how the resources needed for computation scale to large inputs; this is a large part of the field now known as computational complexity theory. Computational complexity theory also studies the classification
of computational problems according to their intrinsic "difficulty". In 1971, the computer scientist Stephen Cook showed the existence of a certain class of "difficult" problems (NP-complete problems) and laid the rigorous mathematical foundations for the P versus NP problem [2]. Before we get to the details, however, we need to consider Turing Machines in greater detail.

[1] All true statements expressible in the symbols of a formal system are theorems if the formal system is complete. A consistent set of axioms within a formal system is such that contradictions do not arise in theorems generated from them. Gödel showed that all "sufficiently powerful" formal systems are incomplete, but that is for another article. For more information refer to Hofstadter, Douglas, Gödel, Escher, Bach, Basic Books, 1979.
[2] In fact, a letter discovered in the 1990s shows that Gödel was probably the first to think about the P versus NP problem, writing "How fast does [the number of steps a machine requires] grow for an optimal machine … If there actually were a machine with [an efficient time complexity] … this would have consequences of the greatest magnitude. That is to say, it would clearly indicate that … the mental effort of the mathematician in the case of yes-or-no questions could be replaced by machines". However, Cook was the first to state the problem precisely.
Turing Machines
In order to make a rigorous mathematical investigation of computers, it is convenient to use theoretical models. The archetypal model of computation is the Turing Machine (TM) conceived of in 1937 by its namesake, Alan Turing. A TM can be thought of as having an infinite reel of tape on which it can manipulate symbols (say for example, 1s and 0s), according to a predefined set of simple rules called a transition function or "a table of behaviour" which can loosely be thought of
as the machine's program. A "reading device" can scan a single symbol from the tape and then act according to the machine's internal state and the symbol read from the tape. For example, a machine may start with a blank input tape (although the input may be any finite-length string of symbols) and have the transition function: "Begin with state A. In state A, if the symbol at the reader is blank, write a 1 to the tape, move to the right and change the internal state to B. In state B, if the symbol at the reader is blank, write a 0 to the tape, move to the right, and change the internal state to A". This is more conveniently described using notation such as {(A, _, 1, R, B), (B, _, 0, R, A)}. This rather trivial program simply outputs the infinite string 1010101010… and you would be forgiven for thinking that such a primitive model of computation could never do anything of practical use. In fact, far from it: the TM represents the most powerful model of computation. No piece of computer hardware that exists now or in the future will exceed the basic problem-solving ability of a TM [3]. One of the important aspects of the TM described here is that for each state and symbol, there is only one possible action. Such a TM is technically called a Deterministic Turing Machine (DTM), because the eventual output of the machine is completely determined by its initial state, and will always be the same for the same input. This is an important concept that we shall return to later.
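As an illustration (not part of the original article), the two-state machine described above is small enough to simulate in a few lines of code. The sketch below is a minimal deterministic TM simulator written in Python; the transition table is exactly the {(A, _, 1, R, B), (B, _, 0, R, A)} rule set given in the text, and the step limit is an arbitrary choice so the otherwise endless run terminates.

    # Minimal sketch of a deterministic Turing Machine simulator.
    # Transition table: (state, symbol) -> (symbol_to_write, move, next_state)
    # '_' stands for a blank cell; the tape is modelled as a dict from position to symbol.

    def run_tm(transitions, start_state, steps):
        tape = {}                      # blank tape
        head, state = 0, start_state
        for _ in range(steps):         # cap the run; this particular machine never halts
            symbol = tape.get(head, '_')
            write, move, state = transitions[(state, symbol)]
            tape[head] = write
            head += 1 if move == 'R' else -1
        # Return the written portion of the tape as a string, in positional order
        return ''.join(tape[p] for p in sorted(tape))

    table = {
        ('A', '_'): ('1', 'R', 'B'),   # in state A on blank: write 1, move right, go to B
        ('B', '_'): ('0', 'R', 'A'),   # in state B on blank: write 0, move right, go to A
    }

    print(run_tm(table, 'A', 10))      # -> '1010101010'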
Time Complexity
An advantage of representing computation as a series of basic steps is that we can analyse how the number of steps required for an algorithm grows in proportion to the size of the given input. This is called the time complexity of an algorithm. The word "time" here does not refer to actual time as measured in seconds: in order for time complexity to be an effective metric for comparing the intrinsic complexity of algorithms, it should be agnostic about the machine upon which they are run. Let us analyse the time complexity of a problem. Consider that you are trying to pack a container with smaller boxes of different sizes such that
they fit perfectly in the container. We can pose this as the decision problem (yes or no question), "Is there some subset of the set of boxes that will fill the container?". The way in which you or a computer might go about solving this problem systematically is called an algorithm (specifically, a decision procedure in this case, given that our problem has a yes-no answer). Take for example the case where the container's volume is 10 m^3 and the boxes provided have volumes {1, 2, 3, 4, 5}. A brute-force approach would enumerate every subset ({}, {1}, {2}, {1,2}, {1,2,3} and so on), of which there are 2^n (where n is the number of boxes), and check whether their total is equal to ten. In the worst case, we would have to check all 2^n of the subsets and sum up to n elements, so we say that the time complexity of the decision procedure is n·2^n, conventionally represented using "Big O" notation as O(n·2^n). This means that the number of steps required for computation (the running "time") grows in proportion to n·2^n, where n is the input size. Note that "Big O" notation is formally defined as representing the asymptotic or limiting behaviour of an algorithm, and therefore we can omit constants and low-order terms because only the dominant term is required for comparison. For example, an algorithm taking n^2+5n+1 steps is said to have time complexity O(n^2) because, in the limit as n tends to infinity, the quotient (n^2+5n+1)/n^2 is one. Returning to our container packing problem, we can tell quickly by inspection that 1+4+5 is a valid solution, and for very small values of n it would be a trivial operation for a computer to find the answer using the decision procedure outlined above. However, algorithms with time complexities like O(n·2^n) are said to run in exponential time, and even for modest values of n such as 100, the value of n·2^n is of the order of 127 thousand billion billion billion. The number of instructions (simple operations) per second that a top-of-the-range hexacore Intel Core i7 processor can execute is 147.6 billion, meaning, in rough terms, the operation would take no fewer than 27 trillion years to complete in the worst possible case. In general, we can say that if the time complexity grows faster
than any polynomial as the input size tends to infinity, it runs in super-polynomial time, such as O(3^n), O(n!) or O(n^n).

[3] This is the widely accepted hypothesis known as the Church-Turing thesis. Although it might be unprovable, it has very interesting philosophical and technical implications.
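Returning to the container-packing example above, here is a small illustrative sketch (added here, not from the original article) of the brute-force decision procedure just described. It enumerates all 2^n subsets of the box volumes, so its running time grows exponentially with the number of boxes; the volumes and target are the ones used in the text.

    from itertools import combinations

    def can_fill(volumes, target):
        # Brute-force decision procedure: try every subset of the boxes
        # (2^n subsets for n boxes) and check whether any sums to the target.
        n = len(volumes)
        for size in range(n + 1):
            for subset in combinations(volumes, size):
                if sum(subset) == target:
                    return True, subset   # a witness, e.g. (1, 4, 5) below
        return False, None

    print(can_fill([1, 2, 3, 4, 5], 10))  # -> (True, (1, 4, 5))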
Time Complexity Classes: P and NP
In order to compare the inherent difficulty of problems, we have seen that it is useful to classify them according to some metric such as time complexity. Arguably the most fundamental time complexity classes are P and NP. P stands for "Polynomial Time", and contains computational problems that have a polynomial time complexity such as O(n) or O(n^4). Informally, we can say that P is the class of problems to which solutions can be found quickly (their solutions can also be verified to be correct quickly, as we shall see). It may seem rather artificial to designate problems with a polynomial time complexity as "quickly" or "efficiently" soluble. This designation is known as Cobham's thesis and is merely a useful convention, because polynomials are the archetypal "slow-growing" functions. There are very few cases of algorithms with a polynomial time complexity of an order greater than about O(n^3), and even high-order polynomials are slow growing in comparison to exponential-time algorithms: the limit, as n tends to infinity, of the quotient of any polynomial (say n^(10^100)) and any exponential function (say 1.01^n) is always zero. The class P is a subset of a wider class called NP, which contains computational problems soluble in "Non-Deterministic Polynomial Time". To understand what this means, we need to revisit the concept of Turing Machines. Recall the fact that for each state, there was only one possible course of action for each symbol read; hence the path that a Deterministic Turing Machine (DTM) follows is completely determined by the transition function. Imagine instead that for any state
and symbol, the machine can follow any number of possible paths and, furthermore, the machine is able to choose the best possible course of action. Such a TM is called a Non-Deterministic Turing Machine (NDTM). These special properties would allow such a machine to solve problems like the container packing problem in polynomial rather than super-polynomial time. It is worth noting that the claimed "special properties" of an NDTM do not make it a more powerful model of computation. It still holds true that a DTM represents the fundamentally most powerful possible computing model, because it can emulate an NDTM simply by trying every route of computation and deciding afterwards which path was the best; this simulation, however, takes exponential time. We can therefore state NP as being the class of problems "soluble in polynomial time on an NDTM". The key aspect of NP problems is that a correct solution to a problem in NP can be verified in polynomial time by a DTM. Recall the container packing example: although finding the solution takes exponential time, we could check our answer (1+4+5) was correct in polynomial time, because addition of n-digit numbers is O(n). Therefore, P is a subset of NP, where NP is the set of problems with "efficiently" verifiable solutions, but where problems in P have both "efficiently" verifiable solutions and "efficient" decision procedures. The P versus NP problem asks whether there really is a fundamental divide between problems in P and NP: could efficient algorithms exist for all problems contained in NP, or are some problems fundamentally more difficult? A final important concept, before we explore exactly why the question of P versus NP is so important, is that of NP-completeness. A problem in NP is said to be NP-complete if every other problem in NP can be reduced to it. Essentially, if one problem can be shown to be NP-complete then all other problems in NP can be expressed in terms of this problem, and crucially, this conversion from one problem to another can be done in polynomial time. Stephen Cook was the first to prove the existence of NP-complete
problems in his 1971 paper, "The Complexity of Theorem Proving Procedures". He showed that any problem in NP could theoretically be reduced to the Boolean Satisfiability Problem (SAT). Although SAT is a rather esoteric logic problem, one year later, in 1972, Professor Richard Karp of UC Berkeley showed a further twenty-one important problems to be NP-complete, including a version of the subset sum problem presented above. The great benefit of this discovery is that it allows us to restate the question of P versus NP in a potentially more manageable fashion. Given that some problems have been proven to be NP-complete, it will now suffice to show that a single NP-complete problem is soluble in polynomial time in order to prove P=NP, and vice versa. Given the wide range of fields in which NP-complete problems are found, it is effectively the case that top researchers in fields as diverse as mathematics and biology have been battling against versions of the same problem, and we still have no efficient solution to an NP-complete problem.
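To illustrate the asymmetry at the heart of NP (again a sketch added for illustration, not anything from the original article): checking a proposed assignment against a Boolean formula in conjunctive normal form takes time proportional to the size of the formula, whereas the obvious way to find a satisfying assignment tries up to 2^n of them. The clause encoding below (a list of lists of signed variable numbers) is just one common convention.

    from itertools import product

    # A CNF formula as a list of clauses; each literal is +v (variable v) or -v (its negation).
    # Example: (x1 or not x2) and (x2 or x3)
    formula = [[1, -2], [2, 3]]

    def satisfies(assignment, cnf):
        # Polynomial-time verification: every clause must contain at least one true literal.
        return all(any((lit > 0) == assignment[abs(lit)] for lit in clause) for clause in cnf)

    def brute_force_sat(cnf, n_vars):
        # Exponential-time search: try all 2^n assignments of True/False to the variables.
        for values in product([False, True], repeat=n_vars):
            assignment = {i + 1: values[i] for i in range(n_vars)}
            if satisfies(assignment, cnf):
                return assignment
        return None

    print(brute_force_sat(formula, 3))  # e.g. {1: False, 2: False, 3: True}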
Consequences
At this point you may be thinking that the P versus NP problem is no more than a pure mathematician's folly and that being able to solve NP problems such as SAT efficiently is of meagre importance. However, the class NP is full of significant and practical problems in a wide range of fields, with applications to problems such as protein folding, market behaviour and task scheduling, to name but a few. One important problem in NP is that of integer factorisation: given an integer n, find integers p and q, both greater than one, such that n=pq [4]. Clearly a solution would be efficiently verifiable, as you merely need to multiply two numbers, and multiplication is O(n^2). Finding the factors in the first place, however, is significantly more difficult. This is the key to much of modern cryptography, most notably the widely used RSA algorithm, which secures traffic across the internet. Even ciphers such as the Advanced Encryption Standard
(AES), used worldwide by organisations like the NSA, are not based on factoring, but breaking them is still a problem in NP (given some plaintext-ciphertext pairs, a candidate key is quick to check). If P=NP, integer factorisation becomes "easy", key recovery in general becomes tractable, and modern cryptography would become all but useless. Another problem in NP which is of great interest to industry is known as the "Travelling Salesman Problem" [5]. Consider a salesman planning to visit n towns exactly once in order to hawk his wares. He knows the distances between each pair of towns, but is there a route shorter than a given distance x? Of course, he could simply calculate the length of each possible route and see if any satisfy the given requirement, but the number of possible routes is n! (n factorial), which grows very quickly: a modest 20 cities requires the salesman to check about 2.4×10^18 routes. If P equals NP, then the shortest route could be found with comparative ease, a result with applications to problems such as mail delivery, laying cables for telecommunications and even genome sequencing [6]. The implications of P equalling NP extend far deeper than these few examples. If P=NP then for many problems with efficiently verifiable solutions, we can also potentially find solutions efficiently. Given that any mathematical proof could theoretically be written in very precise terms using a formal system (such as the propositional calculus), and that an algorithm could conceivably be constructed to check that the syntax and derivation are correct according to the rules of the formal system, it would appear that checking the validity of mathematical proofs is in NP. This conforms to our everyday experience with mathematics: one can follow the steps of a proof and be satisfied that it is correct quite quickly, but the initial effort to devise such a proof seems significantly greater. However, if P=NP, finding proofs of mathematical conjectures could be done quickly by machines. Some have even gone as far as to suggest that this can loosely be extended to the arts, and whilst we must be careful in straying too far from the well-defined realm of mathematics, surely some creative tasks are, in a loose sense, in NP but outside of P. While MIT Professor Scott Aaronson
may be exaggerating when he states that, "[If P=NP], everyone who could appreciate a symphony would be Mozart", a proof that P=NP would still unquestionably be a paradigm shift in mathematics. Although by virtue of these consequences it 'feels' unlikely that P=NP, this sort of feeling has no value in mathematics, and until we have a proof one way or the other the question remains wide open. (It has not been proven that integer factorisation truly lies outside of P, but it is widely thought to be the case, and for the sake of simplicity let us assume it to be so. The most efficient algorithm known runs in roughly O(exp[(64b/9)^(1/3) · (log b)^(2/3)]) time for a b-bit number, which is still super-polynomial.)
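The same verify-versus-find gap can be seen in a few lines of Python. The sketch below is purely illustrative (real RSA moduli have hundreds of digits, and serious factoring algorithms are far more sophisticated than trial division): checking a claimed factorisation is a single multiplication, while the search takes on the order of √n steps, which is exponential in the number of digits of n. Exactly the same asymmetry applies to the salesman's routes above: adding up the length of one route is easy, trying all n! of them is not.

    def verify(n, p, q):
        # Verification: two trivial checks and one multiplication.
        return p > 1 and q > 1 and p * q == n

    def find_factor(n):
        # Search by trial division: in the worst case (n = p*q with p and q of
        # similar size) this takes on the order of sqrt(n) steps, i.e. time
        # exponential in the number of digits of n.
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return None  # n is prime

    n = 2021
    print(find_factor(n))      # (43, 47) -- found only after trying every d up to 43
    print(verify(n, 43, 47))   # True, essentially instantly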
Challengers
In August 2010, HP Labs researcher Vinay Deolalikar caused a stir when he published a paper entitled "P≠NP", claiming to have resolved the problem once and for all. Mathematicians quickly scrambled to verify the proof, and Stephen Cook is even reported to have said that "[Deolalikar's proof] appears to be a relatively serious claim to have solved P vs NP". Since then, however, his proof has been found to contain irreparable flaws. Deolalikar is not the first to have claimed to have solved the P versus NP problem. Gerhard Woeginger maintains a list of some fifty-eight attempted proofs, of which thirty-four claim P=NP and twenty-four claim P≠NP. It is worth noting, however, that only one of these proofs has appeared in a peer-reviewed journal, and none is recognised as a valid proof. In fact, it is widely believed that P will be shown not to equal NP. The best evidence for this is the resilience and universality of NP-complete problems: despite all the possible ways to approach the problem, be it showing that a postman can never choose the shortest route efficiently or proving that SAT cannot be solved quickly, we still have no hint of a proof. There
is also the wonderful "argument from self-reference" pointed out by Scott Aaronson: the very question of P versus NP would itself seem to be a problem in NP, and so if P=NP then it would be easy to prove that P=NP, and vice versa! It is even possible that the P versus NP problem is fundamentally undecidable. Either way, mathematicians still have their place in the world, at least for now.
The Travelling Salesman Problem is normally stated as the problem of finding the shortest route between the cities, rather than asking whether a route shorter than a given length exists; verifying a solution to the first version would seem to require enumerating every route, which is why it is the decision version given here that is, by definition, in NP.
http://www.ncbi.nlm.nih.gov/genome/rhmap/
Glossary of terminology
Computational complexity theory
A branch of theoretical computer science dealing with the mathematical analysis and classification of "computational problems" according to how efficiently they can be solved, usually judged by running time or required memory space, given a certain model of computation (e.g. a Turing Machine).
Computational problem
A problem that can, at least in principle, be solved by an algorithm.
Decision problem
A computational problem that has a yes-or-no answer.
Algorithm
A finite, definite, effective and logically organised set of instructions that produces an output (here, producing an output by acting on an input).
Decision procedure
An algorithm that solves a decision problem.
Polynomial time
An algorithm runs in polynomial time if it has time complexity O(nᵏ) for some constant k. We say that such problems are efficiently soluble.
Super-Polynomial time
An algorithm runs in super-polynomial time if its time complexity grows faster than any polynomial, for example exponential time complexity O(kⁿ).
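To see how quickly the gap opens up, a rough illustration (the figures are simple arithmetic rather than anything from the article): for an input of size n = 100, an algorithm taking n³ steps needs 10⁶ steps, whereas one taking 2ⁿ steps needs about 1.3×10³⁰, far more than any conceivable computer could ever perform.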
NP
The class of computational problems soluble in polynomial time using an NDTM, or, equivalently, whose solutions can be verified in polynomial time using a DTM (a deterministic Turing Machine).
P
A subset of the NP class containing problems soluble in polynomial time using a DTM.
Turing Machine (TM)
A theoretical model of computation with infinite memory that performs computations by manipulating symbols according to a specified set of rules, and is capable of computing anything that is computable.
Non-Deterministic Turing Machine (NDTM)
A type of TM that has more than one possible action for a given state and symbol, and which can choose the correct action so as to achieve perfect efficiency.
Big O notation
Used in mathematics to describe the behaviour of a function for large inputs, and in computational complexity to express how an algorithm's resource requirements, such as time or memory, grow with its input size.
NP-complete
A problem in the NP class to which every other problem in NP can be reduced in polynomial time.
Particles and Sparticles
A Short Summary of the Standard Model
In an article regarding prime numbers in last year's edition of SCOPE, one writer rather erroneously compared the role of primes in mathematics with that of atoms in chemistry, discussing how both were the elementary building blocks that make up everything in their respective subjects. However, it is common knowledge that atoms are neither the smallest nor the most basic particles in our universe. For example, you will have heard of the electron, the proton and the neutron, all of which are constituents of the atom itself, but can we cut these down even further? The electron is indeed "un-cuttable" (unless we delve into the world of strings, but that is another story), but the proton and the neutron are hadrons, or more specifically baryons, each containing three elementary particles called quarks. The word "hadron" is likely to ring a bell, thanks to the ubiquitously discussed Large Hadron Collider (LHC), the largest particle accelerator in the world. Some of the purposes of the LHC will be revisited later on. The neutron and the proton are hadrons because they contain quarks, unlike the electron, which belongs to a class of elementary particles known as leptons. As far as we know at the moment, all matter in the universe is composed of quarks and leptons. Both quarks and leptons come in six different types (excluding antimatter equivalents), as illustrated by the table at the top right.
As mentioned before, protons and neutrons are composed of three quarks each. You may know that the proton has a positive charge of +1, and this is because it contains two up quarks and one down quark. The neutron has no charge, due to its composition of one up quark and two down quarks (a quick worked check of this arithmetic appears at the end of this section). As a result, up and down quarks are by far the most abundant members of the quark family. The reason for this is that the heavier quarks (the ones found lower down in the table) are also more unstable and may undergo particle decay, transforming into another elementary particle with less mass plus a force-carrying particle known as a boson, depending on which fundamental force (e.g. the strong force or the weak force) is responsible for the reaction. Of course, if this new elementary particle is itself unstable, it may decay again, and so on. This process of particle decay, though similar, is distinct from radioactive decay (which you may have come across in Physics lessons already), where an unstable nucleus within an atom emits
particles or gamma radiation to form a smaller, more stable nucleus. However, just as radioactive substances have a half-life, the average time for the radioactivity of a substance to decrease by half, particles have a characteristic lifetime, the average time for a particle, such as one of those shown in the table, to decay. To emphasise the instability of some of the more massive particles, take the top quark as an example. It has a lifetime of 5×10⁻²⁵ s (i.e. a top quark takes on average 5×10⁻²⁵ s to decay into smaller particles), which, coupled with the fact that much energy must be provided to create heavier particles in experiments, means that top quarks are very rarely detected and their properties are still relatively unclear. On the other hand, up and down quarks are fairly stable: the lifetime of a proton exceeds 10²⁹ years. The process of particle decay works in a similar way with the charged leptons: the electron, the muon and the tau. The electron is of course the
most abundant, found in all atomic structures. The muon and the tau are much rarer; muons are found in cosmic rays, whilst the tau is only found in experimental situations, owing to the high-energy reactions and technology needed to produce it and its great instability, similar to the top quark. Each of these three particles has a chargeless and virtually massless partner, commonly known as a neutrino. The fact that they have no charge means that the electromagnetic force cannot act on them, and hence neutrinos can only interact via the weak force, allowing them to travel through matter undisturbed. As a result, neutrinos are immensely difficult to detect, yet there is an abundance of them in the universe despite the fact that they rarely interact with anything else. For example, over 50 trillion solar neutrinos are passing through your body every second! In the Standard Model, there are four fundamental forces: the electromagnetic force, the weak nuclear force, the strong nuclear force and gravity. It may seem odd to think of forces as particles at first, but on tiny scales our usual concept of a force falls apart. So instead of the invisible "push-pull" image that we imagine, physics describes the fundamental forces as quanta fired back and forth between elementary particles, exchanging energy. Hence, the gauge bosons are also called "exchange particles". Photons are the force carriers for electromagnetism, the force responsible for much of what we see in everyday life. When you touch something, the sensation that you experience is due to the electromagnetic forces exerted against your skin by the particles on the surface you are in contact with. In fact, it is technically impossible to truly touch something, due to this electromagnetic repulsion. Photons are commonly referred to as the basic units or carriers of light, but they are also the carriers of all electromagnetic radiation, such as radio waves and infrared. Gluons carry the strong interaction and the W and Z bosons carry the weak interaction. Both the strong and the weak nuclear forces cannot
be observed in everyday life, due to their very short ranges, unlike electromagnetism and gravity. The strong force holds the nucleus together, which would otherwise fly apart due to the electromagnetic repulsion between the positively charged protons. Apart from gluons, quarks are the only elementary particles to interact via the strong force. The weak force was proposed to explain interactions involving particles like electrons and neutrinos, which are not affected by the strong force; the process of beta decay, for instance, can be explained by the weak interaction. You may have noticed that gravity has lacked an explanation thus far, and that is because the Standard Model cannot as yet explain this final fundamental force. Gravity is indeed quite an anomaly. Not only is it much weaker than the other forces (10³⁸ times weaker than the strong force), but it also lacks a gauge boson like the others. A hypothetical "graviton" has been proposed, but at the moment this is simply speculation, not backed by experimental evidence. The connection and interactions between the fermions and the bosons is a key aspect of the Standard Model. The difference between fermions and bosons results from their difference in spin, which, like mass and electrical charge, is an intrinsic property of a particle. Spin can be thought of as a type of angular momentum, and as such is measured in the same units of joule-seconds, though in practice values of spin are expressed as integer (e.g. 0, 1, 2… for bosons) or half-integer (e.g. 1/2,
3/2, 5/2… for fermions) numbers (known as spin quantum numbers), indicating integer or half-integer multiples of Dirac's constant, ħ. The Standard Model predicts the existence of an elementary particle called the Higgs boson, which is key to understanding the origin of mass in our universe. Without this particle, all other particles should theoretically be massless according to the Standard Model, and hence the Higgs particle is vital in eliminating the inconsistencies that would otherwise arise. It may seem strange that we should consider the origin of mass, as we are accustomed to seeing mass as an intrinsic property of matter that requires no further explanation. However, it is something that is not fully understood by science at the moment. For example, how would you explain the difference between something with mass (like ourselves) and something without (photons, gluons, possibly gravitons)? The basic idea of the theory is that there exists an invisible Higgs field, through which all the particles in the Universe are swimming at all times. Particles which interact with the field are given mass, and the more strongly they interact with the field, the more massive they become (massive in terms of mass, not size). Particles like the photon do not interact with the Higgs field, and hence have no mass. The Higgs particle, the mediator particle of this Higgs field, fits in with all the calculations and predictions of the Standard Model. All that is left is to find it (one of the main motivations for building the LHC).
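As a quick check of the quark arithmetic mentioned earlier, using the standard charges of +2/3 for the up quark and −1/3 for the down quark (figures which appear in the table rather than in the text above):

    charge of proton (up, up, down) = 2 × (+2/3) + (−1/3) = +1
    charge of neutron (up, down, down) = (+2/3) + 2 × (−1/3) = 0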
Beyond the Standard Model: Supersymmetry and Dark Matter
The Standard Model may be highly successful in explaining scientific results and supported by strong experimental evidence, but any physicist will acknowledge that the theory is incomplete. A key ambition in particle physics is unification: the ultimate goal is to produce a Theory of Everything that can explain everything that happens, and could happen, in the Universe. An example of unification lies in the forces. All four forces have wildly different ranges and strengths, as described before, but it is hoped that they can all be unified and explained in one single theory. A connection between the electromagnetic force and the weak force has already been established. At low energies, the electromagnetic force is about 10¹¹ times stronger and has an infinite range, compared to the weak force, which acts on an atomic scale. This is because the W and Z bosons have a large amount of mass and can therefore only travel short distances, while the photon, the carrier of the electromagnetic force, is massless and can travel much further. However, above energies of around 100 GeV the masses of these particles become negligible compared with the energies involved, the strengths of the forces become the same, and the electromagnetic and weak forces combine into a single electroweak interaction. This is the basis of electroweak theory. It is thought that just after the Big Bang, in the presence of incredibly high energies, there was one single, unified superforce, and only later, when the Universe cooled down, did it split into the forces we observe today. So naturally, the next step is to try to merge the strong force with the electromagnetic and weak forces and achieve Grand Unification. The energy required to provide direct experimental evidence for this is, at the current stage of man-made technology (such as particle accelerators), completely out of reach, given the masses of the particles involved. From a theoretical standpoint, using the Standard Model to calculate the strengths of the three forces at
very high energies produces results which show that they do indeed draw closer together. However, these results give only an approximate intersection at which the three forces unify, which, although a success in itself, has led physicists to explore a range of Grand Unified Theories (GUTs) beyond the Standard Model in search of a more satisfying conclusion. One of the most serious inconsistencies in the Standard Model also provides strong motivation for physics beyond the theory we have now. As we have already seen, all elementary particles are given mass through the strength of their interaction with the Higgs field. But what about the mass of the Higgs itself? The particle will have a bare mass, plus the mass gained from its interaction with all of the other elementary particles, making its mass huge. This unfortunately disagrees with experimental evidence. Clues suggest that its mass is likely to be lighter than 160 GeV, if not 130 GeV, well within the range of discovery at the energy levels of the LHC. (If the interlinking of mass and energy is confusing, consider Einstein's famous equation E = mc², E being energy, m being mass and c being the speed of light, a constant. From this we can see that mass and energy are directly proportional, and as a result, the more mass a particle has, the more energy is required to create it.) The Standard Model, however, calculates the Higgs mass to be 17 to 18 orders of magnitude greater. The only solution we have at the moment is to fine-tune the parameters used to remove the discrepancy, to an extent that physicists deem unnatural. Hence we have what is known as the hierarchy problem. An idea that was first proposed in the '60s, and then developed into a full, realistic theory in the early '80s, involves a symmetry between fermions and bosons. The basis of this theory is that for each elementary particle we know today, there is a superpartner particle which differs in spin by 1/2. Essentially, the number of elementary particles is doubled to include these extra superpartners. The general naming scheme is to add an "-ino" suffix to the spin-1/2 superpartners of spin-1 elementary particles, and
an "s-" prefix to the spin-0 superpartners of spin-1/2 particles. So, for example, the superpartner of the photon (which has spin 1) is the photino (with spin 1/2), and the selectron (with spin 0) is the superpartner of the electron, which has spin 1/2. If these superparticles (or sparticles, to keep in the scheme of things) exist, they would have to be much heavier than their partners, because otherwise they would have been discovered long ago. But according to leading theorist Steven Weinberg, there would be no need to "sweep them [the superparticles] under the rug, as an excuse for not having discovered them yet", as it is actually natural for them to be heavier. The masses of the elementary particles that we know today are bound by their interactions with the Higgs field, but this does not apply to the superparticles, and hence they can take any mass they like (figuratively speaking, of course), and it is natural for them to be heavier than their counterparts. This ambitious idea of supersymmetry was originally ignored by the scientific community, until a startling observation gave reason to work on the theory. When the mass of the Higgs is calculated with supersymmetry in mind, it is found that the contributions to its mass from the elementary particles we have already discovered are naturally cancelled out by the contributions of their heavier superpartners. Therefore, the quantum contributions to the Higgs mass come from the sum of the differences, or "leftovers", between the particles and sparticles, and hence the mass of the Higgs does not shoot out of control as it would without these extra particles. We should also remember that the Higgs itself has a superpartner, called the Higgsino, which has spin 1/2. The Higgsino, as a fermion, is held by symmetry principles which keep it light, which in turn keeps its partner, the Higgs, light as well. It is then not surprising that the mass of the Higgs lies around the range towards which experimental evidence points. The hierarchy problem is essentially solved. But that's not all. Returning to the subject of the unification of the three forces, if the
strengths of the forces are recalculated to include supersymmetry, it is found that they intersect with remarkable accuracy, as seen in Figure 1, which plots the strengths of the three forces at different energy levels, calculated with and without supersymmetry. However, what may be the most exciting thing that supersymmetry brings to the table is the hypothetical existence of a group of superparticles called neutralinos. These are electrically neutral fermions, composed of a mixture of winos (the superpartners of the W boson), photinos and Higgsinos. In theory, the lightest of these neutralinos would be a very suitable candidate for the composition of the mysterious dark matter which permeates our universe, and of which there is around four times the amount of normal matter. The problem is that dark matter is very difficult to detect, for several reasons. One of the requirements for being dark matter is electrical neutrality, so that photons are not emitted and dark matter can stay dark; neutralinos satisfy this. Dark matter candidates cannot be so light as to behave as radiation, nor can they be too heavy. The lightest neutralino falls within the range of this Goldilocks principle.
Lastly, the candidate must be incredibly stable, remaining intact for billions of years. As the lightest superparticle, this neutralino cannot decay any further, and hence satisfies this requirement too. Furthermore, the lightest neutralino would interact with other matter only via the weak interaction, and only very feebly, which would explain why it has not been detected yet. Upon calculating how much neutralino residue there should be, physicists discovered that there would be much more of it than there is normal matter in the universe!
It is hoped that the LHC will provide some much-needed answers on this topic. In fact, on the subject of finding evidence for supersymmetry at the LHC, Steven Weinberg has stated: "I would say it's the most important target, even including the Higgs. In fact, many of us are terrified that the LHC will discover the Higgs particle and nothing else…" His reasoning is that discovering the Higgs boson would of course fit the final piece of evidence into the Standard Model, but that is a theory people already accept and have been using for years. What would be much more exciting is if a component of dark matter were to be discovered, and according to calculations, superparticles are well within the range of the LHC. So watch out for news on this to come; scientific history may be just around the corner.
Of course, each of the three issues discussed – the unification of the three forces, the hierarchy problem and dark matter – can be resolved by different and separate theories, but for obvious reasons scientists would prefer a single explanation. There have also been links between supersymmetry and the infamous and highly ambitious string theory, leading to combined superstring theories. This is perhaps our only path to a Theory of Everything at the present day: it attempts to combine a quantum theory of gravity with everything else by using fundamental one-dimensional vibrating strings which make up all the particles and forces in our universe. Steven Weinberg has expressed his disappointment at the lack of progress with string theory in the past few years, but the research area has garnered support and interest from other leading theorists such as Edward Witten and Leonard Susskind. A main problem, however, is that neither supersymmetry nor string theory has, as yet, any concrete experimental evidence.
Biomimetics Biomimetics, taken from the Greek, literally means the imitation of life. From the strength of a spider’s web to the aerodynamic superiority of an eagle’s wings, nature has evolved designs far more efficient than human intelligence has yet produced. Scientists are increasingly looking to the designs of nature to harness millions of years’ worth of natural selection.
Fluid dynamics
In the field of aerodynamics, the efficiency of plane wings was greatly increased with the addition of "winglets", angled wing tips that inhibit the formation of the trailing vortices which contribute to drag. These were developed following the work of American engineer Richard T. Whitcomb, who studied the wing of a bald eagle in order to understand the principles of aerodynamics. Realising that its curled wingtip feathers decrease drag by a significant degree, he implemented similar designs on planes to great effect. More recently, observing the flippers of whales, the aptly-named biologist Frank Fish discovered that bumps on the leading edge dramatically increase lift and stability by introducing beneficial vortices and reducing detrimental ones over the wing: in a controlled experiment, a bumpy leading edge produced a 32 percent decrease in drag compared with a conventional smooth wing. The angle of attack can also be 40 percent greater before stall occurs, which would mean a significant improvement in aircraft efficiency and safety. However, the greatest benefit may be for wind turbines: at slow wind speeds (around 8 m/s), performance is doubled. This is a revolutionary advance in aerodynamics, and these bumps, known as tubercles, are expected to be implemented in wind turbines, fans and even plane wings shortly. The "bionic car", made by Mercedes-Benz, is based on the shape of the tropical boxfish. Also known as the cowfish or trunkfish, this marine animal is known for its almost cuboid body. This odd shape has a drag coefficient of 0.06 (over five times better than the Ferrari F430 F1), resulting in 20% less fuel consumption.
Mercedes-Benz state that their diesel-powered prototype can reach 190 km/h, with a fuel economy of 30 kilometres per litre (70 mpg).
A plane and a bald eagle. Note the similarities between the eagle's curled wing feathers and the plane's winglets
Architecture
The Eiffel Tower was based on the strongest bone in the human body, the femur. Research into the internal trabecular fibres of this bone showed that they are precisely aligned with the lines of force, allowing the bone to bear an off-centre load easily. Engineer Gustave Eiffel incorporated this curved lattice into his world-famous tower, which, although initially predicted by critics to collapse under its own weight, still stands perfectly over a century later. The Victoria amazonica, or Amazonian water lily, is strong enough to hold the weight of a person owing to the radial fibrous ribs and cross-struts which cover the underside of its leaves. The Victorian Crystal Palace, designed by Joseph Paxton, used the water lily as inspiration; Paxton duplicated the fibrous ribs in iron and the leaf in glass, and constructed an engineering marvel 35 metres high and covering an area of over 7,000 square metres.
The Eastgate Centre in Harare, Zimbabwe is inspired by the self-cooling mounds of African termites. These insects farm the fungus Termitomyces, their primary food source. As this fungus must be kept at exactly 36°C while outside temperatures range from 2°C to 40°C, the termites have developed an ingenious temperature-regulating system which uses carefully adjusted air tunnels. These create cooling convection currents, sucking air up through the bottom of the mound and releasing it at the top. The termites constantly block up old tunnels and dig new ones to counter fluctuations in the outside temperature. Largely constructed out of concrete, the Eastgate Centre replicates
this cooling process, sucking air up through the building using fans on the first floor, then venting it at the top. Using less than ten percent of the energy of a conventional building of the same size, the Eastgate Centre is a cost-effective solution to air conditioning. This solution also illustrates an important principle of biomimetics: study the desert, not the oasis. Organisms in an environment where resources are scarce will have adapted to be far more efficient than those in environments where resources are abundant.
The Eastgate Centre, Harare. The line of air-release chimneys can be seen on the roof
Materials
Nature manipulates the simplest of substances to the greatest effect. For instance, the marine abalone makes its shell from calcium carbonate, the same material as soft chalk; but thanks to its nano-brick structure, it is five times as strong as steel. This structural strength is being replicated with stronger materials in order to create super-strong abalone-inspired armour. The lotus leaf also has a remarkable ability to repel water, owing to a nanoscopic, superhydrophobic pattern of bumps on the leaf surface. This characteristic, known as the lotus effect, is displayed in many other plants, and in the wings of some insects. Water is forced to stay as droplets on the surface and roll off, giving the leaf the secondary benefit of being cleaned as the droplets absorb any dirt particles in their path. The lotus effect has been incorporated into self-cleaning windows and surfaces, and will soon be used to make anti-frost glass. Velcro, perhaps the best-known example of biomimicry, is based on the simple hooked hairs of the cocklebur. Designed in the 1940s, the eponymous hook-and-loop fastener came to be after Swiss engineer George de Mestral noticed that hooked seeds stuck to the tangled fur of his dog. Realising the significance of this effect, he designed two plastic pads, one of hooks and one of loops, which require a large amount of force to rip apart when engaged. One of the simplest fastening methods, the cocklebur's seed-distribution mechanism was thus exploited to great success. The iridescent sheen of the shells of diatoms (single-celled
microorganisms) was discovered to be caused by tiny holes in their shells, which create light-interference patterns and give them vivid colours, much like the reflective colours you can see on the data side of a CD. These structural, rather than pigment-based, colours are set to be incorporated into anti-counterfeiting holograms and cosmetics. Following the discovery of how geckos are able to stick to walls (via the quantum-mechanical phenomenon known as van der Waals forces), "gecko tape" is being developed: a biodegradable material able to adhere powerfully to surfaces while a perpendicular force is applied. The material mimics the gecko's toe hairs with polypropylene microfibres 600 nanometres across. One square centimetre of this material, containing 42 million of these synthetic hairs, can hold 200 grams of mass when stuck to a smooth vertical surface. The main benefits of this material are that it leaves no residue and needs minimal force to "unstick" from a surface. The tape has a wide variety of potential uses, from surgical tape to wall-climbing robots. The ridges on the mosquito proboscis have been implemented in hypodermic needles, minimising nerve stimulation. Contrary to popular belief, mosquito bites do not usually hurt; it is the mosquito's anticoagulant saliva which causes pain and inflammation. The needle is made of a titanium alloy and has a diameter of 60 microns, fifteen times smaller than conventional needles. This needle can extract blood or inject chemicals into the body, making it perfect for diabetic blood sampling and insulin injection from a wristwatch-style device.
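A rough back-of-the-envelope estimate, using only the gecko-tape figures above and taking g ≈ 9.8 m/s², gives a sense of how light the load on each synthetic hair is:

    force on 1 cm² ≈ 0.2 kg × 9.8 m/s² ≈ 2 N
    force per hair ≈ 2 N ÷ 42,000,000 ≈ 5×10⁻⁸ N (about 50 nanonewtons)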
Technology
The sensory mechanism of the Melanophila beetle, which lays its eggs in freshly burned wood and can sense the infrared radiation of a forest fire from about 80 kilometres away, is being explored by the US Air Force for the production of highly sensitive IR detectors. The lobster eye can vastly amplify weak light (it is about 1,000 times more sensitive than a human eye) thanks to its novel design. Each
eye contains thousands of radially arranged square tubes that capture and focus light from a large field of view using the technique of reflecting superposition. Researchers at the University of Leicester are developing an X-ray telescope using the same effect, known as the "Lobster All-Sky X-ray Monitor", which is intended for use on the International Space Station. X-ray telescopes with large fields of view are invaluable to astronomers, as sudden X-ray bursts from violent astrophysical events, such as gamma-ray bursts, superflares or black hole activity, are easy to miss. If sent around a 90-minute orbit of the Earth, the telescope would be able to build up a complete image of the sky. Using the same effect, a prototype imaging device known as LEXID has been created which can "see through walls" to identify people or weapons; US border control intends to use this device to check vehicles coming into the country. Certain species of shrimp can snap their powerful claws with enough force to create a cavitation bubble, an area in a liquid where the pressure is low enough for the liquid to boil. The bubble collapses, creating a shockwave, and temperatures within it rise to almost that of the surface of the Sun. This effect is used by the shrimp to burn, stun or even kill its prey. Such cavitation bubbles can be produced artificially with acoustic fields, and it has been suggested that the temperatures inside could in theory be raised far enough to induce the elusive nuclear fusion inside the bubble.
The future of biomimetics
Still in its infancy as an established field of research, biomimetics has already yielded great rewards in many different areas, and undoubtedly a myriad of new discoveries and inventions are waiting to be made. For example, the simple apple is 95 percent liquid, yet still manages to contain it when cut; this mechanism could yield leak-proof, biodegradable packaging in years to come. Rather than the traditional, inefficient industrial manufacturing method, which involves cutting the product out of a block of raw material,
future materials will be grown on an efficient nanoscopic level, drawing inspiration from biological growth. Researchers have created computer chip "seeds", self-assembling components which use tiny electrical currents to grow carbon nanotubes in specific formations; these can be manufactured using less energy than conventional chips. A team of MIT researchers is also pioneering a method of growing rechargeable batteries using viruses. As biotechnology and nanotechnology capabilities increase, new production methods inspired by nature will be introduced, using the principles of natural growth.
Because of its enormous untapped potential, the field of biomimetics is becoming increasingly recognised by the scientific community. Efficiency is growing in importance as energy production becomes more expensive, and there is nowhere better to look than the world’s most sophisticated trial and error system – nature. Those who are inspired by a model other than Nature, a mistress above all masters, are labouring in vain. - Leonardo da Vinci
Review
The History of Science
Science is defined as "any systematic knowledge-base or prescriptive practice that is capable of resulting in a correct prediction, or reliably-predictable type of outcome". This is a very broad definition, and hence the task of conveying the history of the human understanding of the natural world in a school magazine is no small one. But Scope isn't just any magazine. It is a gateway into the unknown: one can gain a glimpse into what makes a HABS scientist wonder, ponder and question the very fabric of the universe. Whether it is biological, chemical or physical, science has the ability to interest people beyond reason. "Equipped with his five senses, man explores the universe around him and calls the adventure Science." – Edwin Hubble
We start our journey in Mesopotamia (ancient Iraq). The Mesopotamians had been recording observations from at least 3500 BC, although these recordings were made for other purposes, not for scientific discovery. The earliest instance of recorded scientific knowledge dates to around 1900 BC: a clay tablet known as Plimpton 322 depicts several Pythagorean triples, more than a millennium before Pythagoras was born. The Mesopotamian era gave birth to astronomy, and as a result the motions of the stars, moons and planets are all found on clay tablets. These observations provided the basis for the solar year, lunar month and seven-day week; hence our modern calendar originated in Babylonia (a region of Mesopotamia). Using this data they developed mathematical methods to calculate the changing length of daylight over the course of the year and to predict the appearances of the planets and eclipses of the Sun and Moon. According to the historian A. Aaboe, "All subsequent varieties of scientific astronomy in India, in Islam, and in the West - if not indeed all subsequent endeavors in the exact sciences - depend upon Babylonian astronomy in decisive and fundamental ways." Ancient Egypt benefitted from these discoveries and built upon the Mesopotamians' fundamental knowledge. Hence astronomy thrived and had a lasting effect on the Egyptian empire. The pyramids were aligned towards the pole star, and the
temple of Amun-Re at Karnak was aligned on the rising of the midwinter sun. The Edwin Smith Papyrus (found in Egypt) was the first document to describe medical data in detail (in this case, the brain); many scientists see this as the birth of neuroscience. Archaeologists have also found scrolls setting out examination, diagnosis, treatment and prognosis as components of the treatment of disease. These components are integral to empiricism and hence to the foundations of science.
Moving east to ancient India, the home of the Indus Valley Civilisation (4th–3rd millennium BC), whose people designed the first recorded ruler, the Mohenjo-daro ruler, whose length (approximately 3.4 centimetres) was divided into ten equal parts. They used this ruler to make consistent bricks and hence stable buildings. Aryabhata, a mathematician and astronomer, introduced several trigonometric functions which we use today (including sine, cosine and inverse sine), as well as trigonometric tables and algebra. In 628 AD the astronomer Brahmagupta suggested that there was a force that attracted material objects towards the earth, the first recorded mention of the concept of gravity. He also explained the use of zero along with the Hindu-Arabic numeral system now used universally throughout the world. Findings from Neolithic graveyards in Pakistan show evidence of makeshift dentistry among an early farming culture, and Ayurveda (a system of traditional medicine that originated in ancient India before 2500 BC) is still used throughout the world. Ancient India was also the location of the birth of metallurgy (the craft of metal working using scientific techniques): steels of exceptional quality are credited with being first produced in India, and were widely exported. Ancient China was home to major technological advances. The compass, gunpowder, papermaking and printing all originated there and are known as "The Four Great Inventions". Karl Marx is quoted as saying, "Gunpowder, the compass, and the printing press were the three great inventions which ushered in bourgeois society. Gunpowder blew up the knightly class, the compass discovered the world market and founded the colonies, and the printing press was the instrument of Protestantism and the regeneration of science in general; the most powerful lever for creating the intellectual prerequisites".
Ancient China was also home to Shen Kuo (1031–1095 AD), a polymath. He dabbled in most areas of science, was the first to describe the magnetic needle compass used for navigation, and discovered the concept of true north. He also came to a view of gradual climate change over time after observing petrified bamboo found underground. These discoveries were significant for the Scientific Revolution, as the Jesuit China missions of the 16th and 17th centuries took notice of the scientific achievements of this ancient culture and made them known in Europe; through their correspondence, European scientists first learned about Chinese science and culture. However, cultural factors prevented these Chinese achievements from developing into what we might call "modern science". Historians believe that, owing to the Taoist beliefs widely held in ancient China at the time, delving into the laws of the universe was regarded as naïve, and that because of these major cultural differences a large scientific "boom" did not occur. We move on to Classical Antiquity, where the first "scientists" are thought to have originated. "Classical Antiquity" is a term for
a period of cultural history centered on the Mediterranean Sea, comprising the neighbouring civilizations of Ancient Greece and Ancient Rome, collectively known as the Greco-Roman world. Thales of Miletus is widely regarded as the father of science. He posed questions such as "How did the ordered cosmos in which we live come to be?", and worked on theories that explained phenomena such as lightning and earthquakes without recourse to supernatural beings. Thales was the teacher of the well-known philosopher Pythagoras, who founded the Pythagorean school, which taught mathematics. Although widely credited with the Pythagorean theorem, as we have seen, the Mesopotamians had discovered this result long before. Not much is known about Pythagoras's life, and many of the accomplishments credited to him may actually have been those of his colleagues and successors at his school. However, it is said that he was the first man to call himself a philosopher, or lover of wisdom, and Pythagorean ideas exercised a marked influence on Plato, and through Plato on all of Western philosophy. Atomism, the theory that all matter is composed of indivisible, imperishable units called atoms, came about via Leucippus. As with most philosophers of this period, not much is known about Leucippus; however, like Pythagoras, he founded a school, and one of his pupils, Democritus, continued his work on atomism.
Atomists attempted to explain the world without the need for purpose, prime mover or final cause; hence atomism moved away from 'faith' towards more concrete evidence. Atomists reasoned that the solidity of a material corresponded to the shape of the atoms involved: thus iron atoms are solid and strong, with hooks that lock them into a solid; water atoms are smooth and slippery; salt atoms, because of their taste, are sharp and pointed; and air atoms are light and whirling, pervading all other materials. Then came Plato, founder of the Academy in Athens in 387 BC. Along with his mentor, Socrates, and his student, Aristotle, Plato helped to lay the foundations of Western philosophy and science. Plato's influence has been especially strong in mathematics and the sciences; he helped to distinguish between pure and applied mathematics. Aristotle introduced empiricism and the notion that universal truths can be arrived at via observation and induction, thereby laying the foundations of the scientific method. Aristotle also produced many biological writings that were empirical in nature, focusing on biological causation and the diversity of life. He made countless observations of nature, especially of the habits and attributes of the plants and animals in
the world around him, classified more than 540 animal species, and dissected at least 50. Archimedes is generally considered to be one of the greatest mathematicians of all time. He calculated the area under the arc of a parabola and gave a remarkably accurate approximation of pi. Among his advances in physics are the foundations of hydrostatics and statics and an explanation of the principle of the lever. He is credited with designing innovative machines, including siege engines, and modern experiments have tested claims that he designed machines capable of lifting attacking ships out of the water and setting ships on fire using an array of mirrors. Hippocrates (460–370 BC) and his followers were the first to describe various diseases and medical conditions, and founded the Hippocratic oath for physicians, still relevant today. Herophilos (335–280 BC) was the first to properly conduct dissections of the human body and to base conclusions on these dissections alone. The mathematician Euclid introduced the concepts of definition, axiom, theorem and proof. This period was
ripe with discoveries, and it is impossible to mention every discovery or every scientist and natural philosopher here. Moving east to the Middle East, Greek philosophy found support under the newly created Arab Empire. With the spread of Islam in the 7th and 8th centuries began a period known as the Islamic Golden Age, which lasted until the 16th century. Scientists of this era placed greater emphasis on experiment than the Greeks had. This new methodology is regarded as the most important development of the scientific method; the use of experiments to distinguish between competing scientific theories, set within a generally empirical orientation, began among Muslim scientists. However, Bertrand Russell, amongst others, believed that Islamic science, while admirable in many technical ways, lacked the intellectual energy required for innovation and was chiefly important as a preserver of ancient knowledge and a transmitter of it to medieval Europe. This might have been due to the Muslim approach to science. Islamic historians have identified an approach to science flowing from monotheism (as outlined in the Quran): a relative lack of interest in describing individual material objects, their properties and characteristics, and instead a concern with the will of the Creator (Allah). In Europe during the birth of modern science, by contrast, most scientists (although religious, such as Galileo) used empirical knowledge to inform religious belief, rather than religious belief to constrain empirical knowledge. As the great character Sherlock Holmes remarked, "It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts". Despite these cultural hindrances, the Islamic period accounted for many important discoveries that later helped to influence modern science. Muslim chemists and alchemists played an important role in the foundation of modern chemistry, and Jābir ibn Hayyān is considered by many to be the father of chemistry. He is credited with discovering many common chemical processes
such as crystallisation and distillation, and with using substances such as citric acid, ethanoic acid, tartaric acid, sulfur and mercury, which have become part of the foundation of today's chemistry. Ibn Sina is regarded as one of the most influential scientists and philosophers in Islam. He pioneered the science of experimental medicine and was the first physician to conduct clinical trials; he introduced clinical pharmacology and discovered the contagious nature of infectious diseases. This would have a lasting effect and help to increase average life expectancy. All these theories and disciplines from the Greeks and Arabs made their way through to Europe via the Crusades and the Reconquista. The start of a new intellectual era in Europe commenced with the creation of several medical universities in the 12th century. Europeans began to venture further and further east (most notably Marco Polo), and this led to the increased influence of Indian and Chinese science on the European tradition; Europe was thus able to build upon the foundations that these ancient civilizations had observed and deduced. By the beginning of the 13th century there were reasonably accurate Latin translations of the main works of almost all the intellectually crucial ancient authors. In 1348 the Black Death, amongst other disasters, brought a sudden end to this period of massive philosophic and scientific development. Yet the rediscovery of ancient texts gathered pace after the fall of Constantinople in
1453, when many scholars found refuge in the West. Meanwhile, the introduction of mass printing was to have a great effect on European society, and new ideas helped to shape the development of European science at this point. A mixture of all these developments resulted in the Scientific Revolution: the birth of modern science. The Scientific Revolution is defined as the period when "new ideas in physics, astronomy, biology, human anatomy, chemistry, and other sciences led to a rejection of doctrines that had prevailed starting in Ancient Greece and continuing through the Middle Ages, and laid the foundation of modern science". During this period the various theories that had been widely regarded as true in the ancient world were being disproved, and new theories were being observed and recorded; in scientific circles, religion, superstition and fear were gradually replaced by reason and knowledge. According to a majority of scholars, the Scientific Revolution began with the publication of two works that changed the course of science in 1543, and continued through the late 18th century. These publications were Nicholas Copernicus's "On the Revolutions of the Heavenly Spheres" (outlining that the Sun, not the Earth, was the centre of our universe: heliocentric cosmology) and Andreas Vesalius's "On the Fabric of the Human Body" (with its incredibly detailed illustrations and discoveries of the human body via dissection).
A cascade of discoveries took place in the resulting years, decades and centuries, and many famous scientists worked during this glorious era. Blaise Pascal (1623–1662) invented a mechanical calculator in 1642; he also made important contributions to the study of fluids and clarified the concepts of pressure and vacuum. Along with the inventor Robert Hooke (1635–1703), Sir Christopher Wren (1632–1723) and Sir Isaac Newton (1642–1727), the English scientist and astronomer Edmond Halley (1656–1742) sought to develop a mechanical explanation for planetary motion; Halley's star catalogue of 1678 was the first to contain telescopically determined locations of southern stars. The anatomist William Harvey (1578–1657) mapped the circulatory system and discovered precisely how the heart works via elaborate, ingenious experiments. Many of the important figures of the Scientific Revolution, however, shared the Renaissance respect for ancient learning and cited ancient pedigrees for their innovations. According to Newton and other historians of science, his 'first law of motion' was the same as Aristotle's 'counterfactual principle of interminable locomotion in a void', a view also endorsed by the ancient Greek atomists and others. Hence the ancient civilizations preceding this period had played a key role in these major discoveries.
During this period, institutions such as the Royal Society were founded to help peer-review theories and provide an environment for intellectual stimulation. The Royal Society was the idea of various leading scientists in 1660, including Boyle, Wren and Hooke, amongst others. Their motto, "Nullius in verba", translates as "Take nobody's word for it", and was used to signify the members' determination to establish facts via experiment. Over the next century or so, the founding theories discovered by the likes of Galileo, Boyle, Hooke and Halley were put into practice in new technology. The impact of this process was not limited to science and technology, but affected philosophy (Immanuel Kant and David Hume), religion (notably with the public appearance of atheism) and politics.
During the 19th century, romanticism came about and affected science along with society as a whole. European scientists, who had been preoccupied with the experimental side of science, started to support the belief that observing nature meant understanding the world in more detail, and that the answers nature could give us should not be obtained by force (experimentation) but merely by observing it. Hence, they decided to take a step back and observe nature. Despite sounding rather odd compared to the rigorous experimentation of the previous centuries, this resulted in major breakthroughs in biology (Darwin's theory of evolution), physics (electromagnetism), mathematics (non-Euclidean geometry and group theory) and chemistry (organic chemistry). Romanticism declined with the rise of positivism: no longer were people seeking unification between man and nature based on ideals of harmony, but a more precise approach that eventually gave rise to the study of science that is prevalent today. This really started to take hold at the turn of the 19th century, and during the 19th century the practice of science became professionalised in ways that continued through the 20th century. What resulted in these centuries is unequivocal. Theories and developments such as relativity, DNA, radioactivity, vaccination, pasteurisation, quantum physics, electricity, the Big Bang theory and cheap medicine came about in this era (and, in case you have forgotten, we also made it to the Moon and back). Einstein, Rutherford, Crick, Watson, Darwin, Wallace, Bohr, Wöhler, Maxwell, Pasteur, Franklin, Freud, Pauling, Boltzmann, Hubble, Curie and many more shaped our society beyond our ancestors' wildest imagination. Over the past few decades these theories have been clarified and improved upon by the likes of Dawkins, Gould, Venter, Hansen, Chomsky, Hawking et al., and the public has been able to grasp them with the help of Sagan, Bronowski, Attenborough and other presenters who have made science more accessible. Science has an elaborate past, riddled with controversy, religion and politics. With better technology, the Internet and brighter minds, who knows what will be discovered in the next century? But science is not only responsible for concrete discoveries and theories; it is also accountable for a lust for knowledge and a factual mentality. Lest we forget, "Science is a way of thinking much more than it is a body of knowledge", as Carl Sagan said.
CERN, 30/01/11
Visiting the site of the largest and most expensive particle accelerator in the world probably sits comfortably on a small list of things that we would wake up at 4:30am on a Saturday morning for. The lack of complaints about the very early meet-up time was an apt reflection of our collective excitement for the trip, which sold out in two days without the need for condescending encouragement from the Physics Department. Perhaps in fear of incurring the wrath of Mr. Kerr, everyone arrived punctually, and we soon boarded our flight after greeting each other with semi-serious remarks like "Ready to see protons collide?" and "How's your weekend been so far?". To prepare for the high-energy fun ahead, many of us took to napping during the journey, and in what seemed like the blink of an eye we found ourselves in Geneva, Switzerland, with a hearty and fascinating day of physics in store. Inside the CERN Tourist Centre there was a complicated, futuristic-looking design on the floor, which glowed periodically in seemingly random sections. The Dance Dance Revolution fans among us thought it was
a good idea to step all over the curious-looking floor to turn on the pretty lights, before being told it was actually a cosmic ray detector. With two hours to spare before lunch, we were able to wander freely around the Microcosm exhibition within the centre, and the CERN Globe of Science and Innovation across the road. Descending first into Microcosm, we experienced the history of particle physics: from a reconstruction of Rutherford's experiment of firing alpha particles at gold foil 100 years ago, to the unification of the weak and electromagnetic forces by Glashow, Salam and Weinberg, to the Large Hadron Collider of the present day. Everyone found something that intrigued them, whether it was watching red dots move around a miniature LHC or passing small electrical currents through their body. An hour later, we crossed over to the other side of the road to the unusually shaped CERN Globe building. Soon after entering, we were showered with the dancing lights of the show screened every 15 minutes or so. It was difficult not to be astonished by the stunning effects and images, which made full use of both the peripheral screens and the massive circular centre screen. As the show faded to black from its impressive finale, some of us had a go on the innovative information devices strategically scattered around the floor, while the others retreated into their futuristic-looking chairs, with the smooth sound of Italian commentary flowing into their ears. We showed some innovation of our own, turning a touch-screen map of CERN from above into a thrilling game of electronic air-hockey. Lunch followed a seemingly endless winding path past dozens of offices into the canteen. The unquestionable highlight was the limitless fun had with the magnetic cutlery. It seems those at CERN can't bear to get away from physics at ANY time of the day.
A presentation followed back inside Microcosm. We were first given a brief introduction into the ambitions of CERN at present day, and then shown two informative but non-technical video clips. They were evidently so hilarious, that Mrs. Letts burst out in an ecstasy of laughter, leading the whole room to turn their heads in bewilderment, bar a rather red and embarrassed Miss Letts in the front row. The rest of the planned activities would be spent at two further destinations, requiring an exciting coach journey past the Swiss/ French border. Our particular group was led by Professor Vincent Smith of Bristol University, who spoke with an enjoyable mix of enthusiasm and humour, and had a remarkable patience for Mr. Fielding’s persistent and testing questions. First stop was the ATLAS control centre, which regulated one of the three main stations of the LHC. Dr. Smith began with an explanation of the
gigantic painting on one of the walls outside the building, which showed the inner workings of the LHC from two perpendicular perspectives. Perhaps sensing our eagerness, which manifested itself as violent shivering, Dr. Smith soon led us inside the warm building, where we watched a short 3-D clip about the making of ATLAS. A discussion about the clip followed, including a collective mocking of Dan Brown’s literary inadequacies. Downstairs was the site of the main office. Normally, this room would be filled with the hustle-and-bustle of post-graduate groups from around the world, working hard and efficiently on the flood of data pouring through from the LHC. On this particular day however, the LHC was offline, and the sole researcher in the office could afford to carry out his tasks in a relaxed fashion. Later analysis on his computer screen by Matthew Earnshaw revealed that his important task was in fact Facebook. We were hurried back onto the coach by Mr. McKane, who was clearly delighted to be spending his weekend with his beloved James Zhao, and then proceeded on swiftly to our final destination of the day. While it was now considered dangerous to walk alongside the highest energy particle accelerator in the world, we were able to see the spare parts for the machine instead. Professor Smith rattled through some difficult physics about the LHC cooling process (necessary for superconductivity), which though rather beyond us at that point, was absolutely fascinating. We asked Mr. Fielder and Mr. McKane for a more accessible explanation later on, but they only responded with a nervous laugh and “You should have been listening!”.
On the way, Professor Smith told us about the co-operation between CERN and Fermilab (home of the Tevatron, the second largest particle accelerator behind the LHC). This seemed surprising, as both institutions have similar goals, namely finding the Higgs at present, and so it is perhaps expected that there would be some kind of rivalry between them in this race for scientific glory. Actually, researchers from both sides go back and forth between the two institutions to share ideas and thoughts, with competitiveness and sabotage nowhere in sight. They understand that to progress understanding and knowledge effectively, co-operation is key. One can't help but admire this unique and wonderful aspect of the scientific community, something that should be taken note of by other less collaborative industries. Having finished our tour of CERN on time (a main benefit of a science trip), the physics staff decided to take us into the heart of Geneva to walk about the beautiful fountains there. But what was hoped to be a nice, relaxing treat to end a productive and informative day turned out to be a weary two-hour-long journey inside the
coach. The fountains weren't working, so we did a U-turn and headed straight back to the airport. During the journey back, the trip was unanimously praised by all of us. It was still difficult to believe that we had done so much within a day. David Westcough, a dead-set engineer, was left frustrated, as straight physics had now crept onto his radar of consideration. Indeed, our passion for physics had been ignited by this day, which gave us a taste of the brilliance that our subject contains, and the inspiration for thought, dreams and creativity that it leaves in front of us.
The Rational Optimist: How Prosperity Evolves by Matt Ridley: A review by Mr. Hall
Perhaps it is testimony to the "celebrity-culture" that prevails in much of the popular media that many in the developed world seem increasingly intrigued by impending and actual disaster. Some days even the serious press devotes much of its content to a careful and detailed explanation of how the health and wealth of the healthy and wealthy are soon to be lost forever at the hands of some dark force or other, poised to overwhelm us, such as global warming or a new form of plague. Of course doomsayers are nothing new even in the heady and buoyant world of economics. Malthus, for example, in 1798 published (amongst a great many other, perhaps more important, ideas) his theory that geometric population growth would outpace arithmetic growth of agricultural output, with fairly obviously negative consequences. It was therefore a great delight to read Matt Ridley's "The Rational Optimist", the basic proposition of which is that "things" in general can rationally be expected to "get better" in the world as they have now been doing for some time. Ridley places our ability to trade at centre-stage in his argument. The robustness of the theory of comparative advantage as a justification for the importance of trade will be well known to readers with an interest in economics. Yet Ridley goes much further: for him, the increasing ability of ideas to "have sex" with each other and to give birth to accelerating innovation further supports the justification of rational optimism. Whilst our ancestors toiled alone to create hand axes, the basic design of which remained unchanged for literally millions of years, today more than ever we have the ability to specialise, to trade, to share ideas and to synergise, and in doing so to create healthier, wealthier and potentially happier lives for all. Ridley encourages us to consider the computer mouse as an example: the mouse is not the endeavour of one individual, but of thousands of individuals specialising in everything from oil extraction to coffee production: for coffee is likely to have played a part in driving the creative process somewhere along the line. It is also the result of ideas and technologies synergising.
Critics have accused Ridley of under-playing the role of the state and even of causing harm to the “Holy Grail” of free-market economics. Yet such criticisms seem churlish. Ridley’s book is not a tome of economic theory to be carefully critiqued, but more of a tonic in a time that is all too often drawn to dark thoughts. Ridley’s energetic and compelling style whisks the reader to thoughts of a brighter future that we might rationally hope for and work towards. He also reminds us of some basic truths that date back to Adam Smith’s time, for example that humans are unique as an animal in their propensity to trade.
After reading Ridley's "Rational Optimist", one simple truth was lodged in the mind of this simple reader: the pessimists have always been with us, and their prophecies of doom have generally proven to be wrong. Perhaps a start point of optimism, whether one believes that start point to be fully rational or not, is the most efficient and effective place to be when seeking to make a positive difference to the world.
Could Science go too far?
80% of British people surveyed in 2008 were ‘amazed by the achievements of science’, whilst only 46% felt that ‘the benefits of science are greater than any harmful effect’. It is certainly true that the developments of science in the past have been magnificent. Our cars, our televisions, our mobile phones are all based around various scientific discoveries. The fact that we can remove a damaged organ and replace it with another or diagnose or treat any illness in the first place is based on science. However, could this, one day, go too far? In the future, as science tries to enhance our already well-developed society, could it start to cause more harm than good? Has it already started to do so?
Man vs Machine
Recent developments in technology and high-speed connections mean that we can now
communicate with someone on the other side of the world, within seconds. This cannot be deemed anything else but amazing, but could cyber-messaging one day replace actually talking to friends? Of course, new technology is something we can be proud of, especially in this case, where communication would otherwise be very difficult. However, these internet conversations are all written down, and through this we can neither express nor see emotion. The value of friendship has also fallen. The technology of conversation used to be such that one could speak to another on the phone, having learnt their mobile phone number, or 'chat' on an instant-messaging service having learnt a code or email address. The situation now is that just by knowing even a first name or the name of a friend of this 'friend', one can hound this person out on Facebook and make a 'friend' out of someone they've never met. Ghrelin is a hormone responsible for the feeling of hunger; its levels are said to be 15% higher in those who have had 5 hours of sleep compared to those who have had 8, and it is said that the increasing culture of the 'electronic babysitter' of the TV and video games is causing this lack of sleep in children and hence childhood obesity! Technology is taking over our lives in a way that is becoming damaging to our health. Yes, it all sounds a bit extreme and, to an extent, it is, but this problem of machine taking over man could be dangerous in the future, either if technology develops even further such that, for example, all of our conversations are typed, or if we do not keep control of ourselves and allow ourselves to fall into the hole that technology has dug for us.
Violence and Crime
Such social networking sites can, to certain people, present a much deeper issue. Cyberbullying has reached a level where a person can now be stalked to the extent that their address can be found and they can be hunted down, if they're not careful. A lesser, but still serious, problem is that group bullying on such sites is difficult to stop due to a lack of regulation, and is made more public, as it is presented on a person's 'wall'. Most countries do not have 'anti-stalking' legislation either, which can leave a victim feeling isolated, with no means of support. This is only an issue for those who are not careful and allow it to be one. The question is, can we trust people to be careful in this way when, clearly, some have shown that they cannot? The second form of violence we can consider is this: soon after plutonium was discovered in 1941, it was used to make a nuclear bomb. Without turning this into a nuclear debate, one can worry about how easy new technology has made it to kill, and to kill
from such a distance that you cannot see the damage that you have caused. Others would say that new technology has also allowed messages of suffering to be spread across the world, in campaigns for aid efforts, and this has worked to the contrary, but couldn’t some of these disasters have been stopped in the first place, had it not been for such developments?
The Environment
Right now, there is huge controversy over the development of countries such as China, as people are questioning the right of such nations to take such leaps forward, because their advancements cause damage to the environment. Advancement through industry means that we now rely heavily on the burning of fossil fuels for many of our modern-day needs. This, as well as the development of plastics and
other chemicals, as well as our use of nuclear power, which can be highly dangerous in cases such as Chernobyl, could lead to our downfall. Acid rain and global warming are only some of the damages said to be caused by the release of greenhouse gases by such power stations, and we are allowing this to happen because of our modern demands. Quite to the contrary, many would say that new technology has made us aware of problems that were always going to occur. Our new measuring equipment means we can carefully track the levels of greenhouse gases, damage to endangered species and acid rain levels, and this has allowed us to notice a massive problem which we must deal with. This extends, also, to our agricultural needs. Our demands for high volumes and the best quality of crops have led to the use of Genetic Modification, and who suffers as a result? The
poor farmers in low-income countries who have now become highly dependent on demand from multinationals. As well as this, GM crops can all too easily be wiped out by a slight change in environment or a disease, and this is another risk these farmers must deal with. There are very clearly huge benefits to developments in science: none of us would want to stop research into a cure for cancer or a faster means of transport, which would make our lives much easier. However, science could lead to our downfall if we allow it to. We are already beginning to fall into the hole of taking in and embracing any development in science without considering its harmful effects, and we must be careful not to be damaged by them and not to become over-reliant on such achievements.
Biological Sciences
Cocaine Addiction
Cocaine is an alkaloid found in leaves of the South American shrub Erythroxylon coca. Its properties as a potent psychostimulant have led people in the past to use it in a number of patent medicines and even in soft drinks. But cocaine’s highly addictive nature and addicts’ willingness to pay a high price for the drug have propelled it into the public eye. The crime and violence associated with its transportation and sale, and the celebrity nature of some of its victims has kept cocaine in the news. Cocaine is a powerfully addictive drug of abuse. Individuals who have tried cocaine have described the experience as a powerful high that gave them a feeling of supremacy. However, once someone starts taking cocaine, the degree of addiction to follow is difficult to predict. So, why is cocaine so addictive? Since its increased use in the drug-scene since the 1980s, scientists have been researching into the neurobiological mechanisms underlying the drug’s initial effects, and its later developments and longer-lasting effects, which result in cravings and relapse. One of the most intriguing mechanisms is the over-production of the genetic transcription factor, ΔFosB.
Cocaine and the Limbic System
Dopamine originates in the dopaminergic cells in the brain, which create the dopamine molecules and release them into the surroundings. Here,
they bind to receptor proteins on receiving cells, stimulating the receptors to alter electrical impulses and so causing a change in the cells' function. The more dopamine molecules in contact with the receptors, the more alterations occur in the electrical properties of the receiving cells. To function at the required intensity, the dopaminergic cells continually alter the number of dopamine molecules by either producing more, or retrieving some of the previously released molecules and absorbing them. Cocaine restricts the dopamine transporter, a protein that is used by the dopaminergic cells to retrieve dopamine molecules. Hence, in the presence of cocaine, dopamine molecules that would otherwise be retrieved remain in the surroundings and build up. This causes the receiving cells to over-activate. Dopamine build-ups as a consequence of cocaine use occur wherever dopamine transporters are present. The drug's ability to produce feelings of pleasure, loss of control and compulsive responses to drug-related cues is due to its interaction with regions in the front of the brain which make up the limbic system. Dopamine-responsive cells are very concentrated in the limbic system, a set of regions which controls emotional responses and associates them with memories. The nucleus accumbens (NAc) is a particular region of this system which appears to be most relevant during a cocaine high. Following stimulation by dopamine, cells in the NAc produce the feeling of pleasure and satisfaction. The natural function of this response is to keep us focussed on activities that promote the basic biological goals of survival and reproduction. The receiving cells' response produces a feeling of pleasure which we instinctively want to repeat.
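A minimal kinetic sketch, with entirely made-up rate constants, of the mechanism just described: dopamine is released into the synapse at a roughly constant rate and cleared by the dopamine transporter, so if cocaine blocks a fraction of the transporters, clearance slows and the synaptic dopamine level settles at a higher steady state. The function and numbers below are illustrative assumptions, not a published model.

```python
# Toy model: dopamine level rises with release and falls with transporter reuptake.
# Blocking a fraction of the transporters lowers the clearance rate constant,
# so the steady-state dopamine level in the synapse ends up higher.

def simulate_dopamine(block_fraction, release=1.0, clearance=0.5,
                      dt=0.1, steps=200):
    """Return the synaptic dopamine level over time (arbitrary units)."""
    k = clearance * (1.0 - block_fraction)   # transporter activity remaining
    level = 0.0
    trace = []
    for _ in range(steps):
        level += (release - k * level) * dt  # release in, reuptake out
        trace.append(level)
    return trace

normal = simulate_dopamine(block_fraction=0.0)
blocked = simulate_dopamine(block_fraction=0.7)  # assume 70% of transporters blocked
print(f"steady-state dopamine, normal:   {normal[-1]:.2f}")
print(f"steady-state dopamine, blocked:  {blocked[-1]:.2f}")
```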
Cocaine artificially causes a build-up of dopamine in the NAc which results in a significant feeling of euphoria and pleasure, sometimes greater than those produced by natural activities such as sex or eating to satisfy hunger. It has been noticed that some laboratory animals will ignore food and keep taking cocaine until they starve, if presented with the choice. The limbic system includes vital memory centres, the hippocampus and the amygdala. These centres allow us to remember what to associate with the dopamine release in the NAc. Following a cocaine high, these memory centres remember the cause of the intense pleasure, and the associated people and places. After those memories are produced, any related scenes, images or paraphernalia will create a desire to repeat the action to produce the high. Scientists believe that repeated exposure to cocaine, with its associated dopamine kicks, alters these cells until they begin converting conscious memory and desire into a near-compulsion to respond to cues by finding and taking the drug. The frontal cortex is another region of the limbic system where information is integrated and decisions are made regarding taking action. If pleasure is forgone to avoid the negative consequences, the frontal cortex stops the other two regions of the limbic system from functioning. This can allow a non-addicted user to recognise these bad consequences and stop re-use before addiction occurs. Once someone has become addicted, the frontal cortex becomes impaired and less likely to prevail over the urges.
Intermediate-Term Effects of Cocaine – Changes in Gene Expression
Genes determine the shape and function of every cell by coding for different proteins. Every individual is born with a unique combination of genes. Every body cell contains all available genes, but not all are expressed; this allows cells to become specialised, depending on which genes are expressed. Gene activation is controlled by genetic transcription factors, which are proteins that bind to the DNA, stimulating protein synthesis. The fundamental pattern of gene activation cannot be changed, but every cell is able to change the level of expression of particular proteins in response to the body's demands. Cocaine alters the expression of several genes within the NAc, including some that influence the important neurotransmitter chemical glutamate and the brain's natural opioid compounds produced by the body. Similar to dopamine, ΔFosB is a chemical which controls the pace of cellular activity. However, its action occurs within the cell, as opposed to having an effect on neighbouring cells. ΔFosB occurs naturally in cells within the NAc, but chronic exposure to cocaine causes it to accumulate to very high levels, which is believed to cause addiction. This is for a few reasons:
• Once created, a molecule of ΔFosB lasts for 6 to 8 weeks before breaking apart
chemically. Therefore, each new episode of cocaine abuse exacerbates the build-up of ΔFosB that has accumulated from all previous episodes in the past two months. For a cocaine abuser, the levels of ΔFosB will be extremely elevated all the time (a rough numerical sketch of this accumulation follows the list below).
• Mice with elevated ΔFosB exhibit a set of behaviors that correspond to human addictive behaviors, while mice with normal levels do not. It has been noticed that blocking the buildup of ΔFosB in mice during a regimen of cocaine exposure reduces this behavior. • FosB plays a role in the genetic machinery that determines very basic properties of a cell, including very long-term or permanent ones such as its structure and interface with other cells.
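A rough numerical sketch of the accumulation argument in the first bullet point above: each episode adds a fixed amount of ΔFosB, and each molecule persists for roughly 6 to 8 weeks. Here that persistence is approximated as exponential decay with an assumed 7-week lifetime; the pulse size and daily dosing schedule are made-up assumptions used only to show the shape of the build-up.

```python
import math

def delta_fosb_level(days_of_use, dose_per_day=1.0, lifetime_weeks=7.0):
    """ΔFosB level (arbitrary units) at the end of a run of daily cocaine use."""
    decay_per_day = math.exp(-1.0 / (lifetime_weeks * 7.0))  # fraction surviving each day
    level = 0.0
    for _ in range(days_of_use):
        level = level * decay_per_day + dose_per_day  # yesterday's residue plus today's pulse
    return level

for days in (1, 7, 30, 60, 120):
    print(f"after {days:3d} days of daily use: ΔFosB ≈ {delta_fosb_level(days):.1f}")
```

The output climbs towards a plateau roughly fifty times the single-dose level, which is the sense in which levels stay "extremely elevated all the time" under repeated use.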
Long-Term Effects of Cocaine – Changes in the Structure of Nerve Cells
Over its two-month lifespan, ΔFosB cannot reveal enough information to explain why former cocaine abusers suffer from relapse and cravings even years after stopping use. These are extremely persistent features of cocaine and other drug addictions, which show there must be long-term neurobiological effects. One potentially key change related to cocaine use is a change in the physical structure of nerve cells in the NAc. Frequent cocaine exposure causes these nerve cells to grow shoot-like structures from their dendrites. Dendrites are branched extensions at the beginning of a neuron that help increase the surface area of the cell body and are covered with synapses. These tiny protrusions receive information from other neurons and transmit electrical stimulation to the soma. More branches in the NAc should theoretically result in a greater volume of nerve signals being collected from other regions. This will mean the NAc is heavily influenced by other regions, which could be the cause of addiction-related behavioural changes.
With chronic cocaine intake, brain cells functionally adapt (respond) to strong imbalances in transmitter levels in order to compensate for the extremes. Receptors therefore disappear from, or reappear on, the cell surface, mechanisms called down- and up-regulation. Chronic cocaine use leads to an upregulation, further contributing to depressed mood states. All these effects contribute to the rise in an abuser's tolerance, thus requiring a larger dosage to achieve the same effect. The lack of normal amounts of serotonin and dopamine in the brain is the cause of the dysphoria and depression felt after the initial high. Cocaine abuse also has multiple physical health consequences. It is associated with a lifetime risk of heart attack that is seven times that of non-users.
Summary Cocaine is a highly addictive, central nervous system stimulant which usually makes the user feel euphoric and energetic, but also increases body temperature, blood pressure, and heart rate. Users risk heart attacks, respiratory failure, strokes, seizures, abdominal pain, and nausea. In rare cases, sudden death can occur on the first use of cocaine or unexpectedly afterwards. The costs to the individual, such as the health risks mentioned above, and the costs to society, such as the lost output of the individual, the cost of extra policing and NHS costs, easily outweigh any short-term benefits the user may feel.
Are we alone?
For millennia man has looked up at the stars in wonderment, and asked himself the eternal question: "are we alone in the universe?" Today's scientists still ask that question as the search for extraterrestrial life goes on. For all our modern technology, we have no real idea of whether aliens exist, but as the quest continues, there is more and more excitement amongst the scientific community. From the sub-surface of neighbouring Mars to the ice sheets of Jupiter's moon Europa, from the lakes of Saturn's moon Titan to the Gliese 581 solar system, theories abound over the existence of life away from Earth. Although man's beliefs about the extraterrestrial have become somewhat more scientific since Egyptian times, it is interesting to note that our fascination with the cosmos and the possibility of its containing life has existed for over 4500 years. With the birth of the telescope in the early 17th Century, speculation about alien life increased dramatically, much to the displeasure of the
Catholic Church, which did much to quash the seemingly heretical theories on the unimportance of Earth in the universe. But it was during the 20th century that the search for alien life became truly scientific. The Search for Extra-Terrestrial Intelligence (SETI) began in 1959 as a collective search for aliens and has grown alongside NASA and a number of space organisations as man's hunt continues. By scanning the ether for unnatural or inexplicable radiation, SETI hopes to find evidence of intelligent civilisations.
What do aliens look like?
But the search is more than looking for electromagnetic traces of aliens. We want to know who these aliens are and, of course, what they look like. On Earth we naturally assume, as film directors have for decades, that aliens will look similar to species from our own planet. However, if we look at the enormous diversity we find on Earth, we can begin to get an idea of the vast range of possibilities of the forms life may take. If we look at different forms of communication, we can see just how
varied life can be. Humans interact through speech. But cicadas vibrate their wings to make sound and crickets rub their legs. Deaf humans communicate through gestures, dogs wag their tail, octopi change skin colour, and bees use the smell from a material they carry with them to indicate they are part of the hive, and so are allowed safe entry. Clearly on earth there is diversity between animals; extraterrestrial life could be even more different. Having said this, there are a number of features which have evolved independently more than once on Earth. It is generally accepted that flight, photosynthesis and sight evolved several times on Earth amongst different species, and are so intrinsically useful that species will inevitably tend towards them. Skeletons are also believed to exist elsewhere, because they are essential for large terrestrial organisms to maintain their body shape under the influence of gravity. And moving away from morphology and towards biochemistry, there are further similarities conjectured between the inhabitants of our own planet and those that may live elsewhere. Because almost all planets are made up of ‘stardust’, the relatively abundant chemical elements formed in supernovae, it is very probable that other planets have a similar chemical composition to Earth’s. The usefulness of carbon, hydrogen and oxygen has led to many hypothesising that life forms elsewhere in the universe could utilise these basic materials. Therefore, what we are looking for is one problem facing man’s quest. Another crucial question is where do we look. The moon has long been ruled out as a possibility for life, but there are still places close to home in which we can look for our alien friends. The search for life on planet Mars began in 1965 when Mariner 4 ruled out the possibility of liquid water on the surface. Yet Mars is the most likely planet in the Solar System other than Earth to harbour water, and it is very likely that it did so millions of years ago. It has features very similar to Earth, such as volcanoes, valleys and polar ice caps.
[Image: The passage on the right of the main chamber was a passage to the heavens]
Despite this initial excitement, it is yet to be conclusively proved that the nanobacteria found on ALH 84001 originated from Mars. Many believe the molecules are due to contamination from contact with the Antarctic ice, while there is a school of thought that claims the nanobacteria are too small to contain RNA, so these molecules cannot once have been alive. However, recent developments have, for the most part, put paid to this skepticism, because microbiologists have cultured, in the lab, nanobacteria the size of those on ALH 84001. It has also been noted that these biomorphs do not resemble those found on other, inorganic Martian meteorites. Furthermore, the nanobacteria on ALH 84001 are more embedded than any contamination would be.
[Image: The electron microscope revealed chain structures in meteorite fragment ALH 84001]
The ALH 84001 meteorite is possibly the most exciting, and certainly most controversial, piece of evidence we have for the existence of extraterrestrial life. And yet even if it is one day proved that the structures on ALH 84001 were once alive, it would only tell us that Mars contained life millennia ago. For Mars nowadays is a dead planet. It is geologically and tectonically inactive, with a frozen core that cannot provide the warmth life craves.
It is only by looking further out into our solar system that we might find alien life. Jupiter's sixth moon Europa is, from an astronomical point of view, fascinating. One of the key characteristics of Europa is that it undergoes tidal flexing. During its orbit, the gravitational push and pull of Jupiter squashes and expands Europa, heating it up rather like a squash ball. The warmer parts of Europa's surface then spew layers of ice on top of colder parts, causing the moon to be covered in a distinctive pattern of dark streaks, called lineae.
[Image: Europa's distinctive lineae]
Beneath this icy crust, a liquid-water ocean is thought to exist, and this ocean is considered to be the most likely location for extant extraterrestrial life in the solar system. On Earth, we have habitats very similar to Europa's ocean, and in 1977, colonies of various species were found in the Galapagos Rift. With no access to sunlight, this entirely independent food chain depends on bacteria that derive their energy from oxidizing reactive chemicals such as hydrogen and hydrogen sulphide, in a process called chemosynthesis.
Chemosynthetic life forms provide a possible model for life in Europa’s ocean. Active geological processes are driven by the heat of tidal flexing, possibly enabling simple organisms to cluster around hydrothermal vents or cling to the lower surface of the ice layer. Even more promisingly, planetary scientist Professor Richard Greenberg calculated in 2009 that cosmic rays impacting on Europa’s surface convert the ice into oxidizers, which could then be absorbed into the ocean below as water wells up to fill
cracks. If this same process applies to Europa, Professor Greenberg estimates that Europa's ocean could achieve an oxygen concentration greater than that of Earth's oceans within a few million years. This would enable Europa to support not only anaerobic microbial life but also large organisms such as fish. As our knowledge of the Universe expands, so does the possibility of finding extra-terrestrial life. Beyond our solar system, there could be countless worlds harbouring life. Indeed, NASA has identified a number of planets believed to be similar enough to Earth to be able to support life. These planets must orbit their star within its 'habitable zone' and have the potential to hold liquid water. In the constellation of Libra, 20 light years away, the planet Gliese 581 d orbits the star Gliese 581. Classified as a super-Earth due to its having a mass of between 7 and 14 times that of our planet, Gliese 581 d is now believed to be one of the likeliest known locations of extra-terrestrial life outside our solar system. A team of astronomers from Geneva Observatory released findings in 2009 that Gliese 581 d might be almost entirely covered by a large and deep ocean, with a greenhouse effect heating up the planet. The social networking site Bebo beamed a high-power transmission, 'A Message From Earth', to Gliese 581 d in 2008 in the hope of soliciting a response; the earliest we can hope to receive one, however, is in 2049. Clearly it is a long shot to expect interstellar correspondence with far-off planets. But man's quest for alien life is expanding all the time. From its humble beginnings thousands of years ago when ancient man used to stare up at the stars, to the satellites and probes we send up into space today, we have come a long way in our quest. We have walked on other worlds, we have seen other planets. We have even mapped the universe. Surely, in a Universe as large as ours, it is only a matter of time before we find extraterrestrial life.
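The 2049 reply date quoted above is simple light-travel arithmetic. A quick check, taking the stated distance of 20 light years and the 2008 transmission date (the extra year in the article's figure reflects the distance being slightly over 20 light years):

```python
# Radio waves travel at the speed of light, so a signal covers one light year per year.
DISTANCE_LY = 20   # stated distance to Gliese 581 d (actually a little over 20)
SENT = 2008        # year 'A Message From Earth' was transmitted

arrival = SENT + DISTANCE_LY              # message reaches Gliese 581 d
earliest_reply = arrival + DISTANCE_LY    # an immediate answer needs another 20 years

print(f"message arrives around {arrival}")            # ~2028
print(f"earliest possible reply around {earliest_reply}")  # ~2048-2049
```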
Feeling Down?
Depression, indeed a great deal of psychology, is seen as an unscientific and unsubstantiated field of research. 'Depressed? Abnormal emotional profile? How about you man up...' is often the kind of view taken. If the post-traumatic stress identifiable in soldiers after witnessing the horrors of war has taught us nothing, then perhaps current research will. Current research by George Zubenko of the University of Pittsburgh is an attempt to map the chromosomal regions systematically linked to severe depression. Already such research has yielded results suggesting the molecular markers for depression differ from men to women, and such a marker not only identifies depression as a real problem, but perhaps indicates a method of future treatment. So in this article I intend not only to show that depression is a serious condition but to identify what this condition entails. So what is depression? What symptoms does it exhibit? If you're depressed you often lose interest in things that you used to enjoy. Depression commonly interferes with your work, social and family life. In addition, there are many other symptoms, which can be physical, psychological and social. For example, symptoms may include continuous low mood or sadness, feelings of hopelessness and helplessness, as well as low self-esteem, among others. Already the problems with identifying depression become apparent - can anyone really say they haven't at any point felt some of the above feelings? I would with confidence say no. In addition, a particularly quiet, reserved fellow may well appear to be depressed, but in fact is simply not that gregarious. Since the identification of depression has for so long been purely subjective, it is from here that the dismissive stance upon it has arisen. This unfortunate state of affairs has now changed, however, and three physiological markers of depression have been found which help not only in the diagnosis of depression but also in treatment:
• Depression literally makes the world dull
This is because the ability to perceive contrast is impaired. To investigate links between mood disorders and vision, Emanuel Bubl at the University of Freiburg, Germany set up an experiment which ran an electrode along one eye in each of 40 people with depression, and 40 people without. The electrodes measured activity in the nerves connecting photoreceptors - which detect different aspects of light - to the optic nerve, but not the brain. Participants sat in a dimly lit room and were asked to observe a black and white chequered screen which became greyer in six distinct stages, reducing the contrast between each square. Each stage was presented for 10 seconds, and the experiment was repeated over an hour. The team found that electrical signals to the optic nerve were lower in people with depression. This experiment perhaps serves as the template for a diagnostic test for depression (a rough sketch of this kind of stimulus appears after this list).
• If you're depressed you'll 'nose' all about it.
Depression seems to link with a reduction in olfactory bulb size, the area in the brain known to be responsible for neurological processing of smell, as shown by MRI scans. Depression, schizophrenia and seasonal affective disorder all suppress the sense of smell. To find out why, researchers at the University of Dresden Medical School in Germany devised an experiment whereby 21 people with major depression and 21 who weren't depressed were exposed to a chemical with a faint odour, gradually increasing the concentration until the volunteers could smell it. Non-depressed people were able to smell the chemical at significantly lower levels than the depressed volunteers, and in addition the olfactory bulb was on average fifteen percent smaller in depressed people than in non-depressed people. The researchers also found a strong negative correlation between levels of depression and olfactory bulb size.
• You probably won't be getting a sugar high if you're depressed.
Recent research has suggested that those with many symptoms of depression are about 60% more likely to develop type 2 diabetes than people not considered depressed. Mercedes Carnethon at North-western University's Feinberg School of Medicine in Chicago, Illinois, US tracked 4681 men and women currently aged 65 and older from 1989 (when they did not have diabetes). The participants were screened annually over the next decade for numerous symptoms of depression. Of course, the common-sense counter-argument to any conclusion that could be found from this study would be, 'a depressed person, with supposedly low self-esteem, would surely not be as careful to look after himself? So then the cause of the diabetes is only indirectly related to depression but is in reality very removed.' However, the research suggests that it is the depression which causes the higher proportion of diabetes, in spite of the adverse effect depression may have on lifestyle. Perhaps the raised level of the stress hormone cortisol in depressed people may be to blame. If this is the case, early markers of diabetes could perhaps serve as a further diagnostic indicator of depression, as well as a clear physiological symptom.
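A toy reconstruction of the kind of stimulus described in the first bullet point: a black-and-white chequerboard whose contrast is reduced in six distinct stages. This is my own sketch, not the researchers' code; the board size and the exact contrast values are assumptions.

```python
import numpy as np

def chequerboard(n_squares=8, square_px=32):
    """Full-contrast chequerboard with values -1 (black) and +1 (white)."""
    pattern = np.indices((n_squares, n_squares)).sum(axis=0) % 2   # 0/1 tiles
    board = pattern.repeat(square_px, axis=0).repeat(square_px, axis=1)
    return board * 2.0 - 1.0

def contrast_stages(n_stages=6):
    """Yield (contrast, image) pairs, greying the board out step by step."""
    base = chequerboard()
    for i in range(n_stages):
        contrast = 1.0 - i / n_stages            # 1.00, 0.83, ..., 0.17
        image = 0.5 + 0.5 * contrast * base      # grey-level image in [0, 1]
        yield contrast, image

for contrast, image in contrast_stages():
    print(f"contrast {contrast:.2f}: pixel range {image.min():.2f}-{image.max():.2f}")
```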
What actually goes on during depression?
Neurotransmitters are chemicals in the brain which transmit signals between nerve cells. The neurotransmitter serotonin is seen to be responsible for a person's mood. Neurones that secrete serotonin are situated in the brain stem, with their axons extending into
the cortex, cerebellum and spinal cord. This clearly demonstrates the huge area of the brain affected by serotonin.
A lack of serotonin has been linked to depression. Typically, across synapses - the King's Cross stations of the brain - the following occurs:
1. An action potential (a large change in the voltage across the membrane) arrives at the pre-synaptic knob.
2. The pre-synaptic membrane depolarises, calcium ion channels open and calcium ions enter the neurone.
3. This influx of calcium ions causes synaptic vesicles containing neurotransmitter to fuse with the pre-synaptic membrane.
4. The neurotransmitter is released.
5. The neurotransmitter binds with the receptors on the post-synaptic membrane, cation channels open and sodium ions flow through the channels.
6. The membrane of the post-synaptic neurone is depolarised and initiates an action potential.
If you’re not an A level biologist that probably won’t make much sense, but the essence of it is, that neurotransmitters trigger impulses in the body, which result in different physiological action. The Gene 5-HTT codes for a transporter protein that controls serotonin re-uptake in the pre-synaptic neurones (thus allowing it to be re-used) is shortened in some people, thus resulting in less re-uptake. This now means that there is less serotonin to trigger action potentials; there is therefore less nerve impulse than normal transmitted around the brain, which
has been seen to cause depression. The issues that arise out of this are manifold. Why do some people become depressed after exposure to what would seem objectively mundane life experiences, compared to others who have had far more turbulent lives? If all our behaviour is a result of chemical transmission in reaction to a given stimulus, it becomes hard to conceive of how we could have free will and responsibility for our own actions. These questions are not opening cans of worms; these questions are opening industrial oil-tanker-sized boats filled with worms. Although it is not currently possible to give an answer to these questions, what we can know for sure is that depression is a real condition, and one that requires treatment.
Gestational Diabetes
Diabetes mellitus, often referred to simply as diabetes, is a condition in which a person has high blood sugar, either because the body does not produce enough insulin, or because cells do not respond to the insulin that is produced. There are three main types of diabetes:
Type 1 diabetes: results from the body's failure to produce insulin, and presently requires the person to inject insulin.
Type 2 diabetes: results from insulin resistance, a condition in which cells fail to use insulin properly, sometimes combined with an absolute insulin deficiency.
Gestational diabetes: is when pregnant women, who have never had diabetes before, have a high blood glucose level during pregnancy. It may precede development of type 2 diabetes.
This article will focus on the details of gestational diabetes. Gestational diabetes is formally defined as "any degree of glucose intolerance with onset or first recognition during pregnancy." This description allows for the fact that patients may have had previously undiagnosed diabetes, or that the patient coincidentally developed diabetes during the pregnancy. Gestational diabetes is a condition in which women without previously diagnosed diabetes exhibit high glucose levels during pregnancy. It is irrelevant to the diagnosis whether the symptoms subside
after pregnancy. The White classification, named after Priscilla White, who pioneered research on the effect of diabetes types on perinatal outcome, is widely used to assess maternal and fetal risk. It distinguishes between gestational diabetes (type A) and diabetes that existed prior to pregnancy (pregestational diabetes). These two groups are further subdivided according to their associated risks and management. There are 2 subtypes of gestational diabetes (diabetes which began during pregnancy):
Type A1: abnormal oral glucose tolerance test (OGTT) but normal blood glucose levels during fasting and 2 hours after meals; diet modification is sufficient to control glucose levels.
Type A2: abnormal OGTT compounded by abnormal glucose levels during fasting and/or after meals; additional therapy with insulin or other medications is required.
Although no specific cause of GDM has been identified, it is believed that hormones produced during pregnancy increase resistance to the action of insulin; this causes the glucose tolerance of the body to be impaired. Gestational diabetes does not really have any clear-cut symptoms, but the following experiences have been noted:
• Tiredness.
• Increased thirst.
• The need to urinate more often.
• Increased hunger.
Some factors which make a patient more at risk of developing GDM are:
• Previous diagnosis of gestational diabetes, impaired glucose tolerance or impaired fasting glycaemia.
• Family history: having a first-degree relative with type 2 diabetes.
• Increased risk as the woman ages, especially for women over 35. This is because as the woman ages hormone levels gradually decline, especially the sexual hormones such as oestrogen, which is thought to stimulate the endothelial cells and cause vasodilation due to the release of NO. Therefore, as less oestrogen is produced, NO is not produced in as great quantities as it once was and the woman becomes more susceptible to high blood pressure.
• Ethnic background.
• Being overweight, obese or severely obese increases the risk factor by 2.1, 3.6 and 8.1 respectively.
• Poor obstetric history.
• There is double the risk of GDM in smokers.
• Polycystic ovarian syndrome.
The precise mechanisms underlying gestational diabetes remain unknown. The main characteristic of GDM is increased insulin resistance. This is thought to be caused by interference with insulin by the pregnancy hormones as the insulin binds onto the receptor. This interference most likely occurs at the level of the cell signalling pathway behind the insulin receptor. Insulin resistance is a normal occurrence surfacing in the second trimester of pregnancy, which progresses to levels seen in non-pregnant patients possessing type 2 diabetes. It is thought that this is a precaution to ensure a glucose supply to the growing foetus. Insulin promotes the entry of glucose into most cells, so the insulin resistance prevents glucose
from entering the cells properly. Therefore, glucose remains in the blood stream causing glucose levels to rise. Because of the higher levels of glucose in the blood stream more insulin needs to be produced in order to overcome this, about 1.5-2.5 times more insulin is produced than in a normal pregnancy.
Gestational diabetes affects up to 4% of pregnancies and is associated with foetal macrosomia (when a baby is large for the gestational age). Foetal growth is a complex process influenced by determinants such as genetics, maternal factors, uterine environment and maternal and foetal hormones.
Entering pregnancy overweight or obese, or gaining excessive gestational weight, could increase the risk of gestational diabetes mellitus (GDM), which is associated with negative consequences for both the mother and the offspring. Weight management through nutritional prevention strategies could be successful in reducing the risk of GDM.
Because glucose travels across the placenta (diffusion facilitated by Glut-3 carriers), the foetus is exposed to higher glucose levels; this leads to increased foetal levels of insulin, since the insulin from the mother is unable to pass through the placenta. The growth-stimulating effects of insulin can lead to excessive growth and a large body (macrosomia). After birth, the newborns are no longer in a high glucose environment; however, they are still left with high levels of insulin production and a susceptibility to low blood glucose levels (hypoglycaemia). A number of screening and diagnostic tests have been used to look for high levels of glucose in plasma or serum in defined circumstances. One method is a stepwise approach where a suspicious result on a screening test is followed by a diagnostic test. Alternatively, a more involved diagnostic test can be used directly at the first antenatal visit in high-risk patients. Screening for GDM is usually done at 24-28 weeks of gestation. Universal screening for gestational diabetes mellitus (GDM) has been a topic of ongoing controversy for many years. In 2005, the French Health Authority concluded that no recommendation could be issued because of insufficient evidence. Recently, several studies have clarified the issues. It is now clearly established that women with GDM, including mild forms, are at increased risk of perinatal complications. Randomized controlled trials demonstrate that treatment to reduce maternal glucose levels improves perinatal outcomes.
Today, the rationale for screening appears unquestionable. There are simple screening tests. However, it remains difficult to define threshold values, because there is a strong, continuous association of maternal glucose levels with increased risks of adverse pregnancy outcomes.
Non-challenge blood glucose tests, in which the subject is not challenged with glucose solutions, include:
• Fasting glucose test – after a period of about 8-14 hours without food, the blood glucose level is tested.
• 2 hours after a meal, the blood glucose is tested.
Other tests for glucose include: • The screening glucose challenge test (O’Sullivan test) – This test is performed between 24–28 weeks of gestation. It involves drinking a solution containing 50 grams of glucose, and measuring blood levels 1 hour later. If the cut-off point is set at 140 mg/dl (7.8 mmol/l), 80% of women with GDM will be detected. If this threshold for further testing is lowered to 130 mg/dl, 90% of GDM cases will be detected, but there will also be more women who will be subjected to a consequent Oral Glucose Tolerance Test unnecessarily. • Oral glucose tolerance test - The OGTT should be done in the morning after an overnight fast of between 8 and 14 hours. During the three previous days the subject must have an unrestricted diet (containing at least 150 g carbohydrate per day) and unlimited physical activity. The subject should remain seated during the test and should not smoke throughout the test. The test involves drinking a solution containing a certain amount of glucose, and drawing blood to measure glucose levels at the start and at set time intervals thereafter. The following are the values which the American Diabetes Association considers to be abnormal during the 100 g of glucose OGTT: • Fasting blood glucose level ≥95 mg/dl (5.33 mmol/L)
• 1 hour blood glucose level ≥180 mg/dl (10 mmol/L)
• 2 hour blood glucose level ≥155 mg/dl (8.6 mmol/L)
• 3 hour blood glucose level ≥140 mg/dl (7.8 mmol/L)
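A small worked example applying the ADA thresholds listed above for the 100 g OGTT. The function name and the sample readings are hypothetical, and the sketch is purely illustrative of how the criteria combine, not clinical guidance.

```python
# Thresholds taken directly from the list above (100 g OGTT, values in mg/dl).
ADA_100G_OGTT_LIMITS_MG_DL = {
    "fasting": 95,
    "1 hour": 180,
    "2 hour": 155,
    "3 hour": 140,
}

def abnormal_ogtt_readings(readings_mg_dl):
    """Return the time points at which a reading meets or exceeds its limit."""
    return [t for t, value in readings_mg_dl.items()
            if value >= ADA_100G_OGTT_LIMITS_MG_DL[t]]

# Hypothetical patient readings in mg/dl:
sample = {"fasting": 92, "1 hour": 185, "2 hour": 160, "3 hour": 120}
print(abnormal_ogtt_readings(sample))   # ['1 hour', '2 hour']
```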
The goal of treatment is to reduce the risks of GDM for mother and child. Scientific evidence is beginning to show that controlling glucose levels can result in less serious foetal complications (such as macrosomia) and increased maternal quality of life. A repeat OGTT should be carried out 2–4 months after delivery, to confirm the diabetes has disappeared. Afterwards, regular screening for type 2 diabetes is advised. Treatment consists of glucose monitoring, dietary modification and exercise. Insulin therapy is the mainstay of treatment, although glyburide and metformin may become more widely used. In women receiving pharmacotherapy, antenatal testing with non-stress tests and amniotic fluid indices beginning in the third trimester is generally used to monitor fetal well-being. The method and timing of delivery are controversial. Women with gestational diabetes are at high risk of subsequent development of type 2 diabetes. Lifestyle modification should therefore be encouraged, along with regular screening for diabetes. If monitoring reveals failing control of glucose levels with these measures, or if there is evidence of complications like excessive fetal growth, treatment with insulin might become necessary. The most common therapeutic regime involves pre-meal fast-acting insulin to blunt sharp glucose rises after meals. Care needs to be taken to avoid low blood sugar levels (hypoglycemia) due to excessive insulin injections.
Some oral glycemic agents have been shown to be safe to use during pregnancy and to be a significantly better option for the developing fetus than poorly controlled diabetes. Glyburide, a second generation sulfonylurea, has been seen to act as an effective replacement for insulin therapy; in one study just 4% of women also needed supplemental insulin to properly manage blood sugar levels. Metformin has shown promising results. Treatment of polycystic ovarian syndrome with metformin during pregnancy has been noted to decrease GDM levels. A recent trial has shown that many women prefer to use the metformin tablets rather than the insulin injections and that metformin is as safe as and as effective as insulin injections. Severe neonatal hypoglycemia was less common in insulin-treated women, but preterm delivery was more common. Almost half of patients did not reach sufficient control with metformin alone and needed supplemental therapy with insulin; compared to those treated with insulin alone, they required less insulin, and they gained less weight. Although the long-term effects of metformin treatment are not entirely certain, a follow-up at 18 months of women with polycystic ovarian syndrome showed that there were no developmental abnormalities. GDM poses a danger to mother and child; this danger is related to high blood glucose levels and their consequences. As the blood glucose level increases, the risk presented by GDM also increases. Treatment for managing these blood glucose levels can significantly reduce the risks of GDM. Two main risks of GDM are growth abnormalities and chemical imbalances after birth; these may require admission into a neonatal care unit. Infants born to mothers with GDM are at risk both of macrosomia and of being small for their gestational age. Macrosomia in turn increases the risk of instrumental deliveries or problems during vaginal delivery. Macrosomia may affect 12% of normal women compared to 20% of patients with GDM. However, the evidence for each of these complications is not equally strong; in the Hyperglycemia and Adverse Pregnancy Outcome
(HAPO) study, for example, there was an increased risk for babies to be large but not small for gestational age. Research into complications of GDM is difficult because of the many confounding factors (such as obesity). Neonates are also at an increased risk of hypoglycemia, jaundice, polycythemia (high red blood cell mass), low blood calcium (hypocalcaemia) and low blood magnesium (hypomagnesaemia). Unlike pregestational diabetes, gestational diabetes is not thought to be individually responsible for birth defects. Birth defects usually originate sometime during the first trimester, before the 13th week of pregnancy, whereas GDM develops later and is least pronounced during the first trimester, so it is unlikely to be the sole cause of any birth defects. Although a large case-controlled study has linked gestational diabetes with a small group of birth defects, this association was generally limited to women with a higher body mass index (BMI) (≥25 kg/m2). This makes it difficult to know whether the birth defects were due to the gestational diabetes or whether the women had pre-existing, undiagnosed type 2 diabetes before the pregnancy. Due to contrasting studies, it is unclear whether women with GDM are at a higher risk of developing pre-eclampsia. In the Hyperglycemia and Adverse Pregnancy Outcome study, the risk of pre-eclampsia was reported to be 13-37% higher in women with GDM; however, not all of the relevant factors may have been accounted for. GDM usually resolves itself once the baby is born. Women who have been diagnosed with gestational diabetes mellitus have an increased risk of developing diabetes mellitus in the future. Women who required insulin to manage GDM have a 50% chance of developing diabetes within a time span of 5 years. The risk of developing diabetes seems to be highest during the first five years. Children of women with GDM have an increased risk of childhood and adult obesity and an increased risk of glucose intolerance and type 2 diabetes later in life. This risk is related to the maternal glucose levels. It is
currently unclear how much genetic susceptibility and environmental factors each contribute to this risk, and whether treatment of GDM can influence this outcome. There is little data on whether GDM has an effect on the mother's risk of developing other conditions. However, results from the Jerusalem Perinatal Study showed a tendency towards breast and pancreatic cancer, but there is not enough research for this to be conclusive.
Glossary:
Polycystic ovarian syndrome – PCOS is one of the most common female endocrine disorders, affecting approximately 5-10% of women of reproductive age (12-45 years old), and is thought to be one of the leading causes of female infertility.
Perinatal – The period shortly before and after birth.
Neonatal – Newly born.
Cell confluence – When the cells being grown in a container reach the stage at which they join together, they must be split to allow them room to continue growing.
Pathophysiology – The study of the changes to normal mechanical, physical and biochemical functions caused by a disease.
Trimester – In pregnancy, one of three sections of 3 months.
Glycemic agent – A drug used to help control blood glucose levels.
Metformin – A drug used to control the levels of glucose in the body, chiefly by reducing the amount of glucose produced by the liver and improving the body's sensitivity to insulin.
VWF – Von Willebrand factor is a blood glycoprotein involved in haemostasis. It is deficient or defective in Von Willebrand disease and involved in a large number of other diseases. It is a large multimeric glycoprotein present in blood plasma and produced constitutively in endothelium. VWF's primary function is binding to other proteins, in particular factor VIII, which is released from VWF by the action of thrombin.
Acne - Pick on someone your own size
So, you look in the mirror at your smooth, handsome, manly face, thanking God for your glamorous look. But suddenly, you come across a bump that's small, spherical, a little red and almost calling out to be squeezed and popped by its host. But that only makes it worse. And things don't get any better from there on as you notice another spot or pimple or zit. Well, here at Scope, we may not have a cure for this annoying little creature, but we certainly have all the information you'll ever need about the irritating skin condition known as acne. First, let's ask: what is acne? In simple terms, acne vulgaris, in full, is a common skin condition which results from a myriad of changes in skin structures. These structures, known as pilosebaceous units, consist of a hair follicle and a sebaceous gland, responsible for secreting sebum, an oily substance for lubricating the skin and hair, a normal body process. However, in cases of acne, follicles, also known as pores in the skin, get blocked. This blockage is the result of the sebum getting trapped before reaching its designated destination, the surface of the skin. The build-up of sebum, sometimes merely referred to as oil, causes bacteria to grow and eventually results in acne. But we'll come on to that later. Now for some general information. As you probably know from past experience, acne occurs most commonly during adolescence. It affects almost all of us - a whopping 89% of teenagers - and these trends can continue into adulthood. So why does it affect teenagers? Just for fun? Not exactly. Acne often occurs during this time due to an increase in male sex hormones like testosterone. This hormone accumulates in both genders, not just the men. The good news is that acne slowly disappears over time as we enter adulthood. But some of us are not so lucky, with many cases of the skin condition sometimes continuing past the age of 40.
One square inch of your skin is home to:
• 65 hairs
• 100 sebaceous glands
• 78 yards of nerves
• 650 sweat glands
• 19 yards of blood vessels
• 9,500,000 cells
• 1,300 nerve endings
• 20,000 sensory cells
Acne can occur in various forms, shapes and even colours, but it can be split into two types: inflammatory and non-inflammatory. Let's start with the latter. The formation of a blockage in a pore and the subsequent bacterial growth produce an arrangement known as a microcomedone, which becomes a non-inflamed skin blemish called a comedone. Comedones come in the form of whiteheads or blackheads. A whitehead occurs when trapped sebum and bacteria stay below the skin surface; whiteheads normally show up as tiny white spots, though some are microscopically small. A blackhead, on the other hand, occurs when the pore opens to the surface, so the sebum oxidises and turns its characteristic dark brown-black colour. Blackheads can be present for a very long time because their contents drain to the surface very slowly.
The other form of acne is inflammatory acne. This is a more severe form of the condition and comes in four types. A papule forms when there is a break in the follicular wall; the body's immune response sends white blood cells rushing in, and so the
pore becomes inflamed. Alternatively, a pustule can form - in fact, these form after a papule. Days after the break in the follicular wall, the white blood cells continue their journey to the surface of the skin, causing a shiny, white, fluid-filled "zit" to form. Sometimes, however, an inflamed lesion completely collapses or explodes, whether through picking of the skin or at random. This triggers the same white blood cell response, causing inflammation of the skin surrounding the initial papule. These lesions are larger because the inflammation is wider, and they can take the form of a nodule or a cyst. A nodule forms when a follicle breaks along the bottom, often causing total collapse of the structure and producing a large inflamed bump; these are often sore to the touch. Some inflammatory reactions instead result in cysts: very large lesions filled with pus, the whitish-yellow fluid formed during inflammatory bacterial infections. Cysts often appear where hair, and therefore follicles and sebaceous glands, are dense - the buttocks, groin and armpits, as well as other areas where sweat collects in hair follicles.
Some effects of acne:
• Scarring
• More severe bacterial infections
• Reduced self-esteem, as acne adds to other insecurities
• In some severe cases, depression and even suicide
In all cases, these forms of acne begin the same way: levels of the steroid hormone dehydroepiandrosterone rise, sebum production increases and the sebaceous glands enlarge. These processes normally lead to a non-inflammatory form of acne. However, the naturally occurring bacterium Propionibacterium acnes can result in
inflammation and thus the formation of the second, inflammatory form of the condition, including lesions such as pustules and papules.
Primary Causes of Acne
Bacteria in the pores
Obviously, if there is more of the bacterium that causes acne (P. acnes), blockages will be more severe and so the acne will be more severe.
Hyperactive sebaceous glands
Greater secretion of sebum in the follicles results in more blocked pores and thus more severe acne.
Use of anabolic steroids
Chemicals found in such steroids have been found to increase acne levels.
Genetic history
Like many other characteristics, such as height and appearance, the tendency to develop this skin condition runs in families. So, if someone you know has extremely severe acne, it is likely that several members of his or her family have gone through the same stage. Factors that affect acne formation, such as skin composition, are often directly influenced by genetic information.
Hormonal activity
Changing hormone activity, for example during the menstrual cycle and during puberty, can have a direct effect. During puberty, an increase in the male sex hormones called androgens causes the follicular glands to grow and sebum production to increase. These two changes make acne more likely to develop, which explains why the disorder is so common in adolescents. Minor habits such as skin scratching, or a greater tendency to skin irritation or inflammation, also mean that bacteria are more likely to grow and follicles more likely to rupture - essentially, such a person is more prone to acne.
Diet
Some research projects have shown quite significant correlations between the intake of certain foods and the severity of acne. For example, after extensive epidemiological studies, an association with dairy products such as milk and cottage cheese has been confidently reported. Other trends have been found too: people with low blood levels of vitamins A and E are more likely to have acne than those with normal levels. There have also been suggestions of a strong positive correlation between acne severity and the intake of rapidly digested carbohydrates such as soft drinks and white bread. The proposed explanation is that the resulting surge of glucose stimulates insulin secretion, which causes the release of IGF-1; this hormone acts directly on the pilosebaceous unit, stimulating the build-up of dead skin cells in pores and so facilitating acne formation. Having said this, the link has not been confirmed, so carry on eating those carbs!
Stress
Although this is a debatable factor, research has shown that increased acne severity is "significantly" associated with increased stress levels. Further to this, the National Institutes of Health list stress as a factor that "can cause an acne flare", and a study that monitored youngsters in Singapore found a "significant positive correlation" between stress levels and acne severity.
Treatments
Hygiene
An easy yet very effective treatment that anyone can manage is simply to take care of your skin. Proper washing and general skin care help to remove both the bacteria and the oils that block pores and lead to acne, and good hygiene in general can prevent oils and bacteria from reaching the face.
You may have seen the many products available for treating acne. Most of these remedies take months to show any improvement, which is probably why Clearasil doesn't work in three hours as the adverts claim! Most treatments work in a combination of ways, mainly attempting to:
• normalise shedding into the pore to prevent blockage
• kill P. acnes, the bacterium associated with acne vulgaris
• reduce inflammation
• manipulate hormone levels
Topical bactericidals
Other treatments
• Topical antibiotics
• Hormonal treatments
• Phototherapy
• Surgery
Some less common treatments
• Aloe vera
• Azelaic acid
• Heat
• Zinc
• Tea tree oil
• Pantothenic acid
Oral retinoids
Daily oral intake of a vitamin A derivative such as isotretinoin can cause a long-term reduction of acne. The substance works by reducing the secretion of oils from the glands and thus preventing blockages in the pores. This is one of the few treatments that can often cure acne for good. However, some reports allege that oral retinoids can cause liver damage, so there are risks as well as side effects such as dry skin and nosebleeds.
There you have it - an in-depth look at acne. So, next time you see a big, red, juicy pimple on your face, remember exactly how it got there, what may have caused it and what you can do to treat it - unless you can't resist popping it, that is.
Theories of General Anaesthetic Action
Anaesthetics are among the most fundamental discoveries in modern medicine, as they allow a state of reversible unconsciousness. However, the mechanism of anaesthesia is still not fully understood.
A Brief History
Before the first administration of an anaesthetic in 1846, surgery was a terrifying last resort, a final attempt to save life. Unlike today, surgeons were judged not on their accuracy but on their speed. As well as being a horrifying experience for both surgeon and patient, the speed and inaccuracy with which operations were performed often resulted in death. On 16th October 1846 William Morton administered the chemical diethyl ether, the first anaesthetic, in a public operation. This sent shockwaves through the scientific community and revolutionised surgery. Despite the breakthrough, however, the use of anaesthetics remained dangerous. In 1847 chloroform was first used as an anaesthetic; although more potent, it sometimes caused disastrous side effects such as liver damage, and even death. Over the course of the next hundred years, advances such as local anaesthesia, intravenous injection, muscle relaxants and life support made anaesthetic treatment the relatively safe process we know today.
General Anaesthetic Action
Even though anaesthetists are highly trained and have a deep knowledge of anaesthetics, the biochemistry of anaesthetic action remains largely theoretical. Von Bibra and Harless proposed a mechanism of anaesthetic action in 1847, only a year after that first administration of diethyl ether. They suggested that general anaesthetics dissolve in the fatty components of neurone cells, thereby changing their activity. In 1899 Meyer and Overton produced experimental evidence to support this hypothesis. They found a positive correlation between an
anaesthetic's increased solubility in lipids (relatively small organic molecules, composed of one glycerol molecule and three hydrocarbon side chains) and its potency, as shown in Figure 1.
Figure 1: The Meyer-Overton correlation
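As a rough quantitative restatement of the correlation in Figure 1 (not given in the original article, and using the standard dose measure MAC, the minimum alveolar concentration at which half of patients do not respond to a stimulus), potency rises in proportion to lipid solubility:
\[
\text{potency} \;=\; \frac{1}{\mathrm{MAC}} \;\propto\; \lambda_{\text{oil/gas}}
\qquad\text{equivalently}\qquad
\mathrm{MAC}\times\lambda_{\text{oil/gas}} \;\approx\; \text{constant},
\]
where \(\lambda_{\text{oil/gas}}\) is the oil/gas partition coefficient, a measure of how readily the anaesthetic dissolves in an oily, lipid-like phase rather than remaining in the gas phase.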
Lipid bi-layer expansion hypothesis
This hypothesis was based on the Meyer-Overton correlation of 1899. Miller and Smith suggested in 1973 that hydrophobic (water-hating) anaesthetic molecules would pass into the middle of the lipid bi-layer of a neurone cell membrane. Depending on its molecular volume, the anaesthetic would expand and distort the membrane, significantly altering the membrane ion channels - the proteins in the cell membrane that act as passages for certain inorganic ions. This alteration could induce an anaesthetic effect, as signalling between neurone cells would be impaired. The key idea of this theory is that the volume of chemical inside the membrane plays a more important role than the structure of the chemical itself in determining anaesthetic potency. Once the molecules depart from the bi-layer, the ion channels are restored to their
original shape, making the anaesthetic effect reversible. The theory was also supported by the fact that an increase in atmospheric pressure decreases the anaesthetic effect. Until recently, this hypothesis had been largely accepted by the scientific community (see Figure 2).
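One way to make the pressure observation concrete is the "critical volume" version of this argument: anaesthesia sets in only once the membrane has expanded by more than some small critical fraction, and external pressure squeezes it back below that threshold. The sketch below is illustrative only; the symbols are chosen here and do not appear in the article.
\[
\frac{\Delta V}{V_0}
\;=\;
\underbrace{\frac{n_a\,\bar{v}_a}{V_0}}_{\text{expansion from dissolved anaesthetic}}
\;-\;
\underbrace{\kappa\,\Delta P}_{\text{compression from added pressure}}
\;\ge\; \epsilon_{\text{crit}},
\]
so for a fixed amount of dissolved anaesthetic \(n_a\) (with effective molecular volume \(\bar{v}_a\)), raising the ambient pressure \(\Delta P\) (with \(\kappa\) the membrane's compressibility) pushes the net expansion back below the critical fraction \(\epsilon_{\text{crit}}\) and the anaesthetic effect is lost.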
Objections to the lipid bi-layer expansion hypothesis
However, there are several objections to this theory. Anaesthetic molecules, like many other molecules, have isomers: molecules with the same chemical formula but a different physical structure. This structural difference can lead to different properties between isomers. Optical isomers are non-superimposable mirror images of each other and share the same physical properties, such as molecular volume. Optical isomers of anaesthetics are known to have very similar molecular volumes and lipid solubilities, so according to the expansion hypothesis they should have very similar anaesthetic potencies. However, this is not true.
Figure 2: Lipid bi-layer expansion
Some optical isomers have no anaesthetic effect at all. This suggests that anaesthetics might interact more specifically with proteins within the membrane, rather than simply disrupting the lipid bi-layer. Another objection is that some chemicals predicted to have a high anaesthetic potency (a high lipid solubility and molecular volume) only induce amnesia and do not suppress movement: they affect the brain but not the spinal cord, whereas true general anaesthetics depress both brain and spinal cord function. From this evidence it can be inferred that a further interaction is taking place between the anaesthetic molecule and other molecular targets, not just the neurone membrane bi-layer.
Modern Lipid Hypothesis
In 1997 R. A. Lerner proposed the modern lipid theory. This works on the same principles as the older lipid hypothesis, but differs by stating that the anaesthetic molecule does not act directly on the lipid membrane itself; instead it disturbs specific lipid formations on the surface of the membrane. A redistribution of pressures through the membrane then causes the channel proteins, or connexons, to close. These connexons transmit signals between neurone cells, and the blocking of these signals induces an anaesthetic effect. However, the detailed mechanism of exactly how the molecule disturbs the lipid formation is unclear. This theory was investigated by exploring the thermodynamics of the lipid membrane. Lerner used the example of the lipid oleamide to explain anaesthetic action. Oleamide is found in the cerebrospinal fluid of sleep-deprived cats and is thought to affect connexons as described above. Since both oleamide and anaesthetics induce a deep sleep in animals, Lerner suggested that anaesthetics may work in much the same way.
Membrane Protein Hypothesis
Although lipids have been at the centre of hypotheses and research, a contrasting theory of protein action has been proposed. In 1984 N. P. Franks and W. R. Lieb found that the Meyer-Overton correlation (Figure 1) can be reproduced using a soluble protein. This led to further research showing that the activity of certain proteins is inhibited in the presence of anaesthetic, with no lipid involved at all. Franks and Lieb found a positive correlation between the potency of an anaesthetic and how effectively it inhibits the protein. This evidence suggests that anaesthetics affect proteins directly by binding to receptors and changing their function, in direct contrast to the non-specific lipid theories. Various studies using NMR (Nuclear Magnetic Resonance) have shown that inhaled anaesthetic molecules can interact with the motion of proteins on a very small time scale, by affecting the flexible loops that hold protein chains together. The Membrane Protein Hypothesis opens up a new perspective on anaesthetic action, as it is the first theory backed by experimental evidence that anaesthetics act in specific ways on proteins.
In Conclusion
The implications of fully understanding anaesthetic action are enormous. A greater understanding of the process will increase the safety of anaesthetic procedures. There is also potential for greater advances in medicinal drugs relating to the nervous system in general. If the mechanisms of anaesthetics can be fully understood, their effects may be controlled and the principles behind anaesthetic action applied to other drugs.
Intelligent Octopuses: New Evidence for Invertebrate Intelligence
What is the most intelligent animal? The chimpanzee? The dolphin? Ask a person to list the top ten most intelligent animals and nearly everyone would include a select group of well-publicised mammals and birds. An observer would be forgiven for imagining a progression of intelligence through the tree of life, with brainless plants at the bottom, then drone-like ants, through famously (and falsely) forgetful goldfish, frogs, snakes, lizards, birds such as the parrot and raven, small mammals like squirrels, trainable dogs, elephants, dolphins, primates, with man at the top. The logic behind this is understandable, for with rising complexity comes a corresponding average increase in intelligence. The ability to use tools and solve complex problems is seen as reserved for mammals and a few birds, warm-blooded vertebrates that are relatively new on the evolutionary scene.
Navigating a complex environment is clearly no problem for the octopus – they have solved complicated mazes, and are notorious escape artists, capable of leaving their tank, walking across a room and entering another tank in order to pilfer food[1]. The most recent and concrete evidence of octopus tool use was documented on video by a cameraman from the Museum Victoria in Australia: four veined octopuses in the wild were filmed carrying one or two coconut half-shells, often forcing them to 'stilt-walk' along the sea bed. When the cameraman swam too close, the octopus under observation invariably dropped the shell and dived inside for protection[2]. No observer can deny that the coconut halves count as tools: taken from their original context, the octopus carries the shells (to the detriment of its own ability to move freely) in anticipation of their use as portable shelter[3].
In a separate aquarium study, a researcher added an object that behaved in an unexpected way to the tank – a neutrally buoyant pill bottle. Various responses were observed. Some individuals grabbed the bottle and immediately tried to eat it, others examined it and soon lost interest, but one individual displayed a unique response: by pushing the bottle with a water jet into the flow from the tank's filter, the female octopus was able to throw the bottle back and forth in a game of catch. This playful activity showed a high level of interaction with the environment, requiring memory, co-ordination and curiosity to recognise the unusual event of the bottle returning and to replicate it accurately.
With all of the evidence accrued pointing to genuine intelligence in octopuses, the inevitable question is: how did this intelligence come about? The answer may lie in the similarity between the situation of an octopus and that of an early hominid. There are fewer differences between the two than might be imagined. Both have prehensile limbs capable of grasping, carrying and manipulating objects; indeed, with eight limbs and wide-angle vision it is at least possible for an octopus to manipulate multiple objects simultaneously. Both exist in complex, nutrient-rich environments with high biodiversity – a coral reef as opposed to a forest – and both are physically fragile compared to the dominant predators. Physical weakness promotes the development of strategic thinking: octopuses are ambush predators, waiting for prey to pass by a hideaway before springing out and enveloping them in their tentacles. A complex environment promotes the development of detailed memory and object identification: the octopus remembers where its habitual shelters are, and must learn to identify possible dangers in order to avoid being ambushed by other predators outside its shelter. Finally, prehensile limbs allow object recognition to progress into creative object application, as demonstrated by the coconut halves. Current evidence places the veined octopus's intelligence slightly above that of a dog, demonstrating that intelligence can arise as an advantageous adaptation whenever this shared set of conditions appears, and going some distance towards dispelling the convenient fantasy of vertebrate superiority.
References
[1] Watch a video of an octopus's escape at www.youtube.com/watch?v=dvv537vye8U
[2] Watch the coconut-carrying octopuses at www.museumvictoria.com.au/discoverycentre/videos/coconut-carrying-octopus/
[3] Reported in Current Biology, Volume 19, Issue 23, 15 December 2009
Bibliography
Front Cover
http://sci.esa.int/science-e/www/object/index.cfm?fobjectid=45902
Biomimetics
1. Asknature – Tubercle Technology blades
2. Abigail Doan – Green Building in Zimbabwe modeled after termite mounds
3. Robin Lloyd – Abalone Armour: Toughest stuff theoretically possible
4. Senthil Sankar & Satish Dhanapal – Wiperless windshields using lotus effect
5. Andrew R. Parker & Helen E. Townley – Biomimetics of Photonic Nanostructures
6. Richard Black – Gecko inspires sticky tape
7. Belle Dumé – Painless needle mimics a mosquito's bite
8. Mark Allard – Characterisation of a polymer-based MEMS pyroelectric infrared detector
9. Darren Murph – LEXID prototype gun can peek through walls
10. Jennifer Ouellette – Through a Lobster's eyes
11. Belle Dumé – Is bubble fusion back?
12. Physorg – Researchers can grow carbon nanotubes in lab using faster, cheaper means
13. Jeremy Faludi – Biomimicry 101
States of Matter
Luc Viatour – www.lucnix.be
Rob Lavinsky – iRocks.com
http://web.mit.edu/newsoffice/2010/expquark-gluon-0609.html
P vs NP
Homer, Steve. "A Short History of Computational Complexity", 2002
Wigderson, Avi. "Knowledge, Creativity and P versus NP"
Devlin, Keith. "The Millennium Problems", Basic Books, 2002
Knuth, Donald. "The Art of Computer Programming: Volume 1", Addison-Wesley, 1968
http://www.scottaaronson.com/blog/
http://rjlipton.wordpress.com
http://www.turing.org.uk/philosophy/ex2.html
Particles and Sparticles
"Higgs, dark matter and supersymmetry: What the Large Hadron Collider will tell us" (Steven Weinberg lecture at the University of Texas)
"In SUSY we trust: What the LHC is really looking for" (Anil Ananthaswamy, New Scientist, 11 November 2009)
http://motls.blogspot.com/2010/07/susy-and-dark-matter.html
http://motls.blogspot.com/2010/07/susy-and-hierarchy-problem.html
http://www.bbc.co.uk/dna/h2g2/A666173 (The Standard Model of Particle Physics, BBC h2g2)
Theories of General Anaesthetic Action
Lerner RA (December 1997). "A hypothesis about the endogenous analogue of general anesthesia"
Janoff AS, Miller KW (1982). "A critical assessment of the lipid theories of general anaesthetic action"
Canlas CG, Cui T, Li L, Xu Y, Tang P (September 2008). "Molecular Anesthetic Modulation of Protein Dynamics: Insight from an NMR Study"
Schrödinger's Cat
"Many-worlds interpretation" – http://en.academic.ru/dic.nsf/enwiki/5598
"Quantum Physics - Schrodinger's Cat 74" – http://hubpages.com/hub/QuantumPhysics---Schrodingers-Cat
"Schrodinger's Cat" – http://www.bbc.co.uk/dna/h2g2/A1073945
Aylward, Kevin. "Quantum Mechanics. The Ensemble Interpretation" – http://www.kevinaylward.co.uk/qm/index.html
Budnik, Paul. "Schrödinger's cat for a 6th grader" – http://www.mtnmath.com/cat.html
de Muynck, Willem M. "'Individual-particle interpretation' versus 'ensemble interpretation' of quantum mechanics" – http://www.phys.tue.nl/ktn/Wim/qm11.htm#obs_evid_ens_int
Prasar, Vigyan. "Erwin Schrodinger: The Founder of Quantum Wave Mechanics" – http://www.vigyanprasar.gov.in/scientists/eschrodinger.htm
Price, Michael C. "The Everett FAQ" – http://www.hedweb.com/manworld.htm#do
Simmonds, Andy. "Quantum Superposition and Schrödinger's Cat" – http://www-staff.it.uts.edu.au/~simmonds/Sophy/superposition.htm
The Nobel Foundation. "The Nobel Prize in Physics 1933: Erwin Schrödinger, Paul A. M. Dirac" – http://nobelprize.org/nobel_prizes/physics/laureates/1933/schrodinger-bio.html
Vaidman, Lev. "Many-Worlds Interpretation of Quantum Mechanics" – http://plato.stanford.edu/entries/qmmanyworlds/
Intelligent Octopuses: New Evidence for Invertebrate Intelligence
Current Biology, Volume 19, Issue 23, 15 December 2009
The Scope 2010/11 Team
Nicholas Parker - Chief Editor
Mr. Roger Delpech - Master in Charge
Aadarsh Gautam - Chief Editor
Salil Patel - Senior Editor
Matthew Earnshaw - Senior Editor
Zachary Spiro - Editor
Tom Ough - Editor
Javi Farrukh - Editor
Aneesh Misra - Editor
Akshay Kishan Karia - Editor
Andrew Yiu - Editor
Viraj Aggarwal - Editor
Josh Goodman - Editor
Keyur Gudka - Editor
The Scientific Journal of the Haberdashers’ Aske’s Boys’ School
The Haberdashers’ Aske’s Boys’ School Butterfly Lane, Elstree, Borehamwood, Hertfordshire WD6 3AF Tel: 020 8266 1700 Fax: 020 8266 1800 e-mail: office@habsboys.org.uk website: www.habsboys.org.uk