YOUNG SCIENTISTS HOW CAN ANGIOGENESIS INHIBITORS BE USED TO TREAT CANCER? RISKS AND BENEFITS OF SUNLIGHT WHEN HALF OF YOUR WORLD IS FORGOTTEN
MAKING PERFUME FROM BACTERIA
Into Space Without Rockets
NUCLEAR FUSION
THE REAL MEANING OF SCIENCE WITH LIZ BONNIN
JUL-DEC 2014 I ISSUE 16 I WWW.YSJOURNAL.COM
Editorial
I am proud to present Issue 16 of Young Scientists Journal. Having updated to a new submission system on our website, allowing articles to be uploaded, edited and peer reviewed online, it is now even easier to submit original research or review articles on any scientific topic that interests you. We are very happy to receive articles from anyone aged 12-20 years old. If you have any questions about an article's suitability, or if you want to run an idea past me before committing, please send me an email at editor@ysjournal.com.
As well as updating our online system, we have changed the way the journal is run by the students. Instead of only having a single Chief Editor, we now have a team of Senior Editors, with each person looking after a different aspect of the journal; my role as Chief Editor is to co-ordinate the rest of the team and to help them where I can. Our Senior Editorial team is made up of Claire Nicholson, who is in charge of the Editorial team and looks after the day-to-day editing of the articles; Chimdi Ota, who is developing a database of everyone who has ever contributed to the journal; James Molony, who looks after the technological support and the website; and Michael Hofmann, who runs the design and publicity side of the journal. My thanks go out to all of them – and indeed ALL the editors – for the effort that they put in to run the journal and develop it in exciting directions.

We are also very proud to announce our partnership with The Royal Society and Lunar Mission One. The Royal Society is celebrating its 350th year of scientific publishing and, as part of this, it is encouraging pupils who have been given grants and help from professional scientists through the Royal Society to write up their research and submit it to Young Scientists Journal. We will then publish the articles in a special edition of the journal next summer to coincide with the Royal Society's Summer Exhibition in July.

Lunar Mission One is a planned unmanned mission to the South Pole of the Moon – an unexplored area – that aims to drill 20-100 m into the Moon's surface to analyse rock dating back 4.5 billion years. It will also leave behind time capsules that anyone can buy space in, to store data or even DNA to immortalise themselves on the Moon. We are part of the educational programme and we aim to encourage participation from school children all over the world, not just to increase their interest in science but to present opportunities for students to drive the actual science, making decisions about many aspects of the mission itself.
This issue sees a huge range of topics, ranging from the myriad risks and benefits of sunlight to whether scientific discoveries are the product of a particular time and place or the result of exceptional genius, a new section of current science in the news written by Rachel Hyde, and even an exclusive interview with the TV presenter Liz Bonnin. Following on from our 'Theme of the Month' of Space, Sansith Hewapathirana has written an article discussing the best methods for reaching space without using rockets for propulsion, and for our Christmas theme, Daniel Chow, aged 12, wrote an article about chocolate, including its history and its chemistry. These days, nuclear power is a controversial topic in Britain, largely due to disasters such as Chernobyl and Fukushima, but in countries such as France and China, nuclear power is considered normal and even vital. In his article, James Pye considers whether nuclear fusion can truly be considered a viable energy source and whether its benefits outweigh its risks. Throughout history, perfume has been made from some very strange ingredients that you wouldn't have thought could contribute to a pleasant fragrance, including cigarette ends, animal anal glands and the contents of a whale's stomach, but now the trend is to genetically engineer bacteria to chemically alter substances to produce new perfumes. Ellie Powell from Kent University explains her research into this field in her article 'Making Perfume From Bacteria'.
Finally, we have a range of fascinating, biologically orientated articles. Sophie Stephens investigates how half of your world could disappear, not just from your sight but from your thoughts as well. Magathe Guruswamy discusses how research into the extracellular matrix is helping to make breakthroughs in restorative medicine by 'regenerating human cells, tissues or organs', as well as looking at the use of bioactive glass in making new tissues, especially new bones. Finally, Sohail Daniel explores a cutting-edge new cancer treatment that uses angiogenesis inhibitors to prevent cancer from stealing blood from its host, before going on to discuss its possible uses in medicine today. We hope you enjoy reading Issue 16 and will watch our website for new articles published week by week. If you are interested in publishing an article, or getting involved in running the journal in any way, please contact editor@ysjournal.com. You might also like to follow us on Facebook and Twitter (@ysjournal)!

Ed Vinson
Chief Editor
Contents

6 News
The latest science news

8 How can Angiogenesis Inhibitors Be Used to Treat Cancer?
Sohail Daniel discusses what angiogenesis is and how tumours trigger blood vessels to grow into them, providing the nutrients and oxygen that the tumour requires to survive.
13 The Risks and Benefits of Sunlight
There are widely recognised benefits to sunlight, including being our main source of vitamin D, but Shivakshi Ravi also looks at the dangers.
17 Is nuclear fusion a viable source of future energy?
The world is on the brink of an energy crisis; James Pye investigates whether fusion is the solution.
21 Chocolate
Daniel Chow looks at one of the world’s favourite snacks.
23 When Half of Your World is Forgotten
Sophie Stephens unravels Hemispatial Neglect, a condition that causes patients to ‘ignore’ half of their space after a trauma.
26 The Real Meaning of Science with Liz Bonnin
Young Scientists Journal’s editorial team leader Claire Nicholson interviews BBC Television presenter Liz Bonnin.
28 Great Scientific Discoveries - Genius or Time & Place?
Sam Wallace asks: Are Great Scientific Discoveries the Product of a Particular Place and Time, or the Result of Exceptional Genius?
34 Bioactive glass in bone tissue engineering & The Extracellular Matrix
Magathe Guruswamy provides an insightful overview into two new areas of scientific research.
36 Into Space without Rockets
With the cost of rockets exceptionally high, Sansith Hewapathirana looks at alternative ways of reaching above the atmosphere.
42 Making Perfume From Bacteria
Ellie Powell explains how a team of undergraduates genetically engineered a strain of E. coli to produce aroma compounds for use in the fragrance and flavourings industry, for Kent iGEM 2014.
45 Advances in polymer-based solar cells
Utkarsh Jain explores the advances in organic solar cell technologies while emphasising the inevitable challenges that lie ahead for this technology.
News I Biology & Chemistry

Four years after his spinal cord was damaged, a paralysed man has regained his ability to walk. Doctors used cells found in the nose, which usually repair damage to nerves in the nasal area, to fill the gap in his spinal cord. This prompted new spinal nerves to grow to replace those that had previously been damaged. Shortly after, feeling and movement returned to his legs. [PA]
Hearts that had stopped beating for 20 minutes have been successfully transplanted into two Australian patients, an operation never performed before. As soon as the heart stops beating, there is not enough oxygen to prevent the death of cardiac cells, normally making the organ unusable for transplant. However, a specialised fluid and pump have doubled the previous four-hour window in which a heart can be successfully transplanted, greatly increasing the number of organs available.

A cure for diabetes could be imminent after scientists discovered how to make huge quantities of insulin-producing cells. Harvard University has, for the first time, managed to manufacture the millions of beta cells required for transplantation. It could mean the end of daily insulin injections for the hundreds of thousands of people living with Type 1 diabetes.
A 400 kg block of copper has become the coldest macroscopic object in history, having been cooled to -273.144 °C for 15 days. The Cryogenic Underground Observatory for Rare Events got their block to 0.006 K by encasing it in a one-of-a-kind container, which is now being used to detect rare forms of radioactivity. [Istituto Nazionale di Fisica Nucleare]
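As a quick check that the two temperatures quoted above are the same (using the standard value of absolute zero, -273.15 °C, which is not stated in the article):

\[
T_{\text{K}} = T_{\text{°C}} + 273.15 = -273.144 + 273.15 = 0.006\ \text{K}
\]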
Adult skin cells have successfully been converted into the type of brain cell affected in Huntington's disease. The brain cells were produced without the need to use stem cells as an intermediary, so no other cell types were formed as by-products. Even more importantly, the cells survived when transplanted into the brains of mice. Much more research is necessary before these cells are used to treat humans; however, researchers are now closer to treating those with Huntington's.
News I Astronomy & Physics
Author: Rachel Hyde
Lunar Mission One, an exploratory robotic mission that will use pioneering drilling technology to deliver extraordinary new insights into the origins of the Moon and Earth, has raised £672,447 on the crowdfunding site Kickstarter. This is the first step towards the mission's plan to build the lander from 2018 onwards, ahead of landing at the Moon's south pole in 2024, where it will drill down up to 100m and deposit a time capsule containing an archive and space bought by backers.

The largest sunspot in 24 years has been the source of a recent burst of solar flares. The sunspot in question is called AR12192, with an average diameter of 129,000 kilometres, approximately 10 times that of the Earth! The only notable effect is some disruption to radio communications. It is fortunate for us that the coronal mass ejections, in which huge amounts of solar material are spat out, are predicted for when the sunspot has rotated to the other side of the sun.

It was not predicted that Felix Baumgartner's 2012 record-breaking jump from a helium balloon in the stratosphere, in which he famously became the first person to break the sound barrier, would be topped so soon. The end of October saw Google executive Alan Eustace fall from 41,425 metres, beating Baumgartner's record by 2,450 metres, although his freefall speed of 1,300 km/h was not enough to snatch the greatest free-fall velocity title from the Austrian daredevil.

Two new particles, each with about three times the mass of a proton, have been discovered. It was confirmed mid-way through October that the LHCb experiment at CERN's Large Hadron Collider has discovered two mesons, each made up of two quarks. These are bound together by the strong force, so a further understanding of the new particles' properties should provide insight into the nature of the fundamental force.

After a decade of flying, on the 12th November ESA's Philae spacecraft landed on comet 67P. It was designed to drill into the comet surface. However, all three systems designed to attach the lander to the comet failed, resulting in Philae bouncing twice before going into a 'sleep'. Until the comet moves closer to the sun, providing enough sunlight for the probe to recharge its batteries, scientists will analyse the large amount of data that was sent back before it shut down.
REVIEW ARTICLE

How can Angiogenesis Inhibitors Be Used to Treat Cancer?

Abstract
In this article, I will be explaining how angiogenic inhibitors are being used to treat cancer. I will discuss what angiogenesis is and how tumours trigger blood vessels to grow into them, providing the nutrients and oxygen that the tumour requires to survive. Through analysis of cutting-edge research, I will outline in detail the various mechanisms of action drugs can take to prevent angiogenesis, from changing the shapes of protein receptors to infiltrating our DNA. Finally, I will discuss their possible future in the world of medicine as well as some of their dark pasts.
Introduction

More than 1 in 3 people in the UK will develop cancer at some stage during their lifetime; worldwide, it is estimated that 7.8 million people died from cancer in 2008 [1]. So it is of paramount importance that we develop new drugs to help fight cancer. One of the main reasons cancer is so deadly is due to a process called metastasis.

What is metastasis?

Metastasis is when tumour cells penetrate blood or lymphatic vessels, then circulate through the intravascular stream and proliferate at another site [2]. A tumour requires a blood supply in order for some of its cells to break off into it and spread around the body. In the absence of such vascular support, tumours may become necrotic [3]. This means the tumour cells die due to a lack of the nutrients and oxygen that would otherwise be provided by a blood supply, thus preventing metastasis. Once the mass of a tumour has become about the size of a pea [4], its surface area to volume ratio has become too small for it to survive by diffusion alone. It must develop a series of blood vessels by stimulating a process called angiogenesis.
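The surface-area-to-volume argument above can be made quantitative with a simple geometric sketch (my own illustration, not taken from the article), treating the tumour as a sphere of radius r:

\[
\frac{A}{V} = \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} = \frac{3}{r}
\]

The ratio falls as 1/r, so each doubling of the radius halves the surface available per unit volume of cells to be supplied by diffusion, which is why a growing tumour must eventually recruit its own blood supply.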
What is angiogenesis?

Tumour angiogenesis is the proliferation of a network of blood vessels which penetrates into the cancerous growth [5]. Angiogenesis occurs naturally, for example in a developing embryo, but in a normal adult human angiogenesis rarely occurs, with endothelial cells only dividing, on average, once every 1000 days [6]. Angiogenesis is regulated by both activator and inhibitor molecules [7]. Normally, the inhibitor molecules predominate, preventing angiogenesis. The up-regulation of angiogenic factors is not enough – the negative regulators or inhibitors must also be down-regulated for endothelial cell division to take place.

Figure 1: Angiogenesis occurring in a metastatic carcinoma. Note the proliferation of many small capillaries.

How do tumours stimulate angiogenesis?

The inner cells of a small tumour become stressed when there is a lack of nutrients and oxygen. Because of this they stimulate the external tumour cells to produce angiogenic activators. The two most important angiogenic molecules have been identified as vascular endothelial growth factor (VEGF), a glycoprotein, and basic fibroblast growth factor (bFGF) [5]. Both VEGF and bFGF are over-expressed by many types of cancerous tumours [9]. VEGF and bFGF are initially synthesized within the tumour cells and are then secreted into surrounding tissues. The molecules come into contact with and bind to their specific protein receptors on the endothelial cells of the existing capillary network. The endothelial cells, now activated, begin to produce matrix metalloproteinases (MMPs). These break down the extracellular matrix, which fills the space between cells. The extracellular matrix is made up of proteins and polysaccharides, and breaking it down creates hollow spaces between cells in the tumour. This allows endothelial cells to migrate into the tumour, where they begin to divide. The endothelial cells organise themselves to form hollow tubes. The hollow tubes develop into a new vascular system, with angiopoietin-1, -2 and their receptor Tie-2 stabilizing and maturing the blood vessels [8].

Angiogenic Inhibitors

As previously mentioned, the increased production of VEGF and bFGF by the tumours is not enough to trigger angiogenesis. The activator molecules must overcome the dozen naturally-occurring inhibitor proteins that deter angiogenesis. Included in this group of angiogenic inhibitors are angiostatin, endostatin, interferon, thrombospondin, prolactin and the inhibitors of metalloproteinase-1, -2 and -3 [10]. I will be focusing on the inhibitors endostatin, interferon-α, bevacizumab and thalidomide, which all have different mechanisms of preventing angiogenesis.

Endostatin is a naturally occurring protein (a 20 kDa C-terminal fragment of type XVIII collagen) [11] that inhibits angiogenesis directly by preventing the growth of endothelial cells. It inhibits angiogenesis by binding to αvβ1/αvβ3 integrins as well as the VEGF receptors VEGFR-1 and VEGFR-2 [12]. Because the endostatin has bonded to the endothelial cell receptors, they are blocked so no VEGF can bind to them. This means the endothelial cells are not activated and thus no longer produce MMPs. Endostatin additionally drives proliferating and migrating endothelial cells to apoptosis [11]. This is useful as existing vascular networks are not harmed; only those that are beginning to grow are inhibited.

Bevacizumab is an artificially manufactured monoclonal antibody. Genetic engineering has produced a protein sequence that is 93% human and 7% murine [13]. The monoclonal antibody is able to irreversibly bind to the VEGF-A produced by tumour cells. It can do this as the molecule is structurally and pharmacologically similar to the natural VEGF antibody [13]. Because the bevacizumab is bonded to VEGF-A, the structure of VEGF-A changes. This new conformation is no longer complementary to its receptor, VEGFR-2, so VEGF-A does not bind to it. The molecules sent by the tumour cells to the endothelial cells
are not received, so the tumour cells cannot communicate with the existing vascular network.

An old drug with a new use is thalidomide. Thalidomide's original use was as a sedative in the 1950s. However, it was soon taken off the market as it was found to cause birth defects when given to pregnant women. It has recently been found to have anti-angiogenic properties and is currently being used to treat cancer in non-pregnant patients. The mechanism of action of thalidomide is currently unknown, but new research is underway to develop new models of thalidomide's mechanism. Angiogenesis is highly dependent on cell-surface adhesion proteins called integrins, and thalidomide has been shown to dramatically down-regulate these subunits, resulting in a decreased rate of angiogenesis. Integrin, like all proteins, is produced by transcription of DNA into mRNA, and is then translated from mRNA into the protein. Thalidomide can intercalate with DNA and 'slide into' gaps between DNA nucleotides [14]. Thalidomide tends to intercalate only at guanine sites because its structure is very similar to that of guanine and adenine. The binding of thalidomide in the DNA sequence (Figure 2)
disrupts the transcription of DNA to mRNA, so proteins are not correctly formed. For transcription to occur, promoter proteins must bind to the DNA strand at particular points called promoter regions. The most common sequences found in gene promoter sites are TATA and CCAAT; they are 91% prevalent [14]. The remaining 9% of gene promoter sites contain a sequence called a GC box with the code GGGCGG. This sequence is guanine-rich, so thalidomide has a strong effect. The gene coding for integrin αvβ3 is commonly associated with GC boxes [14], indicating a likely high amount of thalidomide intercalation within this gene. Transcription of the integrin αvβ3 gene is reduced, so fewer integrin αvβ3 proteins are produced. These integrin αvβ3 proteins are important in forming new blood vessels as they allow endothelial cells to 'stick' together to produce hollow tubes. By the same process, the transcription of the protein tumour necrosis factor alpha (TNFα) is reduced [14]. TNFα is involved in endothelial cell proliferation; without it, new blood vessels will not be able to migrate into the tumour.

Figure 3: Tumour cells were injected into nude mice. After 10 days, when the tumour was roughly 25 mm2, the mice were treated daily with injections of INF-α. The graph on the left shows significantly decreased tumour growth with treatment compared to without treatment. On the right, the bar graph shows that micro-vessel density is also reduced after treatment compared to the control. [15]

The fourth group of angiogenic inhibitors, such as the naturally-occurring protein interferon-alpha (INF-α), interferes with the signalling cascade produced by the tumour cells. INF-α treatment results in the inhibition of VEGF gene expression [15]. Sp1 and Sp3 are transcription factors. Transcription factors bind to promoter regions on genes, activating or deactivating the transcription of the genes accordingly. Several activators of the VEGF promoter act in Sp1- and Sp3-dependent ways [15]. Sp1 and Sp3 are particularly good at activating GC-rich target genes [16]. In one study, it was found that the proximal GC box I (found in the VEGF promoter region -66/-55) was sufficient to confer INF-α responsiveness [15]. Therefore, INF-α inhibits
VEGF transcription by inhibiting Sp1 and Sp3 transactivation activity [15]. The mechanism for this is currently unknown, but researchers are testing possible hypotheses, including the phosphorylation [17] or glycosylation [18] of the Sp1 and Sp3 proteins. Treatment with INF-α decreases VEGF plasma levels and decreases VEGF mRNA concentrations [15]. It should be noted that INF-α treatment does not decrease bFGF plasma levels.
Conclusion
Although all of the aforementioned drugs are currently in use, they all have their respective limitations. Angiogenic drugs do not aim to destroy the tumour [4]. Instead, by limiting its blood supply, they shrink the tumour and prevent it from growing again. The tumour is shrunk to a size at which it can obtain its oxygen and nutrients by diffusion. Long-term survival benefits with angiogenic treatment alone have not yet been documented [19], but coupled with chemotherapy or radiation therapy, angiogenic treatments increase survival rates [20]. Paradoxically, destroying the vascular networks to tumours can disrupt the delivery of cytotoxic drugs to those tumours [21]. In addition, some angiogenic drugs are very expensive, especially those that use monoclonal antibodies. However, extensive research continues into the potential uses and development of angiogenic drugs. Over time, pharmacologists will improve manufacturing techniques for these drugs, improving their effectiveness and reducing their cost. Angiogenic drugs have also been found to be useful in treating diseases not associated with cancer, such as macular degeneration and neovascular glaucoma [4].
Angiogenic drugs have exciting potential for both cancer treatment and for use in other diseases. It will be interesting to see how their uses and mechanisms develop in the future.
References
1) www.cancerresearchuk.org
2) Folkman J., "Tumour angiogenesis: therapeutic implications." N Engl J Med, 285, 1182-6 (1971)
3) Parangi S., O'Reilly M., Christofori G., et al., "Antiangiogenic therapy of transgenic mice impairs de novo tumour growth." Proc Natl Acad Sci USA, 93, 2002-7 (1996)
4) Folkman J., "Fighting cancer by attacking its blood supply." Scientific American (September 1996), pp. 150-154
5) www.cancer.gov
6) Denekamp J., "Angiogenesis, neovascular proliferation and vascular pathophysiology as targets for cancer therapy." Br J Radiol, 66, 181-96 (1993)
7) Dameron K.M., Volpert O.V., Tainsky M.A., et al., "Control of angiogenesis in fibroblasts by p53 regulation of thrombospondin-1." Science, 265, 1582-4 (1994)
8) Tournaire R., Simon M.P., le Noble F., et al., "A short synthetic peptide inhibits signal transduction, migration and angiogenesis mediated by Tie2 receptor." EMBO Rep, 5, 262-7 (2004)
9) Dvorak H.F., "Vascular permeability factor/vascular endothelial growth factor: a critical cytokine in tumour angiogenesis and a potential target for diagnosis and therapy." J Clin Oncol, 20, 4368-80 (2002)
10) Nishida N., Yano H., Nishida T., et al., "Angiogenesis in cancer." Vascular Health and Risk Management, 2, 213-9 (2006)
11) O'Reilly M.S., Boehm T., Shing Y., et al., "Endostatin: an endogenous inhibitor of angiogenesis and tumour growth." Cell, 88, 277-85 (1997)
12) Rehn M., Veikkola T., Kukk-Valdre E., et al., "Interaction of endostatin with integrins implicated in angiogenesis." Proc Natl Acad Sci USA, 98, 1024-9 (2004)
13) Clarke S.J., Sharma R., "Angiogenesis inhibitors in cancer – mechanisms of action." Australian Prescriber, 29, 9-12 (February 2006)
14) www.thalidomide.ca
15) von Marschall Z., Scholz A., Cramer T., et al., "Effects of interferon alpha on vascular endothelial growth factor gene transcription and tumour angiogenesis." Journal of the National Cancer Institute, 95, 437-48 (2003)
16) Black A.R., Black J.D., Azizkhan-Clifford J., "Sp1 and Krüppel-like factor family of transcription factors in cell growth regulation and cancer." J Cell Physiol, 188, 143-60 (2001)
17) Jackson S.P., MacDonald J.J., Lees-Miller S., et al., "GC box binding induces phosphorylation of Sp1 by a DNA-dependent protein kinase." Cell, 63, 155-65 (1990)
18) Jackson S.P., Tjian R., "O-glycosylation of eukaryotic transcription factors: implications for mechanisms of transcriptional regulation." Cell, 55, 125-33 (1988)
19) Mayer R.J., "Two steps forward in the treatment of colorectal cancer." N Engl J Med, 350, 2406-8 (2004)
20) Hurwitz H., Fehrenbacher L., Novotny W., et al., "Bevacizumab plus irinotecan, fluorouracil and leucovorin for metastatic colorectal cancer." N Engl J Med, 350, 2335-42 (2004)
21) Ma J., Pulfer S., Li S., et al., "Pharmacodynamic-mediated reduction of temozolomide tumour concentrations by the angiogenesis inhibitor TNP-470." Cancer Res, 61, 5491-8 (2001)
Author: Sohail Daniel
Hi, I'm Sohail, 17, from the UK. I am currently studying at Bolton School Boys' Division, and am applying to read Medicine at university next year. I have always had a passion for the sciences and couldn't wait to get to college so I could drop all of my non-science subjects! I regularly read journals such as the BMJ to keep up to date with new research in the ever changing field of medicine. In my free time I enjoy going for a round of Badminton, playing Rachmaninoff on my piano and baking French pastries!
REVIEW ARTICLE

The Risks and Benefits of Sunlight

Abstract
Sunlight, composed of radiation in the form of electromagnetic energy at mixed wavelengths, is essential to the continued survival of living creatures on Earth, whether it be autotrophs using energy from sunlight to synthesize food via photosynthesis or heterotrophs using light to obtain food indirectly. Sunlight is also the main source of the vitamin D that is so beneficial for humans, helping to keep the body, especially bone and neurological function, in good condition. While there are widely accepted benefits, it is equally imperative to be wary of the risk that comes with persistent sun exposure. Carcinomas of the skin contribute 1.6% of all cancers world-wide, and the ultraviolet light that forms part of the spectrum of sunlight has been implicated as a carcinogen.
The essential necessity that leads to the successful survival of mankind has long been disputed. Air, food, water, shelter and even love have been considered fundamental to our continued existence. However, to state that the Sun is what maintains life on Earth is no mere exaggeration, as life has evolved and flourished under its watchful glare.

In strictly scientific terms, sunlight is composed of radiation from the sun in the form of electromagnetic energy at mixed wavelengths, ranging from infrared and visible light to ultraviolet light, all traveling at a speed of around 3.0 × 10^8 m/s in a vacuum. Albert Einstein referred to this form of solar energy as photons, whose characteristics, such as colour and energy level, are dependent on wavelength.

This expanse of electromagnetic energy differs in its intensity depending on the latitude, the time of year, and the time of day. It peaks at noon, when the Earth's surface is inclined perpendicular to the sunlight and so collects the maximum amount of sunlight.

Each day, enough sunlight falls on the earth's surface to meet the world's energy demand. Many creatures, plants, animals and humans have established physiological reactions in response to the sun's spectral idiosyncrasies,
taking into consideration any daily or seasonal variations.
Autotrophs, such as plants, use energy from sunlight to synthesize their own food from inorganic substances via a process called photosynthesis.

Heterotrophs, such as animals, use light to obtain food in an indirect manner, either by consuming autotrophs, by consuming their products, or by consuming other heterotrophs. Through a process of cellular respiration, the heterotrophs convert the by-products of their feed to give them the energy needed for survival.

There are crucial health implications to human exposure to solar radiation. The main source of vitamin D is through sunlight. Adequate vitamin D status is fundamental for all-round good health. Vitamin D is necessary for bone, joint, muscle and neurological function. Low levels are associated with a number of complications and an increased risk of chronic diseases. However, there is growing evidence of harm that comes from over-exposure to sunlight. Skin cancers, including malignant melanoma, are among the more severe health effects. Mark Twain famously quoted, 'Too much of anything is bad'. It has become imperative to find a harmony between achieving enough sun exposure and avoiding an increase in the risk of skin cancer.

"There is growing evidence of harm that comes from over-exposure to sunlight"

Vitamin D, first discovered in 1920, is part of a group of fat-soluble chemicals. Several forms of vitamin D exist, but the two major forms vital for humans are vitamin D3 (cholecalciferol) and vitamin D2 (ergocalciferol). The epidermis of the skin has the capability of synthesizing vitamin D3. The production takes place through the photochemical action of UV radiation of type B (UVB), with a wavelength between 290-315 nm, penetrating the skin and leading to the formation of cholecalciferol.

7-dehydrocholesterol is the functioning provitamin in the skin, found in its highest concentration in the basal and spinous layers of the epidermis. In normal circumstances, about 25–50 mg/cm2 of 7-dehydrocholesterol is formed, which is enough to meet the body's requirements. In the presence of UVB, 7-dehydrocholesterol is converted to the intermediate isomer pre-vitamin D3, which undergoes spontaneous isomerisation to cholecalciferol (vitamin D3).

There are many causes to consider when determining the important factors that drive the generation of vitamin D. These include the season, the quantity of melanin in an individual's skin (acting as a barrier to effective UV light absorption), sunscreen, latitude, time of day, and environmental factors such as cloud cover and smog.

The liver and kidney are responsible for the conversion of cholecalciferol to its active metabolite, calcitriol, by the action of enzymes in the liver (vitamin D 25-hydroxylase) and kidney (25-hydroxyvitamin D3 1-alpha-hydroxylase).

The actions of calcitriol are considerable. In the intestine, calcitriol causes an increase in calcium and phosphorus absorption and a decrease in magnesium absorption. Indirectly, through its influence on calcium homeostasis and regulation, calcitriol increases the mineralization of bone and preserves its density. Hence, vitamin D maintains a normal serum balance of calcium and phosphate through the response of the active metabolites on target organs: the intestine, the bone and the parathyroid gland.
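To summarise the synthesis and activation route described above as a single chain (a compact sketch of the article's own steps; the 25-hydroxyvitamin D3 intermediate is implied by the enzyme names given in the text rather than stated explicitly):

\[
\text{7-dehydrocholesterol} \xrightarrow{\text{UVB, 290--315 nm}} \text{previtamin D}_3 \xrightarrow{\text{isomerisation}} \text{cholecalciferol (vitamin D}_3\text{)} \xrightarrow{\text{liver: 25-hydroxylase}} \text{25-hydroxyvitamin D}_3 \xrightarrow{\text{kidney: } 1\alpha\text{-hydroxylase}} \text{calcitriol}
\]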
Without an adequate level and intake of vitamin D, the body can only absorb 10-15% of dietary calcium, leading to a state of low calcium stores called hypocalcemia. Therefore, deteriorating levels of vitamin D predispose a child to rickets and an adult to its milder form, osteomalacia.

Osteomalacia (in adults) and rickets (in children) share the same underlying process: a failure of the mineralization of normal bone tissue.

Presenting symptoms of osteomalacia include diffuse pain of the skeleton, bone tenderness, weakness of the muscles and difficulty in walking. Rickets is among the most frequent childhood diseases in developing countries. The dominant cause is vitamin D deficiency. Symptoms of rickets include tender bone, dental defects, deformity of the cranium and spine, problems with growth and weakness of the muscles.

Not surprisingly, the treatment often focuses on increasing dietary intake as well as increasing exposure to ultraviolet B light.

Osteoporosis is defined as a decrease in the density of bone that is normally mineralized, causing thinning and increased porosity of the bone. The remaining bone is fragile, with a high risk of fractures and pain. The production of vitamin D decreases during the winter due to lower light intensity, and it has been estimated that around 80% of the UK population do not get sufficient exposure to UVB for vitamin D synthesis. Thus, people have to rely on dietary sources. It has been theorized that a level of at least 50–75 nmol/L of vitamin D is optimal to sustain bone formation.

"Strong links have been found between poor Vitamin D condition and bowel cancer, metabolic syndrome, obesity, ischemic heart disease and Type 2 Diabetes"

Strong links have been found between poor vitamin D status and bowel cancer, metabolic syndrome, obesity, ischemic heart disease and Type 2 diabetes. One study conducted by Dr. Thomas Wang revealed that individuals with low vitamin D levels had a 60% increase in the incidence of heart attacks and strokes. One theory is that vitamin D reduces arterial calcification by diverting calcium to bones and teeth instead of soft tissues such as the arteries. Arterial calcifications reduce the diameter of the arteries, for example the coronary arteries, causing a diminished flow of blood to the areas of the heart muscle they supply. This indicates a strong disposition to future heart attacks.

Seasonal Affective Disorder (SAD) is a type of mood disorder caused by decreasing light levels. It is a winter depressive episode common in areas with low light levels, such as Scandinavia. The suprachiasmatic nucleus in the hypothalamus oversees the electromagnetic energy that penetrates the eye, which is then used to coordinate the output of hormones such as melatonin and serotonin within the endocrine system. One of the many theories for the development of SAD in certain individuals is an imbalance in this delicate system of hormones and their regulation by light. People with SAD suffer from lethargy, lack of interest in activities, weight gain, psychomotor slowing, anxiety and low motivation.

Although the importance of sunlight and vitamin D cannot be taken lightly, it is advisable to limit the skin being over-exposed to UV rays from the sun. Melanoma of the skin contributes 1.6% of all cancers world-wide. The Department of Health estimates that 1.5 million skin cancers
are diagnosed annually in the USA. In the UK, 2,000 people die from the disease each year. Carcinogenesis, the process by which cancer forms, is a multistage process in which a series of genetic and external influences leads to the emergence of mutated cancer cells.
Ultraviolet light of both the A and B types has been incriminated as a carcinogen. The two wavelength bands (UVA, 320–400 nm, and UVB) infiltrate differing depths of the epidermis and dermis of the skin. UVA has the capability of deeper penetration, reaching lower levels and causing oxidative damage to components of the skin cells. UVB, on the other hand, interacts with and causes direct damage to the DNA molecule. This damages the genetic information carried by the DNA, causing molecular lesions and photoproducts that lead to the development of skin cancer. The two photoproducts are the cyclobutane pyrimidine dimer (CPD) and the 6-4 pyrimidine–pyrimidone, and they execute their destructive behaviour by interfering with DNA replication and causing specific mutations.

Actinic (solar) keratosis is common in those exposed to the sun in a frequent manner. It is an area of crusty, scaly skin on sun-exposed areas such as the face, neck and scalp. While it causes little harm in itself, it is considered to be a 'premalignant lesion'. If it is not addressed and treated, it can transform into cancer.

"Balance and moderation is the key to attain the maximum benefits that the sun gives us"

Melanomas are cancers of the skin arising from pigmented cells called melanocytes, found between the epidermis and dermis. Melanoma is usually caused by damage to the DNA through the production of photoproducts. Melanomas can be aggressive and spread to distant areas and organs, making the condition incurable and increasing the mortality rate.

Basal cell carcinoma is the most common type of skin cancer and arises from the basal layer of the epidermis. Intense, intermittent exposure to sunlight has been implicated as a strong predisposition for developing basal cell carcinoma. The prognosis for patients with this type of cancer is highly favourable, with a very high cure rate.

In conclusion, cultural attitudes and behavioural changes in the 20th century have seen an increase in the popularity of tanning and sun bathing, and lax reactions to the growing threat of skin cancer. Public education is vital, not only to reduce the risks posed by sunlight but also to understand that it would be equally harmful to avoid it. Balance and moderation is the key to attain the maximum benefits that the sun gives us.
Author: Shivakshi Ravi
I am an 18-year-old student from Rugby School, whose love of science started early. From a very young age, determined to be the next great scientist, I was often found in the bathroom mixing lotions galore in the hope that I would create 'medicines'. In my spare time I am often found playing chess or composing music on my piano. Having obtained a black belt in Taekwondo at 12, I enjoy regular sessions at my local martial arts club.
REVIEW ARTICLE

Is nuclear fusion a viable source of future energy?

Abstract
In this article, I have discussed the merits and the problems of nuclear fusion. I have tried to answer my question of viability by discussing whether or not fusion is possible, and whether it will ever be economically viable. To do this, I have explored the current level of the science behind fusion, and where it could be progressing over the next fifty years or so. I also compared fusion to other sources of fuel, to see if it was the best option, and so go into depth about the pros and cons of the different fuel sources. I concluded that fusion was needed as a replacement for fossil fuels, and that it would be the preferred method due to its safety, minimal damage to the environment, its longevity of resources, and its ability to create copious amounts of energy. I decided that fusion, whilst not economically viable at the moment, would become so within the next forty to fifty years, and so answered my question, as I concluded that fusion is a viable source of energy for the future.

Background image: Pre-Amp at NIF [NIF]

Introduction

The world is believed to be currently on the brink of an environmental and an energy crisis. Our dependency on non-renewable and polluting energy sources has led to a dramatic increase in carbon dioxide and sea levels. The world needs a solution to the problem: an energy source which can fill the gap left by fossil fuels, whilst providing us with renewable and clean energy.

This has led to the emergence of renewable energy sources, but these have so far been unable to stop our dependency on fossil fuels. The most common sources, wind and solar, simply are not efficient enough and are too expensive. There is, however, an alternative which, should it be successful, would help us harness the power of the sun. Nuclear fusion is still in its infancy. However, it has the enormous potential to change the world as we know it. It is clean - it emits no carbon dioxide, and leaves behind very little radioactive waste. It can dwarf all other power plants in its energy production.

Unfortunately, at the moment there are no commercial reactors, only ones built for research. There have been many false starts, as overconfident scientists claim fusion is only 40 years away, but so far each claim has proven to be over-ambitious. During this economic crisis, it is not always easy to justify spending billions on research on fusion, when there have been so many failures.

What is nuclear fusion?

Nuclear fusion is 'a reaction in which light atomic nuclei fuse to form a heavier nucleus, releasing much energy'. [1] The process which would take place in a fusion reactor involves the fusion of hydrogen, or more specifically of its isotopes deuterium and tritium. The reaction generates energy because of the difference in binding energy.

The binding energy is 'equal to the energy needed to split the nucleus into its individual nucleons.' [2] Binding energy corresponds to the difference between the mass of the nucleus and the mass of its constituent nucleons, which is known as the mass defect. This mass defect is equated to energy by the formula E = mc², E being energy, m being the mass defect, and c being the speed of light. This is the energy which would be released in forming the nucleus, as shown on the binding energy graph. [3]

There are two directions from which energy can be obtained: by the disintegration of the nucleus, to make it smaller, which is nuclear fission, or by fusing nuclei together to form
larger atoms, which is nuclear fusion. In both cases, the aim is to form nuclei which have a greater binding energy per nucleon, and therefore a loss of mass and a liberation of energy.
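As a worked illustration of the mass-defect formula quoted above (the numbers below are standard textbook values, not taken from the article), consider the deuterium-tritium reaction that the article says a reactor would use:

\[
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n
\]
\[
\Delta m = (2.014102 + 3.016049) - (4.002602 + 1.008665) = 0.018884\ \mathrm{u}
\]
\[
E = \Delta m\, c^{2} \approx 0.018884 \times 931.5\ \mathrm{MeV} \approx 17.6\ \mathrm{MeV}\ \text{per reaction}
\]

So roughly 0.4% of the reactants' mass is converted into energy, which is why fusion fuel has such a high energy density.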
The different reactors
There are currently two main ways of producing fusion power: magnetic confinement and inertial confinement. [4]

Left: Magnetic Containment Reactor [CCFE]. Right: Experimenting inside the target chamber at NIF [NIF].

Magnetic confinement uses a strong magnetic field to pull the nuclei very close together and create a high temperature. Inside the reactor there is a deuterium-tritium plasma, which is confined by the magnetic field. The magnets slowly compress the hydrogen closer together, and a current is sent through the gas, heating it to enormously high temperatures of around 10,000,000 °C, which is considerably higher than the temperature found on the surface of the sun.

Inertial confinement takes a different approach. Powerful lasers are aimed directly onto the surface of a deuterium-tritium pellet. The outer surface is heated to enormously high temperatures, causing it to become ablated, and this material
flies outwards. This then causes the pellet to compress inwards, due to Newton's third law. The material becomes incredibly dense, around 1,000 times its liquid density, and the high pressure also heats the inner material, so conditions are created in which fusion can occur.

There are now several fusion research reactors around the world. NIF (the National Ignition Facility) works using the inertial confinement technique. NIF was designed to produce net energy, and has recently passed this landmark [5]. Despite this success, NIF has highlighted some of the major problems with the project: the plant was completed in 2009, six years late, and the cost ballooned from $1 billion to $4 billion [6]. JET (the Joint European Torus) uses magnetic confinement. ITER (the International Thermonuclear Experimental Reactor), for which the target for completion is 2019, aims to generate 10 times the amount of energy that is put into the reactor. DEMO and ARIES-AT are reactors which are planned to be built around 2030; it is hoped they can be nearly competitive with fossil fuels, running at $0.05 per kilowatt-hour, and pave the way for commercial fusion power.
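The '10 times the energy put in' target mentioned above is usually written as the fusion gain factor (a standard definition, not given in the article):

\[
Q = \frac{P_{\text{fusion}}}{P_{\text{heating}}}, \qquad \text{ITER's goal: } Q \geq 10
\]

Breakeven corresponds to Q = 1; a commercial plant would need substantially more than this once the electricity used to run the whole plant is accounted for.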
“Fusion is a clean, virtually limitless source of energy, with a high energy density”
Are there better alternatives?
Renewable energy would, it would seem, be the perfect answer for many to the question of which energy source should become dominant, simply due to its zero carbon emissions and the fact that its sources of energy will never run out (at least until our sun dies, but energy will be the least of our worries then). However, market forces are currently stifling their use. It simply makes little economic sense to use solar panels, running at relatively high cost, when coal can run for much less. Furthermore, they are really quite inefficient: the most efficient solar panels are only 41% efficient, losing the rest through heat. Of course, the science of renewables will improve. Solar panels will become more efficient, and new ingenious ideas will be brought forward, such as the snake-like Pelamis offshore wave energy converter. More energy can thus be made from these renewables, but they are flawed. Most of the problem is that there is only a certain amount of energy that can be taken from these resources. However efficient they may be, to generate all of the energy that the world needs they would take up huge amounts of space in a world where there is increasingly little of it. Therefore, I do not believe that renewables, solar included, are viable sources of energy for the future, largely for economic and practical reasons.
Can already well-established fuels be the answer?
Nuclear fission, using uranium, is carbon neutral and produces a large amount of energy, so it is surely another competitor to fossil fuels. Uranium is in less danger of running out than coal and oil, and so, certainly until fusion can be fully operational, it would seem to be an incredibly viable source of energy. However, like everything, it has flaws, and in fission's case they are quite major drawbacks. For a start, fission's radioactive waste is probably more dangerous than if it were to pump out carbon dioxide. Its radioactive waste will last for many millions of years, and will render the area it is stored in (and indeed the area around the power station) completely unusable. An example of this is America's one nuclear waste site in New Mexico, which has been subject to two leaks, as recently as March 2014 [7], which shows that the waste disposal is not safe. However, the major problem with fission is the devastation which it can cause if something is
to go wrong. Whilst the safety at a reactor is quite good, a meltdown would cause far more destruction than an explosion at a coal-powered station. Chernobyl is the obvious example of this. The explosion isn't even the part which causes the most damage: the immediate explosion left 31 people dead, but the long-term effects from cancer caused by high radiation levels are thought to have led to around 4,000-30,000 deaths. The Chernobyl disaster has left a devastating image of nuclear fission in the public's consciousness. Safety measures can and have been put in place, of course, to reduce the risk, but the recent Fukushima disaster showed that these methods are certainly not infallible, and so I believe that the risk still outweighs the benefits. Therefore, whilst nuclear fission can compete with fossil fuels, it seems unlikely ever to replace them fully due to the huge risk which is attached to them. They are therefore not viable as a source of energy for the future, not because of their economics, as they are quite competitive, but because of their inherent dangers, which outweigh the financial gains.
Is fusion the answer then?
This brings us back to fusion. Fossil fuels will eventually run out, and it seems that renewable sources can never fully replace them. They are too inefficient to be truly economically viable, and they cause their own separate problems that outweigh the benefits of being carbon neutral. Nuclear fission is simply too dangerous to become the sole or major provider of our energy, and its waste, while not contributing to global warming, is a major hazard to all kinds of life. Fusion would therefore be the answer. It is a clean, virtually limitless source of energy, with a high energy density. It has always seemed to be just around the corner, but now it seems that this is really the case. Fusion is entirely possible, being carried out in the stars, and has been achieved in the laboratory. With recent important milestones being reached, such as net energy being produced, real progress is seemingly being made. To fully answer my question, however, fusion has to be economically viable, rather than just possible. Whilst this is a question that I have realised can never be answered with 100% assuredness, I believe that this will happen. This will be due to energy prices continuing to rise, and the price of fusion energy being brought down thanks to refinements and perhaps even breakthroughs in technique. When this 'crossover' point is reached, when fusion is more economically viable than other fuels, is also hard to judge. Science will always come along in leaps and bounds. However, at the current rate, this could be achieved before the mid-century mark. DEMO is intended to be almost comparable with fossil fuels and to be finished by 2033 [8]. Should DEMO work as hoped, it could lead to the widespread use of fusion.
References
1) Concise Oxford English Dictionary, Eleventh Edition (Revised). Oxford University Press Inc, New York
2) Tom Duncan (2000), Advanced Physics, Fifth Edition. John Murray Publishers Ltd
3) http://www.euronuclear.org/info/encyclopedia/bindingenergy.htm
4) http://www.world-nuclear.org/info/Current-and-Future-Generation/Nuclear-Fusion-Power
5) http://www.theguardian.com/science/2014/feb/12/nuclear-fusion-breakfthough-green-energy-source
6) https://www.llnl.gov/news/newsreleases/2009/NR-NNSA-09-03-06.html
7) http://www.bbc.co.uk/news/world-us-canada-26441154
8) http://www.efda.org/fusion/
Author: James Pye
I am an 18-year-old pupil from The King's School Canterbury. My interests in science lie in particle and thermonuclear physics, which I am hoping to study at university.
REVIEW ARTICLE

Chocolate

I carefully opened the door of the fridge, trembling with excitement, and I saw this: a chocolate bar. Chocolate, the very word makes mouths water. What is the one food which is sold in almost every shop? Chocolate, in my opinion, is the best food in the world because of the way it melts in your mouth. After a tiring day, don't you just want to eat chocolate? Don't you remember the time you started chewing a delicious bar of chocolate, and then suddenly everyone within sight started pleading for some? Some say the best invention is the wheel; I say it's chocolate.

There is evidence of chocolate beverages dating back to 1900 BC. Many of the magnificent Mesoamerican peoples made chocolate beverages; the mighty Mayans and astounding Aztecs made it into a drink known as xocolātl, a Nahuatl word meaning "bitter water". This popular drink of the Central and South American peoples wasn't discovered by Europeans until the 16th century. Christopher Columbus and his son Ferdinand discovered the cacao bean on the 15th August 1502, in a large native canoe. The Spanish conquistador Hernán Cortés may have been the first European to encounter chocolate, though, as an exquisite drink forming part of the afternoon dinner routine of Montezuma.

Chocolate is made from the bitter beans of the cacao tree. Chocolate expert Clay Gordon states that "Ecuador has excellent cacao", but it is also grown in many other countries. The making of chocolate starts in a tiny tropical tree: Theobroma cacao. The tree grows pods
which contain 30 or 40 seeds. First, the pods must be harvested, which is accomplished by weary workers with razor-sharp machetes, sometimes attached to long poles. The pods are then carefully opened by the workers. Then the beans are placed in earthen pits or wooden bins, covered with mushy banana leaves and left to slowly ferment. The fermentation miraculously changes the bitter taste of the beans, making them more edible. The beans then sunbathe for anywhere from a few days up to a week, depending on their quality. The flavour of the beans changes dramatically during this period. Finally, when the beans are dry, they are shipped to a factory, where they are made into a crunchy chocolate bar.

Once the cacao beans arrive at the factory, they are made into chocolate. First, the beans are sifted for foreign objects. Would you want part of a pickaxe in your delightful crunchie? The cacao is then weighed and sorted by type in order to make sure the manufacturer knows exactly what type of cacao is going to be in the
delicious chocolate. To bring out even more flavour, the cacao beans are then roasted in large, rotating ovens at temperatures of around 210-290 °F. The roasting lasts for around an hour and a quarter, and the beans emerge darker. The cacao beans' outer shells are then cracked and blown away, leaving cacao nibs. The cacao nibs must then go through three processes: crushing, grinding, and sweetening. The nibs are crushed and ground into chocolate liquor, and sweetened with sugar, cocoa butter, vanilla, and milk. The mixture is then run through a conch that mixes and mashes the chocolate. Now some more cocoa butter and soy lecithin are added to make it that smooth, silky bar which you gorge yourself on. The chocolate is then stirred, cooled, and heated, and this is repeated until the chocolate has a magnificent, glossy look. Finally, the mixture is poured into a mould, slowly hardened, and then at last ready to be eaten. YUM!!!

Images: cacao nibs; pods from a cacao tree.

Chocolate, as well as making you happy and helping you relax, contains some substances which are good for you. Cacao seeds contain significant amounts of naturally occurring flavonoids; flavonoids are connected with a reduced risk of cardiovascular disease and some cancers. This means that, since chocolate is made from cacao seeds, eating chocolate could reduce the chances of cancer. Chocolate also makes people happy because the sugar and caffeine in chocolate boost endorphin and serotonin levels in the brain. Dark chocolate is high in magnesium, which calms nerves and helps relaxation. It also contains tryptophan, which causes drowsiness. When the two are combined, they help minimise stress and anxiety. Chocolate is definitely popular, with the UK consuming 605,000 tonnes a year, and even as far back as 1700 the people of Madrid consumed nearly 12 million pounds a year. So, if chocolate makes you happy, promotes relaxation, lowers the risk of cancer, and is so popular, why not eat chocolate?
Author: Daniel Chow
Hi, I'm Daniel. I am 12 years old and in my second year at Rugby School. As well as taking an interest in science, I, along with the rest of my school house, raise huge sums of money for Children in Need with our annual Pudsey Bear Cafe.
When Half of Your World is Forgotten REVIEW ARTICLE
When Half of Your World is Forgotten Hemispatial Neglect Abstract
With our population continuing to live for longer, doctors are beginning to learn about new conditions that affect our ageing population. Hemispatial neglect is a condition that causes patients to ‘ignore’ half of their space after a trauma to the brain, which removes the ability to respond to sensory stimuli on the affected side. This article will discuss the causes of and treatments for hemispatial neglect, including its disabling effects on the individual.
At the beginning of the 20th century, the average life expectancy was about 50 for a female and 47 for a male. But in the space of just over 100 years, this has soared to about 81 for a female and 77 for a male. Changes to our general lifestyle, such as improved hygiene and diet, and our improved medical knowledge, like the ability to use antibiotics to treat disease, have contributed to these significant changes for our society. This increased life expectancy means that we are more susceptible to conditions that were previously a very low threat. For example, although they can and do occur at all ages, three quarters of all strokes happen after the age of 65, so strokes are usually considered a higher risk for more elderly people. In addition, the number of patients suffering from ischemic strokes, the strokes that block blood vessels in the brain, is increasing, so clot-busting therapies are being administered to save these stroke patients. More people than ever are surviving strokes, enabling us to become more aware of post-stroke conditions that would previously have been unheard of.
Hemispatial Neglect
Hemispatial neglect is one such condition that results after traumas to the brain, typically following a stroke. It is an enormously debilitating condition that causes a patient’s
brain to essentially ignore all stimuli from one side of their surroundings and, in severe cases, to ignore limbs on the neglected side. Neglect is more common following damage to the right side of the brain (meaning that the left side of space will be neglected), although it has been documented following damage to the brain’s left side. There can be a substantial range of neglect severity, which could cause patients to neglect:
“The effects of neglect on the individual are quite substantial. Simple tasks like reading and writing become more complicated”

Their own body or personal space – Patients may ‘ignore’ any stimuli coming from one or both of their limbs on the neglected side. They may refuse to accept that the limb is theirs, increasing the risk of injury because pain and sensory stimuli from the neglected body part go unnoticed.
The space within their reach – This could cause a patient to ignore objects immediately outside of their line of vision, like books on the table for example, so they may believe they are losing items, leading to feelings of paranoia.

The area beyond the body’s current contact – The patient may have control over their limbs and recognise objects in their immediate vision, but ignore larger, further away objects, like other people or cars, which could pose a threat.

The Effects of Hemispatial Neglect

The effects of neglect on the individual are quite substantial. Simple tasks like reading and writing become more complicated, as patients will completely ignore the left side of a page, so the words will become detached and meaningless. Patients suffering from hemispatial neglect have also been known to apply makeup to half of their face, eat half of their plate of food, and are often at increased risk of injuring the contralesional side of their body (the side that they neglect) if they collide with obstacles like door frames or walls. In addition, it can be hard for a patient with hemispatial neglect to integrate fully back into society, as simple tasks like crossing a road will become dangerous if they forget to check for oncoming traffic on the neglected side. The neglect will also make patients almost entirely dependent on others and incredibly isolated from society, as tasks like efficiently carrying out personal care by themselves will become near impossible. Neglect patients will also lose their eligibility for a driving license and are often not entitled to an electric wheelchair, which places further obstacles in the way of their social recovery, even if good physical recovery is made.

It is also the patients’ family who are affected by neglect. Sufferers are often in complete denial about their condition, so it can be very tough on family members to accept the new challenges faced by their loved one. Neglect is usually triggered by a traumatic brain injury, so the family will be dealing with the aftermath of this, plus this new, difficult lifestyle. The patient may also mistakenly ignore family relatives if they are standing on the neglected side, which can be heartbreaking for all involved.

Figure 1: Line A shows the expected bisection from a patient who is not suffering from neglect. However, lines B and C show the bisection attempts from moderate and severe neglect impairments respectively. The patient will bisect the line they see, be it the full length or a shortened length, so this is used as a clear method of testing the neglect’s severity.

There are a number of tests currently in practice which are being used to diagnose hemispatial neglect. To determine the severity of the patient’s neglect, a 30 centimetre line may be drawn on a
piece of paper in front of the patient, who will be asked to bisect the line. The professionals will expect the line to be bisected at the midpoint of 15 centimetres, indicating that the patient is not suffering from neglect. If the patient bisects the line at 17 centimetres (from the left), the doctors will know that the patient is seeing a line that is only about 26 centimetres long, which indicates mild neglect. However, if the line is bisected at about 28 centimetres from the left, the patient will be seeing a line that is only about four centimetres long, showing a neglect of a far more severe nature.
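The arithmetic behind this test is simple enough to sketch. If we assume, as a toy model, that a patient with left-sided neglect perceives only the right-hand portion of the 30 cm line and faithfully bisects what they perceive, the perceived length can be recovered from the bisection point. The short Python sketch below is only an editorial illustration of that arithmetic, not a clinical scoring tool; the function name and the simplified model are assumptions.

```python
# Toy model of the line-bisection test described in the text.
# Assumption: the patient perceives only the right-hand part of the
# 30 cm line, from some point x up to the 30 cm mark, and marks the
# midpoint of the part they perceive.

LINE_LENGTH_CM = 30.0

def perceived_length(bisection_point_cm, line_length_cm=LINE_LENGTH_CM):
    """Estimate how much of the line the patient sees, given where they
    bisected it (measured from the left end of the full line)."""
    # If the perceived segment runs from x to line_length, its midpoint is
    # (x + line_length) / 2, so x = 2 * bisection_point - line_length.
    x = 2.0 * bisection_point_cm - line_length_cm
    return line_length_cm - x  # equivalently 2 * (line_length - bisection_point)

# The examples quoted in the text:
print(perceived_length(15))  # 30.0 cm seen -> no neglect
print(perceived_length(17))  # 26.0 cm seen -> mild neglect
print(perceived_length(28))  #  4.0 cm seen -> severe neglect
```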
Figure 2: This is another method of diagnosing neglect. The illustration shows how a patient suffering from neglect may sketch a clock face (on the right), compared to an average drawing of a clock face (on the left).
As we are still in the process of learning about hemispatial neglect, the treatments are still in the pioneering stage. Currently, doctors and hospital professionals try and bring the patient’s attention to their neglected side, in the hope that this will make them acknowledge and accept it. Rehabilitation can also be carried out by a multi-disciplinary team including doctors, psychologists, physiotherapists and occupational therapists to try and reduce the implications of neglect. I was able to observe this rehabilitation first hand when I spent time in Dr Sakel’s Neuro-Rehabilitation department, based in Canterbury, where I learnt about how possible new technologies involving electrical currents may help in the treatment of hemispatial neglect. Dr Sakel’s team has researched the benefits of stimulating the brains of such patients, which has so far been successful. More research is planned into this. While hemispatial neglect is not yet completely understood it is hoped that, as our medical knowledge and treatments advance, we will one day have all the answers regarding this fascinating and intriguing, yet disabling condition.
Acknowledgements
I would like to thank Dr Mohamed Sakel and the Neuro-Rehabilitation team at Kent and Canterbury Hospital for inviting me into their department and giving me the opportunity to learn about neurological conditions such as hemispatial neglect. This has provided me with a very insightful view into aspects of medicine that I hadn’t previously explored, and I have thoroughly enjoyed the journey.
References
1) http://www.localhistories.org/
2) http://www.strokecenter.org/
3) http://www.theguardian.com/science/
4) http://en.wikipedia.org/wiki/Hemispatial_neglect
5) M Gallagher, D Wilkinson and M Sakel. Hemispatial Neglect: Clinical Features, Assessment and Treatment. British Journal of Neuroscience Nursing 2013;9:273-7. http://www.magonlinelibrary.com/doi/abs/10.12968/bjnn.2013.9.6.273
6) D Wilkinson, O Zubko, M Sakel, S Coulton, T Higgins and P Pullicino. Galvanic Vestibular Stimulation in Hemi-Spatial Neglect. Frontiers in Integrative Neuroscience, Special Edition 2014 (free access)
Author Sophie Stephens I am studying Biology, Chemistry, Maths and History at Simon Langton Grammar School with the hope of pursuing a medical career in the future. I am interested in all areas of science, particularly anatomy but also astronomy. My interest in neurodegeneration inspired the writing of my article. I also enjoy playing the piano.
INTERVIEW
The Real Meaning of Science with Liz Bonnin
The Real Meaning of Science with Liz Bonnin Young Scientists Journal's editorial team leader Claire Nicholson interviews BBC Television presenter Liz Bonnin.
On Thursday 6th November, I had the pleasure of interviewing BBC television presenter Liz Bonnin. Liz has presented many TV programmes over her career, from Bang Goes the Theory (which she has presented since it first aired in July 2009) and Operation Snow Tiger to specials such as the BBC Horizon series and BBC Stargazing Live. She has used her biochemistry background to inspire people of all ages into science. “…when I’m sent to do anything to do with science or natural history, it’s pretty much a stand out moment for me, it’s like a dream come true…” Throughout Liz’s TV career she has had countless stand-out moments, and she mentioned a number of events which stay in her memory. One was the ambitious challenge last year when she went to Norway for BBC Stargazing Live on the hunt for the aurora. The challenge combined an incredible amount of luck with a huge amount of technology to capture the aurora live, from an aeroplane, in the couple of minutes she would be on air. Throughout the interview she talked about many of the experiences she has had the amazing opportunity to take part in. She has managed to use her biochemistry background to make lots of programmes on the intelligence and behaviour of animals. Making these programmes she has encountered grey whales in Mexico that came up to the boat to present their calves, play around the boat and be scratched. She has filmed a programme on Siberian tigers and been lucky enough to see the Amur tiger in Russia, one of the most endangered species in the world, with only around 350-400 left. The indigenous peoples of the Russian Far East forbade killing the tigers, whom they called “Amba”, and considered a meeting with the striped cat a sign of bad luck. The world of science is full of inspirational people: some get ample media coverage, but there are also plenty of unsung heroes. In particular, Liz mentioned a Russian scientist called Viktor Luckureski. Liz said that he is “obsessed with tigers and leopards… one of the most passionate generous kind…human being that I’ve ever met”. As she said, it is people like this, who work long hours and dedicate their lives to protecting our natural world yet gain next to no credit for their work, who are the real heroes of science. One example of this came when she talked about the big BP oil spill. Liz did a piece on the spill for Bang Goes the Theory where she travelled to the Gulf of Mexico. She mentioned one hangar, incredibly humid and very hot, where scientists were working tirelessly night and day to save the thousands of birds that had been affected. In this massive ‘factory’ pelicans were being washed, warmed and fed, and while the crew were filming they were just getting on with it – again, with no recognition.
I also asked Liz how we can make the image of science better in schools. To her, the appeal of science comes completely naturally: she said that “science by its very nature doesn’t have to be sold because it’s just so cool”, and that science shouldn’t have to be a ‘subject’ because it’s about describing the world around you. Many people still picture the old image of science, with an old man in a lab coat and zero creativity, but as Liz mentioned there are hundreds of jobs you can do as far as science is concerned; what we need to do is reignite people’s curiosity about the world. “It’s almost like reminding people what they were like when they were children and they couldn’t stop asking questions and they were so excited about the world. That’s what scientists are, and that’s what us science communicators are trying to do: remind them of their childish enthusiasm and curiosity about the world.”
Author Claire Nicholson Editorial Team Leader
Watch the full interview on YouTube: just scan this QR code with your smartphone. http://youtu.be/qeYWrRxXI6s
REVIEW ARTICLE Great Scientific Discoveries - Genius or Time & Place?
Are Great Scientific Discoveries the Product of a Particular Place and Time, or the Result of Exceptional Genius? Abstract
The question posed asks whether time and place, or exceptional genius has the greater part to play in the advancement of scientific understanding. In this essay I will try to gauge the extent of each factor in such discoveries, and how this affects the discoveries that are made.
We must first examine the nature of scientific discovery, and how these breakthroughs come about. The most commonly accepted model for the advancement of scientific thinking is a process of observation and application of knowledge. Simply put, if a scientist observes a change in one variable and can relate this to another variable or factor, then conclusions can be reached which serve to further the understanding of the scientific community as a whole. In this fashion, knowledge is accumulated and improved upon, as more natural phenomena become rationalised and explained. For this to be true the scientist must have some cognitive skill with regard to any observations made, and either the knowledge or the understanding necessary to turn those observations into something of external impact. This ability will be affected by a number of factors, and will be influenced by the scientist’s intellect, as well as by external factors such as time and location. Louis Pasteur is credited with the quote: “In the fields of observation, chance favours only the prepared mind” [1]. What he meant by this is that observations will only yield meaningful conclusions, and lead to a discovery, if the mind of the observer has been properly prepared beforehand. However, it is uncertain what he meant by “prepared mind”. This can be interpreted as the raw intellectual ability of the scientist, or the standard of knowledge possessed by the scientist (which is heavily influenced by the time period in which a discovery takes place). Regardless, it is the ability to make connections
such as these which account for most, if not all, of scientific discoveries since the most fundamental observations made by the Greek mathematicians of 3rd century BC. Although this model for scientific discovery is rational (it follows that an observation made by a competent mind will break new ground if given the right application), how can we account for the fact that ancient mathematicians had very few preformed conceptions upon which to base their discoveries, and yet made fundamental contributions in the fields of science and mathematics? The fact that these earliest physicists had no laws on which to base their work makes their formidable intellect and imagination all the more impressive. Indeed, this only serves to enforce the argument that such discoveries are the product of observational genius, as time and place obviously had very little influence on such discoveries. Take the example of Archimedes, who is often acclaimed to be the first physicist and applied mathematician. Original and imaginative, whole branches of scientific thought began with him, which is staggering given the lack of works on which he could base an understanding of the natural world from. There were, of course, great technologies pre-existing Archimedes by thousands of years, and there may have been other contemporary physicists working on ideas similar to his. Despite this, Archimedes is widely regarded as one of the leading scientists of classical antiquity, with vast contributions to physics and mathematics- including an explanation of the principle of the lever. As he
“It can only be concluded that intellectual ability was by far the more important factor in Archimedes’ contributions”
was able to make these discoveries irrespective of his place in time, it can only be concluded that intellectual ability was by far the more important factor in his contributions, as there was no scientific knowledge-base on which to build. On the contrary, in Melvyn Bragg’s book titled On Giants’ Shoulders, Professor Lewis Wolpert makes the point that the geographical location of Ancient Greece was fundamental to the development of scientific thought, mainly due to the Greeks’ access to Euclidean geometry. Wolpert contends that this understanding of geometry was fundamental to the instigation of scientific thought, and this could only have happened in the crucible of Ancient Greece. As such, the global position of Archimedes at the time of his discoveries is significant, despite his exceptional ability. Moreover, whilst time (and by extension, accumulated knowledge) may have had little influence on Archimedes’ discoveries, it undeniably had a great impact on later advancements. For instance, any later scientists that lack Archimedes’ insight and innovation would require this ‘knowledge-base’ on which to build theories, apply observations, and reach a higher ground of scientific understanding. Newton alludes to this concept in his famous quote: “If I have seen further it is by standing on the shoulders of giants” [2]. Indeed, most advances in modern science require basic laws
and concepts around which conclusions can be formed, and even exceptional academics such as Sir Isaac Newton have relied on the works of previous scientists. Researchers cannot be expected to make discoveries independent of any work that precedes them, and this is only accentuated by the fact that certain discoveries require technologies, such as transmission electron microscopy, that are the direct product of time and place. We also cannot ignore the impact that circumstance has over a person’s intellectual capability. Exceptional genius can often be quelled or at least subdued if a person’s position in society or geographical location has a negative impact on their ability to exercise such traits. For instance, up until recently the position of ‘scientist’ was not a particularly prestigious or well-paid job, a fact which Antoine Lavoisier resigned himself to when he became a tax-collector, studying science only part-time. This raises the concerning notion that even if a person possesses exceptional academic ability, they may never have the chance to utilise this capability and contribute to the scientific community. This, again, gives weight to the argument that scientific discoveries are the product of place and time. It is noteworthy, however, that there are examples of people throughout history who have become great scientists from very humble
“Wartime is widely regarded as a catalyst for great scientific invention”
A scientist uses special apparatus to measure the focus ranges and scales of binoculars and telescopes in Teddington 1944. [IWM]
beginnings. The most prominent example of this is Michael Faraday, a bookbinder’s apprentice who made ground-breaking contributions to the fields of electromagnetism and electrochemistry, despite leaving school at the age of thirteen. As such there are clearly scientists whose exceptional intellect can manifest itself despite any limiting factors concerning background and education. Of course, circumstance can have the exact opposite effect on such affairs. When considering the conditions that are particularly conducive to scientific advancement, one example immediately springs to mind: wartime is widely regarded as a catalyst for great scientific invention, as many of history’s greatest discoveries have occurred, often directly, due to the pressures that only a nation at war can create. An example of such an innovation is radar, which was discovered and secretly developed by several nations before and during World War II. Radar uses a transmitter that emits radio waves, allowing objects of high electrical conductivity to be detected, with their relative movement ascertained through an appreciation of the Doppler effect. Developed pre-war, radar showed great potential as an object-detection system that could be used to anticipate enemy ships and aircraft. Due to this potential and the
rising pressures in pre-war Europe, projects such as this were given status and funding by the government that may not have been awarded during peacetime. As such, breakthroughs were made in the science behind radar technology as it was developed first at the Naval Research Laboratory, and then at the British Air Ministry. This work culminated with the design and installation of aircraft detection and tracking stations along the East and South coasts of England by the outbreak of WWII in 1939. The technology provided vital information in advance which helped the Royal Air Force win the Battle of Britain in late 1940. Whilst this scientific discovery was developed to be used as an air-defence system, it has various other applications such as radar astronomy, and has enabled scientists to make observations that were previously obscure without the proper equipment. As this discovery would have taken much longer to culminate had wartime not have been such a pressure on research, it is apparent that contemporaneous matters such as this have a great influence over scientific discovery. This is not only true in areas of military technology, but also in medicine and human biology, as wartime provides a huge influx of patients that require medical attention. This may not be significant
in modern times, with a near comprehensive working knowledge of the human body, but in a less medically informed time period, a large patient base can have a huge impact on the understanding of human biology.
As always there are instances of ambivalence, when it is unclear which scientific discoveries are the product of time or place, and which are the product of exceptional genius. I would argue that the discoveries made in Renaissance Italy epitomise such a situation. This is because Italy, and essentially the city of Florence, was the early epicentre of the Renaissance period, and clearly a time and place of driven and committed innovation. Such an environment would have a huge influence over scientific advancement, as any breakthroughs would inspire and impel others to do the same. Interestingly, such an exciting time driving the pursuit of scientific knowledge was propagated partly by the raging military campaigns of the period. As alluded to earlier, wartime can be a huge pressure on the advance of scientific understanding, and this seems to be the case in Renaissance Italy. As Paul Strathern contends in his book The Artist, the Philosopher and the Warrior, “…as in ancient Greece, such achievements frequently occur when the civilisation that produced them is under threat”. [4] It is true that the Italian city states of the time were weak and divided, but far from undermining the academic stability of the region; this prevented Italy from lapsing back into a time of intellectual stagnation. However, the political pressures were not the only thing driving the Renaissance, and once again we appreciate the importance of exceptional genius in scientific discoveries as it becomes clear that a number of exceptional characters, both political and intellectual, were responsible for a period of such intense discovery. Most would consider Leonardo da Vinci as the most prominent innovative force, as he was an unparalleled visionary and the most talented military engineer in Italy at the time. He is the perfect example of a genius who transcends his point in time, and whose advanced thinking did not depend on the current, and often unfounded, scientific method. Throughout his life he produced more than 200 anatomical drawings of astonishing detail
and accuracy, in the course of which he made many discoveries far ahead of his time, as well as correcting many conceptual errors that had persisted through the medieval era since classical times. According to Strathern, da Vinci was able to detect the circulation of the blood some 150 years before it was fully discovered and explained by the British physician William Harvey, and even came close to determining the difference between arterial and venous blood, remarking: “The blood which returns when the heart opens again is not the same as that which closes the valves of the heart”. [4]
Da Vinci was not restricted by contemporaneous matters in his inventions, and assuredly possessed an exceptional genius and imagination. This contention is strengthened by Strathern, who states that: “Instead of being taught what to do, what to think, he dreamed of what he wanted to do, and no formal schooling persuaded him otherwhise”. [4] The fact that Da Vinci and many of his contemporaries were independent thinkers facilitated this time of scientific insight, and the intellectual prowess of such people was a huge enabling factor in Renaissance Italy’s intellectual domination. As such, we can view the Renaissance as a mixture of the two influences. It was obviously a period of intense discovery and revolutionary thinking, but this was brought about by a collection of geniuses who drove their respective intellectual communities to succeed. In this we see that the influences of circumstance and genius are far from mutually exclusive, but rather more symbiotic, as the two can work together to produce a period of fervent curiosity and scientific innovation. In the same way that time and place have an influence over scientific discoveries that are made, some are influenced directly by chance or coincidence. For example, it was a fortuitous accident that Alexander Fleming isolated the world’s first antibiotic, revolutionising modern medicine. In fact, it was completely by chance that Fleming stumbled upon the anti-bacterial effects of the fungus Penicillium notatum, when he left cultures of staphylococci to stand in his laboratory. Upon return he discovered that one of the cultures had been contaminated, causing the colonies of staphylococci immediately JUL-DEC 2014 I ISSUE 16 I WWW.YSJOURNAL.COM
“The transition between scientific models can only take place if previous concepts are challenged”
Copernicus [Science Photo Library]
surrounding the growth to be destroyed. This simple observation marks the start of modern antibiotics, and happened almost completely by chance. It was the archetypal ‘right place, right time’ discovery that has proved to be instrumental in the development of modern antibiotics, saving many millions of lives since mass production began in 1945. This is not to say that a certain level of scientific nous was not required for the discovery of penicillin: Fleming had already developed a reputation as an excellent researcher by the time of penicillin’s discovery, and was well known amongst other scientists for his earlier work. He had spent years searching for natural anti-bacterial agents following World War I, after observing the inefficacy of wartime antiseptics in the treatment of sepsis. Without such research being conducted, the culture within which the Penicillium notatum developed would not have been able to instigate such a monumental chain of events. More importantly, had Fleming not been a distinguished researcher and pharmacologist, the discovery of penicillin may well have simply passed him by. Indeed, the application of knowledge and experience is crucial in the face of such an observation as, to the untrained eye, a “halo of inhibition around a blue-green mould” [5] is not immediately recognisable as one of the greatest scientific discoveries of the early 20th century. Regardless, it may still be argued that chance has a large part to play in any scientific discovery, and that even a scientist of exceptional calibre would require a certain moment of inspiration to realise the implications
of any observations made. This can be applied to Fleming’s discovery in that even a veritable genius would not have been able to isolate such an elusive anti-bacterial treatment through intellect and deduction alone. Whilst it may be too far to call humanity’s greatest discoveries ‘serendipitous’, there is a strong case for the influence of time and place in many, if not most, of history’s scientific triumphs. On the contrary, there are examples throughout history of intellectual brilliance that transcends time. It certainly wasn’t serendipitous when Einstein developed his quantum theory of light. Equally, time and place had no part to play in the publication of his general theory of relativity; it was purely the product of the man whose name became synonymous with the word ‘genius’. However, it may be noted that true genius such as his is rare, and few individuals possess the intellect capable of making such discoveries independent of their place in time. Regardless, the simple fact of the matter is that for a discovery to be made, any knowledge gained requires an application. These ideas often form in a single mind, and an individual can typify as well as exemplify a sudden breakthrough in thought. Although circumstance undoubtedly has an influence over such individuals, it is the intellect of the mind making the connections that ultimately makes a discovery. As Einstein himself famously said: “Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand”. [2]
This forward thinking is absolutely crucial in the formation of new ideas, as the transition between scientific models can only take place if previous concepts are challenged. In his book The Structure of Scientific Revolutions, Thomas S. Kuhn argues that the evolution of scientific theory does not emerge from the accumulation of facts, but rather through ‘paradigm shifts’ that radically change scientific thought and interpretation in a particular field. This concept was highly controversial upon its release in 1962, as many believed that it introduced an element of the irrational into a scientific method that should be totally logical. He challenged the prevailing view that scientific progress is a “development-by-accumulation” of facts and theories, by asserting an episodic model in which periods of conceptual continuity are interrupted by periods of revolutionary science. Paradigm shifts are more than just standard discoveries within pre-determined scientific theory; they change the theory itself, uprooting all previous work and causing scientists to look again at any observations they have made.

A prime example of such a paradigm shift is the Copernican revolution. At its centre was Nicolaus Copernicus, a Renaissance mathematician and astronomer who was largely responsible for a major shift in scientific theory when he formulated the heliocentric model of the universe. This model placed the Sun, rather than the Earth, at the centre of the solar system and was hugely controversial at the time. To be able to look at data objectively is difficult, especially when given presumptions about the natural world, and yet Copernicus was able to overcome the misconceptions concerning the structure of the solar system when formulating his model. It is often necessary to stand on the “shoulders of giants”, but this can be false support when the presupposed paradigm is fundamentally wrong. To see past this requires a great amount of insight, which Copernicus demonstrated when he disputed the Ptolemaic model of the solar system, thinking beyond the accepted paradigms of the time in a display of exceptional intellectual and imaginative ability.

Knowledge requires application. State of understanding alone cannot produce innovation of thought. That is the nature of innovation: it cannot be stimulated by preconceived and possibly out-of-date thinking; it must be brought into being by intellectual genius that can introduce new paradigms and change scientific thinking. Apart from some rare exceptions, great scientific discoveries require both genius and the right state of knowledge. However, as state of knowledge alone cannot succeed in discovery, genius must be regarded as the more important factor.
References
1) Wikiquote (January 2014), Louis Pasteur, available at: http://en.wikiquote.org/wiki/Louis_Pasteur
2) Bragg, Melvyn (1998), On Giants’ Shoulders, Hodder and Stoughton
3) Alan Dower Blumlein webpage (May 2012), The Story of Radar, available at: http://www.doramusic.com/Radar.htm
4) Strathern, Paul (2010), The Artist, the Philosopher and the Warrior, Vintage
5) Wikipedia (February 2014), History of Penicillin, available at: http://en.wikipedia.org/wiki/History_of_penicillin
6) Antibiotic resistance (September 2001), History of Antibiotics, available at: http://web.archive.org/web/20020514111940/http://www.molbio.princeton.edu/courses/mb427/2001/projects/02/antibiotics.htm
7) Kuhn, Thomas (1996), The Structure of Scientific Revolutions, University of Chicago Press
Author Sam Wallace I study Biology, Maths, and Chemistry at the King’s School, Canterbury. My particular interests are in Marine Biology, and I want to study Biological Sciences at university. I recently conducted a four-week research placement where I studied the activity of sulfhydryl oxidase on the formation of disulphide bonds in the bacterial micro-compartments of E. coli. I enjoy reading classic novels and rowing for my school’s 1st VIII.
REVIEW ARTICLE Bioactive glass in bone tissue engineering
Bioactive glass in bone tissue engineering Abstract
This article discusses the relatively new bioactive glasses, which have been extensively experimented with in the field of regenerative medicine and have shown promising results. We will look at how they offer a more viable alternative to existing options in bone tissue regeneration, and also briefly at how these materials actually work in the human body.
A small introduction to Bioactive glass

‘Bioactive glasses are surface-reactive glass-ceramic biomaterials.’ These remarkable materials, composed of silicon dioxide, calcium oxide, sodium oxide and phosphorus pentoxide, have been extensively investigated as implants in the human body, to help repair or replace damaged or diseased bone. This could, in the near future, eradicate the use of autogenous bone grafts, bone allografts and growth factors, all of which come with several problems.

An autogenous bone graft is when bone is removed from the patient’s own body and used to regenerate bone at the site of damage or disease. Unfortunately this means that there is a limited supply, and problems can arise at the donor site (the area where the bone was removed). Bone allografts are similar to autogenous bone grafts, with the difference that the bone is removed from someone other than the patient. This also has its risks, such as possible transmission of disease and an immune reaction triggered by the foreign bone material. Growth factors encourage new bone to form by stimulating the growth and activity of osteoblasts, but unfortunately this method is expensive, and recent studies have suggested a possible connection between high usage of growth factors and cancer.

These numerous problems give us reason to believe that the use of bioactive glass will significantly reduce surgical complications, proving a less detrimental, safer and cheaper option.

The use of bioactive glasses in regenerative medicine is fairly new, but what we can see from its clinical history is that it has proven to be successful when utilized and, most importantly, has been shown to produce no adverse effects in the human body.

So how do bioactive glasses work?
A bioactive material is defined as: ‘ a material that elicits a specific biological response at the interface of the material, which results in the formation of a bond between the tissues and the material.’ Bioactive glasses do just this, as they are able to react with fluids in the human body to form hydroxyapatite (HA) and hydroxycarbonate apatite (HCA) layers. These bonding layers are almost identical structurally and chemically to our bone, and in addition, HA is the main mineral constituent of bone, so this allows strong bonds to form between the biomaterial and bone. Bioactive glasses also release silica and calcium ions, which are thought to stimulate osteoprogenitor (osteoblast) cells that help in the processes of growth and repair of bone. A study was conducted using soluble silica and human osteoblast-like cells. What was found was that when the cells were exposed to the silica, the DNA synthesis rate increased, as well as the rate of mitosis of the cells, which supports the aforementioned hypothesis of the effect of silica ions.
References
1. http://www.sciencedirect.com/science/article/pii/S1742706112004059
2. http://en.wikipedia.org/wiki/Bioactive_glass
3. http://www.tandlakartidningen.se/media/927/Greenspan_8_1999.pdf
4. http://www.wisegeek.com/what-are-osteoprogenitor-cells.htm
The Extracellular Matrix REVIEW ARTICLE
The Extracellular Matrix Abstract
The Extracellular Matrix (ECM) is the fluid, containing tissue and extracellular molecules, that surrounds cells in inter-cellular spaces. Once thought merely to serve the purpose of supporting tissues and acting as scaffolding in the human body, the ECM has recently made a promising appearance in regenerative medicine. It is important to note that there are many types of ECM, and the ECM is contained in most living organisms. This fundamental substance is secreted by specialised cells in the body called fibroblasts, and certain types of ECM are secreted by particular fibroblasts, e.g. chondroblasts secrete cartilage ECM and osteoblasts secrete bone ECM.
The Role of the ECM in Regenerative Medicine
The ECM is a highly sophisticated substance that has shown promising potential in the field of regenerative medicine. This branch of medicine looks into the “process of replacing or regenerating human cells, tissues or organs to restore or establish normal function”, and the ECM has proven to do just this. ECM used in regenerative medicine is commonly extracted from pig bladders. It is also taken from other body parts, such as pig small intestine submucosa, and other parts that are commonly unused and freely available. When harnessed to its full potential, the ECM could perhaps even open up the possibility of creating a whole new human being. The ECM has been used so far for the regrowth and repair of cells and tissue, and is currently being used in hernia repair and breast reconstruction. There have been many reported cases of ECM grafts helping to close and heal wounds. One such case was a soldier who had been fighting in Afghanistan and was injured in an explosion, leaving a large wound on his thigh. Considerable muscle mass had been torn from his leg, leaving him unable to walk. He was then treated with extracellular matrix extracted from pig bladder, and before long muscle was rebuilding in his thigh and he regained the ability to walk. The way in which the ECM builds and repairs tissue is remarkably sophisticated: the ECM ensures the immune system does not attack it and then
attracts stem cells to the location in which tissue regeneration is required. The remarkable research that has been carried out on the extracellular matrix is very exciting, and gives us reason to believe that if we can
“When harnessed to its full potential, the ECM could maybe even open up the possibility of creating a whole new human being”

regenerate many tissue types, and therefore various different organs, we can help treat victims of tissue damage and burns. We could also help grow organs for individuals waiting for organ transplants, help amputees re-grow missing limbs; the list continues. Finally, this leads to an interesting question: could we create a whole new, sentient human being just from the extracellular matrix?
References
1. New Scientist, 14/09/2013
2. Regenerative Medicine, 2008, 3(1), 1-5
3. http://www.examiner.com/article/regenerative-science-growing-body-parts-with-extracellular-matrix
4. http://users.rcn.com/jkimball.ma.ultranet/BiologyPages/E/ECM.html
5. http://en.wikipedia.org/wiki/Extracellular_matrix
6. http://www.techtimes.com/articles/6356/20140504/extracellular-matrix-from-pigs-helps-regenerate-muscles-of-injured-soldiers.htm
Author (Bioactive Glass & Extracellular Matrix) Magathe Guruswamy I study Biology, Chemistry and Maths at Norwich High School for Girls. I find the science of the human body particularly fascinating, and am hoping to study medicine at university.
REVIEW ARTICLE Into Space Without Rockets
Into Space Without Rockets
Rockets that can reach near-space altitudes have existed since World War 2 and have been developed greatly since then. This is a great improvement on the steam-powered rocket of Archytas (4th century BC) and the Chinese fireworks (7th century AD). Since then rockets have carried satellites well beyond Earth’s gravity and to the edge of the solar system. Rockets are used for anything concerning space or near-space altitudes, because they are our only means of reaching it (except for the space shuttle, which has been taken out of use due to costing more than anticipated). We have by now pretty much perfected the rocket design, with no significant improvements left to be made. We use rockets for sending research probes to other planets and putting various satellites into orbit, for example communications satellites, weather satellites, space stations and research satellites (such as the Hubble Space Telescope). So far we have only explored rockets and the space shuttle, but they have many disadvantages. The main ones are the very high costs (approx. $22,000 per kg [1]), extremely large fuel consumption, and fuel storage problems (the fuel is either poisonous, or the oxidiser and fuel need to be kept at extremely low, cryogenic temperatures). Most of the expenditure still goes on getting the rocket out of Earth’s gravity. Because of these costs, however, research is being done on
alternative methods of getting into space. Other methods of launching objects into space include launch loops, space guns, laser propulsion, and space elevators. If research is done into non-rocket launch systems, the cost of going into space could be significantly reduced.
Projects that have so far been hindered by cost, such as space colonisation, space-based solar power and the terraforming of Mars, would then have a chance to flourish.
Launch Loops
This is a loop 2,000 km long from west to east and 80 km high in the middle. Starting at ground level at one end, it inclines up to a height of 80 km, runs straight for 2,000 km, and then inclines back down to the ground. It then bends around and returns along the track it originally took, connecting back to where it started (see Fig. 1). Several of these structures have been proposed for the equatorial Pacific. The loop is a tube known as a sheath, and inside the sheath is a
[Artwork: Holly O’Connor]
Figure 1 (left) The red line shows the tube in which the rotor is. The blue lines are stabilization cables.
rotor. The rotor is about the thickness of a wire, runs in the same loop shape, and is suspended inside the sheath without touching it. It is made of a magnetic material (e.g. iron). When the rotor is not moving, the whole structure lies on the ground; when the rotor is gradually accelerated, the momentum of the fast-moving rotor pushes outwards on the curved sections and lifts the structure, and cables are used to keep it in place. A constant power supply is required to keep the structure up, and more power is required to launch an object. To launch an object, it is first lifted to the westernmost point, 80 km above the ground, where it generates an electromagnetic field. This creates eddy currents in the rotor which both pull the object along in the direction of the rotor’s motion and push it away from the rotor itself. By the time the object reaches the end of the 2,000 km track it will have reached escape velocity. Some of the advantages are that it can be made with materials available today; it can perform many launches per hour; it could reduce launch costs from $300/kg to as low as $3/kg [2]; it gives a safe amount of acceleration; and it has a very low chance of being hit by space debris. The main disadvantages include the fact that no research detailed enough for this has been done on the weather systems of the equatorial Pacific. The rotor stores, as kinetic energy, the same amount of energy as the Hiroshima nuclear bomb [3], so heating effects from weather can cause the rotor or sheath to expand. This causes slack which, unless it is accounted for, could allow the rotor and sheath to touch and explode. Also, the sheath must be absolutely airtight, otherwise air will leak into the sheath and melt the rotor through friction.
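The “safe amount of acceleration” claimed above can be checked with the constant-acceleration relation v² = 2as applied over the 2,000 km track. The short Python sketch below is a back-of-envelope check under assumptions of my own (constant acceleration and escape velocity as the target speed), not a description of any published launch-loop design.

```python
# Back-of-envelope check: constant acceleration along the launch loop's
# 2,000 km straight section, ending at roughly escape velocity.
TRACK_LENGTH_M = 2_000_000       # 2,000 km
ESCAPE_VELOCITY_MS = 11_186      # approximate escape velocity at Earth's surface, m/s
G = 9.81                         # standard gravity, m/s^2

accel = ESCAPE_VELOCITY_MS ** 2 / (2 * TRACK_LENGTH_M)  # from v^2 = 2as
time_on_track = ESCAPE_VELOCITY_MS / accel

print(f"acceleration ≈ {accel:.0f} m/s² ≈ {accel / G:.1f} g")   # ≈ 31 m/s², ≈ 3.2 g
print(f"time on the track ≈ {time_on_track / 60:.0f} minutes")  # ≈ 6 minutes
```

Roughly three g sustained for a few minutes is within the range a trained passenger could tolerate, which is consistent with the claim above.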
Laser Propulsion
Laser propulsion involves a very high-powered laser and specially designed craft. It was first formally proposed by Arthur Kantrowitz in 1972 [6]. There are several different ways in which energy can be transferred to the craft: through pressure from the radiation itself, or by using the laser to heat or burn some form of propellant. Only the latter is powerful enough to launch an object from the ground. Most of the designs involve a pulsed or continuous laser shining on a parabolic mirror on the rear of the craft, which focuses the beam onto a gas or a solid that then explodes. A model craft (known as the Lightcraft) uses this technique to heat air and has reached a record height of 72 metres (236 ft) [6] (a different source [7] says 71 metres, 233 ft). Using this technique, a megawatt laser could put a one-kilogram satellite into orbit.
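To see why a figure of roughly one megawatt per kilogram is plausible, compare the laser’s power with the orbital energy a payload needs. The sketch below uses assumed values (orbital altitude, orbital speed and an arbitrary overall efficiency of 10 per cent) and is only an order-of-magnitude illustration, not a model of the Lightcraft or any other real system.

```python
# Rough energy budget: how long a 1 MW laser would take to supply the
# orbital energy of a 1 kg payload in low Earth orbit.
G_ACC = 9.81             # m/s^2, treated as constant (a simplification)
ALTITUDE_M = 300e3       # assumed 300 km orbit
ORBIT_SPEED_MS = 7730.0  # roughly the circular-orbit speed at that altitude
MASS_KG = 1.0
LASER_POWER_W = 1e6      # 1 MW
EFFICIENCY = 0.1         # assumed overall beam-to-payload efficiency (a guess)

kinetic = 0.5 * MASS_KG * ORBIT_SPEED_MS ** 2  # ~30 MJ
potential = MASS_KG * G_ACC * ALTITUDE_M       # ~3 MJ (crude constant-gravity estimate)
energy_needed = kinetic + potential

time_s = energy_needed / (LASER_POWER_W * EFFICIENCY)
print(f"orbital energy ≈ {energy_needed / 1e6:.0f} MJ per kg")
print(f"time at 1 MW and {EFFICIENCY:.0%} efficiency ≈ {time_s / 60:.1f} minutes")
```

A few tens of megajoules per kilogram, delivered over a few minutes, is why the required laser power scales roughly with payload mass.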
Figure 2 (right): The Lightcraft is the most advanced version of laser propulsion available today.
The other methods use a heat exchanger and a generator to convert the energy from the lasers into electrical energy, which powers some form of propulsion, for example using the electricity to push plasma out of the rear.

Models of this technique have already been tested (the Lightcraft), so it is quite advanced as a technology. If this technology is developed it could significantly reduce travel costs. It would also mean that, because most of the equipment is on the ground, there would be a lot more room inside the ‘rocket’. If a system of satellites is built which can convert energy from the sun into high-energy laser beams, then not only interplanetary travel but interstellar travel as well becomes possible. However, building such a fleet would be enormously costly, and the technology to build lasers that powerful exists but is expensive and very power-hungry. Although developments in lasers may make them far more widely available, building a spaceport for this method would still carry a large up-front cost.
Space Elevator
It is a tether-like structure: an extremely long ribbon (extending beyond geostationary orbit), possibly made of carbon nanotubes. The carbon nanotubes would be braided into a rope, coiled, and sent up in a satellite; the satellite would then uncoil it while keeping the centre of gravity at or just above geostationary orbit. When the cable has come down far enough, other craft would clamp onto it and pull it down to a base (probably in the sea). The end result would be an extremely long rope of carbon nanotubes extending above geostationary orbit and pulled taut by a counterweight made up of a captured asteroid or the spacecraft that took it into orbit in the first place.
Building one on Earth would be very hard because of many factors, like earthquakes, fluctuations in gravity and space junk, which can have devastating effects on the cable and cause whip-like motions or cause the cable to split. It would also require huge amounts of planning and funding. It also needs to be on the equator, but many countries on the equator have political problems, and the most powerful countries would not like to hand over all power to a small country.
A huge advantage of this is that it can send and bring back all types of cargo with very little energy spent, as the energy used to send a craft into space is recovered when that craft is brought back to Earth. This design is by far the most permanent link to space and would be crucial in colonising space. Once built it would make travel to space very cheap ($220 to $880 per kg [8]). The estimated cost to build it is $6 billion, and one space shuttle flight costs $500 million, so it is within our scope. It would take a lot longer to get into space, but the forces would be much more tolerable for a much wider range of people. However, there is no known material strong enough to support its own weight over such large distances. Theoretically, carbon nanotubes are strong enough: they have a theoretical tensile strength of 300 GPa, although the current maximum achieved is 120 GPa and the minimum estimated requirement is 130 GPa. Another candidate is boron nitride. The tests so far are on strips of carbon nanotubes a few nanometres long, and nothing has yet been built that even compares to a space elevator. Vibrations from the sun and moon, climbers, weather, and meteoroids along the space elevator could literally split it open.
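The tensile-strength requirement quoted above can be estimated by summing the weight, minus the centrifugal relief from Earth’s rotation, of a uniform untapered tether running from the ground to geostationary orbit. The sketch below does this numerically. The tether density is an assumption of mine, and the result scales directly with it, which is one reason quoted requirements such as the 130 GPa figure vary between sources; a tapered tether also relaxes the requirement.

```python
# Stress at the top of a uniform (untapered) tether hanging from
# geostationary orbit to the ground. Tapering the tether reduces this,
# so the figure below is pessimistic.
GM = 3.986e14       # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6   # m
R_GEO = 4.2164e7    # geostationary radius, m
OMEGA = 7.292e-5    # Earth's rotation rate, rad/s
DENSITY = 1300.0    # kg/m^3, assumed for a carbon-nanotube material

def net_accel(r):
    """Gravity minus centrifugal acceleration at radius r (positive = downward)."""
    return GM / r**2 - OMEGA**2 * r

steps = 100_000
dr = (R_GEO - R_EARTH) / steps
stress_pa = sum(DENSITY * net_accel(R_EARTH + (i + 0.5) * dr) * dr for i in range(steps))

print(f"required strength ≈ {stress_pa / 1e9:.0f} GPa at {DENSITY:.0f} kg/m³")
# ≈ 63 GPa at this density; a denser material, or an engineering safety
# factor, pushes the requirement towards the ~130 GPa quoted in the text.
```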
Space Gun
A space gun is, quite literally, a very large gun capable of shooting payloads into space. It also features in many early novels about travelling into space, including Jules Verne’s From the Earth to the Moon.
Fig. 3: Space Elevator This is the size of a space elevator compared to earth (all to scale)
“If the space gun is made, it can provide a very cheap form of space travel. However, unless the barrel is a few hundred kilometres long it will only be suitable for specially adapted spacecraft and fuel”
Fig. 4: Space Gun A space gun with its end very high into the atmosphere [gadget fix]
In Project HARP, a U.S. Navy 410 mm, 100-calibre gun was used to fire a 180 kg slug at 12,960 km/h, reaching an apogee of 180 km and hence performing a suborbital spaceflight. However, a space gun has never been successfully used to launch an object into orbit [4]. Incidentally, another object was shot into space using nuclear power during the U.S. Operation Plumbbob: as part of the Pascal-B nuclear test, a 900 kg steel plate cap was accidentally blasted off the top of a test shaft at more than 66 km/s. The cap was never found; extremely rough calculations say that it reached six times escape velocity, but it is widely believed to have burned up in the atmosphere [5]. Unlike a rocket, a projectile fired in this way gains all of its speed in the barrel and then continuously loses energy, so huge accelerations are required: around 64,000 g (about 628,000 m/s², assuming constant acceleration, which in reality it is not, so this is a conservative figure) to reach escape velocity from a barrel only 100 m long [5]. So to make it tolerable for humans (12.8 g max.) a gun barrel of about 500 km is required. This is not as simple as making the barrel longer, because more and more problems are encountered. One of the first problems is that the air in the barrel in front of the projectile cannot get out of the way. This can be solved by evacuating the barrel and sealing the top with a cover weak enough for the projectile to pass through, or by raising the muzzle so high that there is very little atmosphere left, each option bringing its own problems. Another problem is that conventional propellants cannot accelerate the projectile much beyond the speed of sound in the propellant gas. However, a light gas gun could be used to raise this limit considerably.
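The 64,000 g and 500 km figures quoted above both follow from the same constant-acceleration relation v² = 2as, ignoring air resistance and the pressure drop discussed below. A minimal Python sketch of that arithmetic:

```python
ESCAPE_VELOCITY_MS = 11_186.0  # approximate escape velocity, m/s
G = 9.81                       # standard gravity, m/s^2

def accel_for_barrel(barrel_length_m, v=ESCAPE_VELOCITY_MS):
    """Constant acceleration needed to reach speed v over a given barrel length."""
    return v ** 2 / (2 * barrel_length_m)

def barrel_for_accel(max_g, v=ESCAPE_VELOCITY_MS):
    """Barrel length needed to reach speed v if acceleration is capped at max_g."""
    return v ** 2 / (2 * max_g * G)

print(f"{accel_for_barrel(100) / G:,.0f} g for a 100 m barrel")       # ≈ 64,000 g
print(f"{barrel_for_accel(12.8) / 1000:,.0f} km for a 12.8 g limit")  # ≈ 500 km
```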
The acceleration is not constant because the pressure drops as the projectile travels down the barrel. This could be remedied with timed explosions along the barrel to keep the pressure high. Another way to keep the pressure high is to make the projectile the shape of a ramjet and fill the barrel with fuel [5]. Another disadvantage is that the projectile has to deal with re-entry-style heat and friction on the way up, which means severe energy losses on ascent and a trajectory that is difficult to control. One further disadvantage is that the only trajectory that would not involve either crashing back into the Earth or reaching escape velocity is one that would essentially hit the launchers in the back; the only way this can be avoided is if the payload is active (i.e. carries rockets to adjust its own orbit). The main advantage of space guns is that small-scale demonstrations have already been made, for example Project HARP, Project Babylon, and Project SHARP. Additionally, if one is built it could provide a very cheap form of space travel, although unless the barrel is a few hundred kilometres long it will only be suitable for specially adapted spacecraft and fuel.
Discussion
Because rockets have such a huge cost, they are on the decline: fewer and fewer rockets are sent up, but each one of them is crammed with as many satellites as possible. Research has been carried out to find methods that significantly reduce these costs and open space up to everyone, not just governments.
“As rockets have such a huge cost, they are on the decline. Fewer and fewer rockets are sent out but each one of them is crammed with as many satellites as possible”

Space elevators are the most permanent link to space. They also have the potential to make a trip to the moon as common as a holiday to Spain. However, building one on Earth with our current technology is out of the question, as we do not have any material that is even remotely strong enough, so for now it is impractical. Even if one were completed, there would be some major complications to consider. For example, space debris and meteoroids may collide with the space elevator and destroy it, so some method that actively destroys space junk would need to be developed, and the space around Earth completely cleared of debris, before building a space elevator could be considered. Vibrations caused by the gravity of the sun, moon and other planets may produce whiplash movements which could be dangerous both for the elevator itself and for anyone on it. There are also some political problems. A space elevator is a very easy target for a terrorist, as you cannot expect very high security along the entire elevator because it is so big. One other political problem is the placing of the space elevator: many of the most powerful countries are nowhere near the equator and are unlikely to give the power of controlling all space travel to any other country, so it is very likely that it would be built in the middle of the ocean (probably the Pacific). Of all the technologies looked at so far, space guns are the most developed. They could form a very cheap form of transport. However, the ‘bullet’ must have an active component (something that directly changes the orbit of the ‘bullet’) if it is to achieve a stable orbit. This is explained by Isaac Newton in his book Philosophiae Naturalis Principia Mathematica, which describes the three possible orbits that
can result from his thought experiment of a cannon firing a cannonball. The three possibilities are: 1) the cannonball falls back onto the Earth; 2) the cannonball completes an orbit and essentially hits the cannoneers in the back; and 3) the cannonball reaches escape velocity and shoots out of Earth's gravitational field. It is very likely that the space gun will be restricted to fuel and specially adapted spacecraft, because the acceleration would be too high for humans to survive; humans could only travel on it if the 'gun' were several hundred kilometres long. Simply extending the barrel will not solve this, however, because the pressure behind the 'bullet' drops and the acceleration is not constant. So even more imaginative methods must be used: for example the blast wave accelerator, in which the 'bullet' has a tapered end and a series of sequenced explosions push against it to keep the acceleration high; another method is to fill the barrel with a combustible mixture and make the projectile the shape of a ramjet. Another problem with long barrels is that the air in front of the 'bullet' does not have time to get out of the way. The proposed solutions involve either a gun barrel so long that it ends several kilometres up in the air, where there is practically no atmosphere, or a cover on the muzzle of the barrel that is strong enough to keep the atmosphere out yet weak enough for the projectile to pass through. Launch loops can send a large volume of material into space (estimates suggest up to 6 million tons per year [2]). They could also dramatically reduce costs, perhaps to as little as $3 to $300 per kg [2], and they have a much higher launch rate than any conventional rocket. However, a launch loop requires high-precision control of the rotor, because if the rotor touches the sheath at any point it will release about 1.5 petajoules of energy, roughly equivalent to the detonation of a nuclear bomb. Weather effects can cause it to heat up and develop slack; this slack then has to be accounted for, otherwise the rotor could touch the sheath and explode. Laser propulsion not only makes travel to the planets seem feasible but also makes interstellar travel plausible, and it could revolutionise aircraft. The main problem is that lasers powerful enough to send a manned mission are, at the
moment, very expensive (the rule of thumb is that it takes a 1 MW laser to send 1 kg into low Earth orbit) and need a lot of electrical energy. However, running costs are much lower than those of traditional chemical rockets. For interplanetary travel, basing all the lasers on Earth is not feasible, so an array of satellites that convert sunlight into laser light and direct it at the spacecraft would be needed.
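The very low launch-loop cost estimates quoted above become more plausible when set against the theoretical minimum energy needed to put mass into low Earth orbit. The sketch below uses standard round figures for orbital speed and altitude; the electricity price is an illustrative assumption, not a figure from this article:

```python
# Rough lower bound on the energy needed to put 1 kg into low Earth orbit.
G_SURF = 9.81          # m/s^2, surface gravity (treated as constant up to LEO)
V_ORBIT = 7_800        # m/s, approximate orbital speed in low Earth orbit
ALTITUDE = 300_000     # m, illustrative orbit altitude
PRICE_PER_KWH = 0.10   # USD per kWh, assumed electricity price (illustration only)

kinetic = 0.5 * V_ORBIT ** 2      # J per kg
potential = G_SURF * ALTITUDE     # J per kg (rough; ignores the fall of g with height)
energy_j = kinetic + potential

print(f"Minimum energy: ~{energy_j / 1e6:.0f} MJ/kg (~{energy_j / 3.6e6:.1f} kWh/kg)")
print(f"Electricity cost at the assumed price: ~${energy_j / 3.6e6 * PRICE_PER_KWH:.2f}/kg")
```

On this estimate a kilogram in orbit embodies only about a dollar's worth of electricity, so the quoted $3 per kg lower bound is only a few times the physical minimum; the rest of today's launch cost is hardware and inefficiency.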
Conclusion
Of all the methods discussed, space guns are the most readily available and, because I think an alternative to traditional rockets must also be readily available, space guns should be incorporated into our non-rocket launch designs until a more efficient method becomes available. Although space guns cannot be used to send humans, they can send huge volumes of raw materials into space; instruments too delicate to survive the gun launch could then be built by manufacturers in space. Shipping raw materials at a lower cost would mean that projects such as space colonies and the terraforming of Mars may have a chance to blossom. Because sending projectiles into low Earth orbit is impossible with just a space gun, I propose that laser propulsion be used to alter the trajectory and, when required, deliver projectiles back to Earth. Whatever type of spaceport is created, it must be as close to the equator as possible and send projectiles in an easterly direction, so that as much velocity as possible is gained from the Earth's rotation. Electricity could be used as the energy source, as it is highly versatile and can be generated in a number of different ways. The space gun itself could be a light gas gun with a ramjet-shaped projectile that combusts specially tuned gas mixtures in the barrel to reach as high a velocity as possible.
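The benefit of an equatorial, eastward launch can be put in numbers: the Earth's equatorial circumference divided by its rotation period gives the surface speed the planet contributes for free. A short check using standard values (not figures from this article):

```python
import math

# Free eastward velocity from Earth's rotation at the equator.
R_EQUATOR = 6_378_137   # m, Earth's equatorial radius
T_SIDEREAL = 86_164     # s, one sidereal day

v_rotation = 2 * math.pi * R_EQUATOR / T_SIDEREAL
print(f"Eastward boost at the equator: ~{v_rotation:.0f} m/s")            # ~465 m/s
print(f"Fraction of ~7.8 km/s orbital speed: {v_rotation / 7_800:.1%}")   # ~6%
```

A 6% head start in velocity translates into roughly a 12% saving in the kinetic energy the launcher must supply, since that energy scales with the square of the remaining speed.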
Acknowledgements
I would like to thank Mr P Wilson and Mrs A Blacklock for their encouragement and advice throughout the project and also the Nunthorpe Academy for giving me the opportunity to do this project.
References
1) Bonsor, K. (6 October 2000) How Space Elevators Will Work [Online]. HowStuffWorks.com.
2) Wikipedia (2013) Launch loop [Online], 7 March 2013.
3) Launch Loops: A viable alternative to space elevators?, May 2012.
4) Wikipedia (2013) Space gun [Online], 3 May 2013.
5) Condliffe, J. (11 January 2013) The Science of Building a Space Gun [Online]. Gizmodo.com.
6) Wikipedia (2013) Lightcraft [Online], 7 December 2013.
7) Bonsor, K. (9 February 2001) How Light Propulsion Will Work [Online]. HowStuffWorks.com.
8) Wikipedia (2013) Space elevator [Online], 4 July 2013.
Author: Sansith Hewapathirana
I am 15 years old and am studying Triple Science, Maths and Further Maths for my GCSEs at Nunthorpe Academy. In my leisure time, I enjoy playing the piano, reading, chess and judo.
RESEARCH
Making Perfume From Bacteria
Kent iGEM 2014

Abstract
Every year hundreds of kit plates containing DNA parts are shipped to universities and other scientific institutions across the globe as part of the iGEM competition. Teams of undergraduates must use these kits to create their own project, which they present at the annual iGEM jamboree, held this year in Boston. iGEM is the world's largest undergraduate synthetic biology competition, with 245 teams competing in 2013. This year a group of undergraduates from the University of Kent will be going to Boston to present their summer project on fragrance-producing bacteria in the hope of obtaining a gold medal. But what does the iGEM competition hope to achieve? And, more importantly, what is the Kent iGEM project all about and why is it significant?
So what’s the point of iGEM?
The competition has a wide range of goals and plays a major role in the field of synthetic biology. Its primary aim is to provide students with the materials to create their own projects using genes from the iGEM repository, and ultimately to inspire the next generation of young scientists. Teams can use these genes by expressing them in living cells of their choice (usually bacteria) to generate novel characteristics, or 'phenotypes', in the host organism that can be put to a purpose. Furthermore, they can extract or synthesise other genes they wish to use in their project and submit these to iGEM HQ to add to the repository, so the iGEM database of genes and other functional DNA parts, or 'biobricks', grows every year. By using these biobricks, teams can create complex genetic circuits in organisms to meet the aims of their project. The projects are incredibly diverse, ranging from the genetic engineering of colour-changing bacteria (Cambridge iGEM 2009) [1] to the generation of tumour-killing bacteria (Trondheim iGEM 2012) [2]. These projects can tackle real-life problems and can provide promising new applications for the future. An example of this is Edinburgh's 'Arsenic Biodetector' in 2006 [3]. The team developed a system whereby bacteria could detect arsenic in water systems and emit a pH signal in response. The concept behind this
project has major implications for regions where arsenic contamination of drinking water poses a serious problem, such as Bangladesh and Nepal, where a safe, non-toxic and sustainable method of arsenic detection is needed. There is currently a large initiative, 'The Arsenic Biosensor Collaboration', that is using this technology to develop a commercially viable biosensor for use in South East Asia. Team projects are assessed on a wide range of criteria, with awards given in tracks such as best environment project, best energy project, best new application project and best art project, to name a few. One of the criteria for achieving a medal is an element of human practices: teams are required to spread the word about their project to the general public as well as collaborate with other iGEM teams. Thus, iGEM also aims to make the field of synthetic biology better known and has an educational component.
Kent Perfume
This year, a team of eight undergraduates at the University of Kent (including myself) has spent the summer working on a project to submit to iGEM. The aim of the project is to genetically engineer a strain of E. coli to produce aroma compounds for use in the fragrance and flavourings industry. In particular, we investigated the generation of a class of
odorants known as terpenes. Terpenes are a diverse class of organic compounds produced by a variety of plants, particularly conifers, and they are typically the primary components of many plant essential oils. Of these terpenes, we intended to produce limonene (a lemon smell), zingiberene (a ginger smell) and R-linalool (a lavender scent). Terpenes are typically produced in plants via a biochemical pathway known as the mevalonate pathway (see figure 1). Intermediates of this pathway are converted into specific terpenes by terpene synthase enzymes. Unfortunately, the mevalonate pathway does not exist in bacteria; therefore, to optimise odorant production, we expressed this pathway in E. coli by transforming bacteria (i.e. getting bacteria to take up DNA) with the plasmid pBbA5cMevT-MBIS [5], which contained genes encoding components of the mevalonate pathway.
The terpene synthase genes we selected as our novel biobricks were genes encoding zingiberene synthase (which converts farnesyl-PP into zingiberene to produce a ginger odour) and R-linalool synthase (which converts geranyl-PP into R-linalool to produce a lavender scent), taken from the grass Sorghum bicolor and from lavender respectively. We searched for these gene sequences on an online database (UniProt) [6] and then manipulated the sequences on a computer for optimal expression in E. coli, before ordering them to be synthesised as gene fragments by the DNA synthesis company Life Technologies [7]. We took these gene fragments and put them in plasmids for expression in the bacteria. We also used limonene synthase (provided in the iGEM kit plates) in our project, which converts geranyl-PP into limonene to produce a lemon smell.
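The step of manipulating the sequences for optimal expression in E. coli is essentially codon optimisation: rewriting the plant gene so that each amino acid is encoded by a codon that E. coli uses frequently. The sketch below illustrates the idea only; the codon table is a simplified, illustrative choice of commonly preferred E. coli codons, and real design tools also check GC content, restriction sites and RNA secondary structure:

```python
# Illustrative codon optimisation: back-translate a protein sequence using a single
# commonly preferred E. coli codon per amino acid (table is illustrative, not authoritative).
PREFERRED_CODON = {
    "A": "GCG", "R": "CGT", "N": "AAC", "D": "GAT", "C": "TGC",
    "E": "GAA", "Q": "CAG", "G": "GGC", "H": "CAT", "I": "ATT",
    "L": "CTG", "K": "AAA", "M": "ATG", "F": "TTT", "P": "CCG",
    "S": "AGC", "T": "ACC", "W": "TGG", "Y": "TAT", "V": "GTG",
    "*": "TAA",   # stop codon
}

def back_translate(protein: str) -> str:
    """Return a DNA coding sequence built from one preferred codon per residue."""
    return "".join(PREFERRED_CODON[residue] for residue in protein.upper())

# Hypothetical fragment of a terpene synthase protein (for illustration only).
print(back_translate("MASLK*"))   # ATGGCGAGCCTGAAATAA
```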
Implications of the Project
The generation of odorant molecules using bacteria could be an interesting scientific development for the future. The cosmetic and perfume industry is huge, with industry analysts estimating its worth to exceed $33 billion by 2015 [8]. Fragrances are typically produced by chemical synthesis, in which dangerous chemicals are often used, or by natural distillation and extraction of plant oils. These processes are relatively inefficient and costly, so a lot of research into using microbes to produce perfumes is being carried out. In fact, in March 2012, BASF announced that its venture arm had invested $13.5 million in the San Diego-based biotech firm Allylix for this purpose. Bacterial perfume production is also less susceptible to the variable environmental factors that affect crop yield. Plant crop yields depend on many factors and can be greatly affected by natural disasters, meaning that the supply often fails to meet demand; for example, a shortage of patchouli oil from Indonesia has driven an incredible price rise in recent years. Bacteria may therefore provide a more reliable source of aroma compounds, since their lab environment can be more carefully controlled. Furthermore, if this method were to become commercially viable, it could free up land currently used for fragrance crops. In a world where food shortage is becoming an increasing problem, freeing up more land for food crops could have massive implications.
References
1) http://2009.igem.org/Team:Cambridge
2) http://2012.igem.org/Team:NTNU_Trondheim/Project
3) http://2006.igem.org/University_of_Edinburgh_2006
4) http://www.clinsci.org/cs/105/0251/cs1050251f01.htm
5) http://www.addgene.org/35150/
6) http://www.uniprot.org/
7) https://www.lifetechnologies.com/uk/en/home.html
8) http://www.prweb.com/releases/2011/2/prweb8151118.htm
Author: Ellie Powell
I absolutely love science, which is why I'm studying Natural Sciences at Cambridge University. I find developmental genetics particularly interesting and am hoping to do a master's in this field of biology. In my free time I like drawing and painting. I also have a keen interest in cinema and theatre and like to go running from time to time.
Advances in polymer based solar cells REVIEW ARTICLE
Abstract
This article explores advances in organic solar cell technologies while emphasising the challenges that lie in the path to efficient and cheap large-scale power production using polymer solar cells. It touches upon different approaches that have overcome, or are expected to overcome, those challenges, and justifies the importance of further research in this field.
Introduction
Ever since the development of the first practical photovoltaic cell in 1954 at Bell Labs by Chapin, Fuller and Pearson [1], there has been continuous development in the field of silicon-based cells. They have had a successful run, helped by the industry-backed rapid development of integrated circuits, which brought down the cost of manufacturing. This moved their application beyond space exploration and into people's hands for domestic use. However, despite rising efficiency and existing know-how, there are limitations on the use of this technology for grid-scale power production, such as lack of flexibility, high production cost and competing demand for silicon from the computer industry. This highlights the need for alternative solar cell technologies. One such technology that promises cheaper photovoltaic devices is based on organic materials. The advance in organic photovoltaic devices followed the discovery of conducting polymers by Alan J. Heeger, Alan G. MacDiarmid and Hideki Shirakawa in the 1970s, for which they were awarded the Nobel Prize in Chemistry in 2000. However, progress was painfully slow until 1986, when Tang [2] discovered that
efficiencies of around 1% were achievable when the electron donor and acceptor species were combined in a cell. The next challenge was to design the cell in a way that maximised the interaction between the donor and acceptor species. The field made another major leap when a self-assembling interpenetrating network was obtained using the bulk heterojunction cell architecture, the most successful in the field, in which the donor and acceptor materials are blended together in organic solvents [3,4]. Against theoretical predictions of around 11%, efficiencies of around 5-6% had been achieved by 2008 [5]. Further work on new active-layer polymer materials has demonstrated efficiencies of over 6%, and Konarka, once a private solar energy company based in Massachusetts (USA), reported a power conversion efficiency of 8.3% in 2009 [6]. Still higher cell efficiencies have been reported using tandem structures, which combine two or more cells [7]. According to a set of design rules, a maximum power conversion efficiency of 15% is expected from tandem cells [5].
Mechanism
Photons absorbed from incident sunlight excite the electrons in donor materials, leading to the creation of electron-hole pairs known as excitons, which have a typical binding energy ranging from 0.4 to 1.4 eV [8]. The newly created excitons start to diffuse within the donor phase. They must reach the donor-acceptor interface quickly in order to dissociate. This dissociation results in charge separation, leaving free electrons and holes to be transported by an
internal field created by electrodes with different work functions [9,10]. This leads us to the conclusion that the factors that need to be taken into consideration to maximise efficiency are absorption of light, charge separation, charge transport and charge collection. To maximise light absorption, the photoactive polymer layer must be sufficiently thick to absorb all incident light, whereas quick charge separation requires the exciton energy to be smaller than the energy difference between the donor and the acceptor. This ensures that charge separation is a physically favourable process. Because exciton diffusion lengths are extremely small, around 10 nm, a device architecture that can capture an electron within such a short distance was needed [11]. Various processes can be used to optimise the network formed by the donor-acceptor species, such as thermal annealing, solvent annealing and morphology control using mixtures of solvents or additives [12]. The next section focuses on the novel device architectures that have been employed to address these shortcomings.

Bulk Heterojunction

Bulk heterojunction is a type of device architecture in which the acceptor and donor materials are blended together throughout the bulk. Here, the active zone extends throughout the bulk and helps address the deficiency due to short exciton diffusion lengths. This led to major progress in the field and helped researchers achieve efficiencies higher than 5%.
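Both architectures exist because excitons cannot separate on their own: the binding energy of 0.4 to 1.4 eV quoted in the Mechanism section dwarfs the thermal energy available at room temperature, so an energetic offset at a donor-acceptor interface has to do the work. A quick check using standard physical constants (not values from this article):

```python
# Thermal energy at room temperature compared with the exciton binding energy.
K_BOLTZMANN_EV = 8.617e-5   # eV per kelvin
T_ROOM = 300                # K

kT = K_BOLTZMANN_EV * T_ROOM
print(f"kT at room temperature: ~{kT:.3f} eV")                         # ~0.026 eV
print(f"Binding energy is {0.4 / kT:.0f}x to {1.4 / kT:.0f}x larger")  # ~15x to ~54x
```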
Figure 1. Bulk Heterojunction Architecture.[13]
The entire process from charge generation to transport within the cell is a series of five steps: absorption, diffusion, dissociation, transport and charge collection. Each of these steps can be said to have a yield that depends on the yield of the preceding process. This idea leads us to define a quantity called the external quantum efficiency (EQE): the ratio of the number of charge carriers collected by the solar cell to the number of photons of a given energy incident on the cell from outside [12].
EQE = ηA × ηdiff × ηdiss × ηTr × ηcc (1)

where ηA is the absorption yield, determined by the absorption coefficient of the photoactive layer and its thickness; ηdiff represents the ability of the exciton to diffuse without recombination; ηdiss stands for the probability that the hole and electron will be separated by the internal electric field at the heterojunction; ηTr is the charge carrier transport yield; and ηcc is the charge collection yield [12]. Another useful way of measuring device efficiency is a parameter called the power conversion efficiency (PCE) [12], which stands for the familiar concept of device efficiency and is defined as the ratio of the maximum power output to the power received from incident light, φc:

PCE = Pmax ÷ φc (2)

The fill factor [12], a quantity defined as

FF = Pmax ÷ (Jsc Voc) (3)

is directly related to the quality of a photovoltaic cell. Here, Jsc is the short-circuit current density, the current that reaches the contacts when no external voltage is applied, and Voc is the open-circuit voltage, the maximum potential that can be generated by the device. Using the concept of fill factor, we can easily derive

PCE = Jsc Voc FF ÷ φc (4)

which shows that Jsc, Voc and FF are the key factors in determining the PCE.
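As an illustration of equation (4), the values below are typical of an early bulk heterojunction cell under standard AM1.5 illumination; they are illustrative numbers, not measurements from a specific device:

```python
# Illustrative power conversion efficiency from equation (4).
j_sc = 10.0    # mA/cm^2, short-circuit current density (illustrative)
v_oc = 0.60    # V, open-circuit voltage (illustrative)
ff = 0.65      # fill factor (illustrative)
phi_c = 100.0  # mW/cm^2, incident power under standard AM1.5 illumination

pce = j_sc * v_oc * ff / phi_c   # mA/cm^2 x V = mW/cm^2, so the ratio is dimensionless
print(f"PCE = {pce:.1%}")        # -> 3.9%
```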
Ordered Heterojunction
An organic photovoltaic cell with an ordered heterojunction active layer is characterised by small, straight channels that provide the most direct path to the electrodes [11,14,15]. This architecture was made possible by engineering the donor-acceptor interfacial morphology and is generally made by infiltrating polymers into titanium oxide nanostructures, as shown in figure 2. Titania (titanium dioxide), an abundant mineral, is a good choice for developing ordered heterojunction templates. To improve the chances of complete exciton harvesting, TiO2 templates can be engineered with channel radii that match exciton diffusion lengths [11]. Keeping in mind that morphology affects charge carrier mobility to a great extent, it is worth noting that polymers can be aligned along the channel length to maximise mobility. The filling of the pores with polymers can be done using various methods such as melt infiltration, dip coating and polymerisation [11]. However, the poor stability of the polymer inside small channels, and poor understanding of why this happens, make the process difficult.
Figure 2. Ordered Heterojunction.[13]
The ordered heterojunction would appear to be the ideal nanostructure, because it is thick enough to absorb most of the incident sunlight and it seems to maximise the EQE of a cell. In spite of these properties, device efficiencies for cells with ordered heterojunctions are far lower than those employing the bulk heterojunction architecture. This is primarily due to difficulties in fabricating nanostructures with small pore sizes (~10 nm) and high aspect ratios. It may be tempting to suggest that thickening the films would help in such a scenario. However, since holes in an ordered heterojunction must traverse the entire thickness, this change may raise the series resistance and the chance of recombination losses. An increase in resistance may lead to a build-up of holes and a field that opposes the internal field created by the electrodes [16].
Tandem Cell Structure
Organic solar cells suffer from technical limitations such as low charge carrier mobility and narrow absorption spectra. This calls for novel approaches so as not to miss out on parts of the spectrum that could otherwise be harvested. This is where tandem solar cells prove their worth. They are made by stacking two or more photovoltaic devices in series, as first shown by Hiramoto et al. [17] in 1990. Since this is a series connection, the open-circuit voltage is obtained by the linear addition of the open-circuit voltages of all the sub-cells, while the same current must flow through every sub-cell on its way to the opposing contacts.
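A worked example of the series rule, using illustrative sub-cell values rather than measured data, is sketched below; in a series stack the sub-cell voltages add, while the common current cannot exceed that of the weaker sub-cell:

```python
# Illustrative series tandem: open-circuit voltages add, current is set by the weakest sub-cell.
sub_cells = [
    {"v_oc": 0.60, "j_sc": 11.0},   # front sub-cell: V, mA/cm^2 (illustrative)
    {"v_oc": 0.75, "j_sc": 9.0},    # back sub-cell: V, mA/cm^2 (illustrative)
]

v_oc_tandem = sum(cell["v_oc"] for cell in sub_cells)
j_limit = min(cell["j_sc"] for cell in sub_cells)

print(f"Tandem V_oc: ~{v_oc_tandem:.2f} V")               # 1.35 V
print(f"Series current limited to ~{j_limit} mA/cm^2")    # 9.0 mA/cm^2
```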
Commercialisation
As organic materials can be dissolved in organic solvents, they can be processed in solution. A number of coating techniques make use of these solutions, such as inkjet printing, doctor blading and slot-die coating [12]. The fate of any new technology in the market depends as much on the commercial interest it generates, in other words on its ability to draw sufficient investment from industry, as it does on advances in the technology itself. Taking all these factors into account, it can be seen that even though cell production techniques are fairly efficient, the devices themselves are expensive to produce, inefficient and unstable. This drawback became much more prominent with the failure of the Massachusetts-based Konarka Technologies Inc. in 2012. It shows that the technology cannot at present deliver what is expected of it and has not reached a stage where mass production is viable.
Conclusion
Organic solar cells offer many advantages. They cost less, and are flexible, transparent and customisable to any shape and size. They are also environmentally friendly and can be made via various synthetic routes, which makes them practically inexhaustible [12]. However, as noted earlier, polymer photovoltaic cells do suffer from several limitations. They absorb radiation from a very limited part of the solar spectrum and are particularly unstable when exposed to oxygen, moisture and UV radiation. These
issues have been, and can be, addressed by studying and modifying the morphology of the active region of a cell at the nanoscale, and by other chemical means. Over the development of polymer solar cells there have been a few revolutionary breakthroughs that have jumped device efficiencies in single-heterojunction polymer solar cells from around 1% to over 8% [6]. In devices with tandem arrangements, a record high of 10.6% has been achieved [7]. Current projections suggest that, if progress continues at the same rate, the theoretically predicted efficiency of 15% will be reached very soon [5]. As energy demand and the pressure on existing technologies continue to grow, one senses that more such revolutionary, rather than merely evolutionary, advances are needed.
References
1) Tsokos, K.A. 'Energy degradation and power generation'. Physics for the IB Diploma, Fifth edition, pp. 423-424. Cambridge: Cambridge University Press, 2008.
2) Tang, C.W. Two-layer organic photovoltaic cell. Applied Physics Letters 1986, 48, 183.
3) Sariciftci, N.S., Smilowitz, L., Heeger, A.J., Wudl, F. Photoinduced Electron Transfer from a Conducting Polymer to Buckminsterfullerene. Science 1992, 258, 1474.
4) Morita, S., Zakhidov, A.A., Yoshino, K. Doping effect of buckminsterfullerene in conducting polymer: Change of absorption spectrum and quenching of luminescence. Solid State Communications 1992, 82, 249.
5) Dang, M.T., Hirsch, L. & Wantz, G. P3HT:PCBM, Best Seller in Polymer Photovoltaic Research. Advanced Materials 2011, 23, 3597-3602.
6) Chen, R. High-performance polymers for flexible OPV raise cell efficiencies. MRS Bulletin 2011, 36, 955.
7) You, J., Dou, L., Yoshimura, K., Kato, T., Ohya, K., Moriarty, T., Emery, K., Chen, C.C., Gao, J., Li, G. & Yang, Y. A polymer tandem solar cell with 10.6% power conversion efficiency. Nature Communications 2013, 4, 1446.
8) Hill, I.G., Kahn, A., Soos, Z.G., Pascal, Jr, R.A. Charge-separation energy in films of π-conjugated organic molecules. Chemical Physics Letters 2000, 327, 181.
9) Morteani, A.C., Sreearunothai, P., Herz, L.M., Friend, R.H., Silva, C. Exciton Regeneration at Polymeric Semiconductor Heterojunctions. Physical Review Letters 2004, 92, 247402.
10) Mihailetchi, V.D., Koster, L.J.A., Hummelen, J.C. and Blom, P.W.M. Photocurrent Generation in Polymer-Fullerene Bulk Heterojunctions. Physical Review Letters 2004, 93, 216601.
11) Mayer, A., Scully, S., Hardin, B., Rowell, M., and McGehee, M. Polymer-based solar cells. Materials Today 2007, 10, 28.
12) Singh, R.P., Kushwaha, O.S. Polymer Solar Cells: An Overview. Macromolecular Symposia 2013, 327, 128.
13) Laboratory for Complex Materials Research, Materials Sciences and Engineering, University of Michigan. <http://www.greengroup.engin.umich.edu/electronics.html>. [Accessed 19th September 2013]
14) Bach, U., Lupo, D., Comte, P., Moser, J.E., Weissortel, F., Salbeck, J., Spreitzer, H., and Grätzel, M. Solid-state dye-sensitized mesoporous TiO2 solar cells with high photon-to-electron conversion efficiencies. Nature 1998, 395, 583.
15) O'Regan, B., and Grätzel, M. A low-cost, high-efficiency solar cell based on dye-sensitized colloidal TiO2 films. Nature 1991, 353, 737.
16) Mihailetchi, V.D., Wildeman, J., Blom, P.W.M. Space-Charge Limited Photocurrent. Physical Review Letters 2005, 94, 126602.
17) Hiramoto, M., Suezaki, M., and Yokoyama, M. Chemistry Letters 1990, 3, 327.
Author: Utkarsh Jain
I'm currently an undergraduate at the University of Manchester studying theoretical physics. After my degree, I hope to study towards a PhD. I like listening to a variety of music and can waste a little too much time obsessing over Batman.
YOUNG SCIENTISTS The Young Scientists Journal is the world's only peer-reviewed science journal run by young scientists for young scientists. The journal publishes research papers and review articles on Science, Technology, Engineering and Mathematics (STEM), both online and in print, written and edited exclusively by 12-20 year olds across the globe. Sixteen issues have been published since it was founded in 2006 at The King's School, Canterbury, by Christina Astin and Ghazwan Butrous.
Chief Editor
Editorial Team
Name: Ed Vinson, UK Email: editor@ysjournal.com Term: June 2014- June 2015
Team Leader: Claire Nicholson, UK
The Chief Editor oversees the whole journal and coordinates the efforts of the team leaders.
Design & Marketing Team
The Design & Marketing Team is responsible for the design & promotion of the journal across media. Team Leader: Michael Hofmann, UK, Invicton Ltd Team Members: Holly O’Connor
Technical
The Technical Team manages the website and its content. Team Leader: James Molony Team Members: Jea Seong Yoon
Database & Schools Liaison Team
The Database and Schools Liaison team is responsible for the management of the journal database and the communication between schools and the journal. Team Leader: Chimdi Ota
The Editorial Team is responsible for overseeing the editing and publishing of articles.
Team Members: Abbie Wilson Ailis Dooner Areg Nzsdejan George Tall Gilbert Chng Jenita Jona James Konrad Suchodolski Lisa-Marie Chadderton Mischa Nijland Prishita Maheshwari-Aplin Rahul Krishnaswamy Sophia Aldwinckle Corrie Crothers Cathy Li Hannah Glover Sophie Stephens Maddie Mills Dena Mohavedyan Sanjay Kubsad Rachel Hyde Stephanie Leung Chris Boulo John Clark Zaid Limbada Mustafa Majeed Mackenzi Giri Nandakumar Tom Noneley Chris Pantelides George Rudd Sam Tilley Haseeb Wazir Lauren Smith
International Advisory Board The IAB is a team of experts who advise the editors of the journal. Team Leader: Christina Astin, UK Email: cma@kings-school.co.uk
The King's School
Team Members: Ghazwan Butrous, UK Anna Grigoryan, USA/Armenia Thijs Kouwenhoven, China Don Eliseo Lucero-Prisno III, UK Paul Soderberg, USA Lee Riley, USA Corky Valenti, USA Vince Bennett, USA Mike Bennett, USA Tony Grady, USA Ian Yorston, UK Charlie Barclay, UK Joanne Manaster, USA Alom Shaha, UK Armen Soghoyan, Armenia Mark Orders, UK Linda Crouch, UK John Boswell, USA Sam Morris, UK Debbie Nsefik, UK Baroness Susan Greenfield, UK Professor Clive Coen, UK Sir Harry Kroto, FRS, UK/USA Annette Smith, UK Esther Marin, Spain Malcolm Morgan, UK Steven Simpson, UK
Canterbury
Partners
We are establishing a number of Young Scientists Journal hubs based at schools around the world where groups of students can work together on the journal. Contact Christina Astin if you would like to find out more about getting your school involved.
Young Advisory Board Steven Chambers, UK Fiona Jenkinson, UK Tobias Nørbo, Denmark Arjen Dijksman, France Lorna Quandt, USA Jonathan Rogers, UK Lara Compston-Garnett, UK Otana Jakpor, USA Pamela Barraza Flores, Mexico Cleodie Swire, UK Muna Oli, USA editor@ysjournal.com
Supported by
All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the editor. The Young Scientists Journal and/or its publisher cannot be held responsible for errors or for any consequences arising from the use of the information contained in this journal. The appearance of advertising or product information in the various sections of the journal does not constitute an endorsement or approval by the journal and/or its publisher of the quality or value of the said product or of claims made for it by its manufacturer. The journal is printed on acid-free paper.
/YSJournal
@YSJournal
www.ysjournal.com