Leonardo Times
Journal of the Society of Aerospace Engineering Students 'Leonardo da Vinci'
Year 19, Number 2, April 2015

On the cover: Putting the Wind in Wind Turbines (page 26) – Metropolis: Urban Airspace Design, ATM for extremely high traffic densities – LOUPE: Observing the Earth as an Exoplanet




Contents

03 Contents
04 Editorial
05 From Leonardo's desk
06 Current affairs
08 Helicopter torque sensing & power assurance
10 Hypersonic trajectory optimization
12 RVD – Making Falcon 9 re-usable
14 Automotive composites for crashworthiness
16 Which rotor concept will it be?
20 LVD – Aftermath of disaster
22 Internship – 'Deerns' in Abu Dhabi
24 'We vlogen met een knal' – A pioneering leap of faith
26 Putting the Wind in Wind Turbines
28 How well does it heal?
30 Metropolis: Urban Airspace Design
34 A Deployable Earth Observation Telescope
36 Student Project – Lambach HL II
38 Grid-stiffened composite structures
42 Plasma Enhanced Aerodynamics
44 It's the Energy, stupid!
46 LOUPE: Observing the Earth as an exoplanet
48 The Drone "Threat"
50 Column – Women in Aerospace

Highlights

08 Helicopter torque sensing & power assurance – Verification of the torque sensor's signal of a helicopter's turboshaft engine. Power assurance and torque sensing on helicopters are important to verify that the engine's condition is sufficient and that limits will not be exceeded during flight. Using different verification methods, deviations in the torque measurement can be calculated and alternative ways of power assurance can be applied.

12 Making Falcon 9 re-usable – SpaceX's reusable launch system. On January 10, SpaceX managed to land a rocket on a barge in the ocean, although it was a crash landing. How can a new reusable launch system avoid the pitfalls of the Space Shuttle? Elon Musk thinks reliability, simplicity and rapid development are crucial.

26 Putting the wind in wind turbines – Bridging the knowledge gap. The need for further cost reduction in offshore wind energy calls for a better understanding of the wind climate and its effect on large rotors. Research at DUWIND helps to close the knowledge gap that exists between engineering and meteorology.

30 Metropolis: Urban Airspace Design – ATM for extremely high traffic densities. This project investigates radically new airspace design concepts for ATM scenarios in which unmanned cargo drones and personal air vehicles become commonplace. If we want to maximize airspace capacity with airborne separation assurance, do we need a complex airspace structure, or would it be better to use free routing?

Advertisement index

02 KLM
19 DAG
32 Minor
33 NS
41 Akzo Nobel
51 NLR
52 Fokker


Editor's letter

Dear reader,

'The Eagle has crash-landed', remarked a senator after a small drone was found on the lawn of the White House on January 26. If anything, this was precisely the sign the FAA was looking for in order to come up with guidelines for Unmanned Aerial Vehicles. The incident caused a peculiar degree of alarm in the media. Thankfully, the President was not in the White House at the time, but in New Delhi, India. Ironically, the Secret Service had unsuccessfully tried to negotiate a no-fly zone with the Indian Air Force for his security as he attended the annual Republic Day parade from an outdoor platform. In the US, knee-jerk reactions from critics ranged from simply banning the "wretched" things to building an Israel-inspired Iron Dome. From a technological standpoint, on the other hand, UAVs are here to stay, and the sooner their use is rationally regulated, the better. Now consider this: the Chinese-made DJI Phantom Aerial UAV Drone Quadcopter is sold on Amazon.com starting at $479. With such a low entry barrier for enthusiasts and the general public, the security establishment is bound to be worried. A jihadi kamikaze drone squadron with explosives strapped on as payload would provide a cheap, ubiquitous and anonymous way for a fanatic evildoer and his brethren to deliver terror. It may sound rather contrived and April-fool-esque, but such scenarios are being discussed by the Department of Homeland Security. The concerns thrown up by unregulated, easy access to drones could be addressed by

carefully drafted regulation that is backed by technology. Most drones rely on a radio link to a ground controller. With accurate enough sensors, one can identify and pinpoint drones and jam them. For existing and upcoming hobby drones, updating the firmware to use the GPS location for restricting take-off and entry into sensitive no-fly zones is another possibility. However, in crowded areas, attempting to search for, identify and deactivate drones comes at a high price and with technological challenges: attempting to jam drones there will inadvertently cause interference with cell phones, wireless Internet routers and other equipment. French authorities are having a hard time coping with exactly such a situation. After the Paris terror attacks and the resulting tightened security on the ground, the authorities are dealing with unexplained drone sightings over landmarks such as the Eiffel Tower, the US Embassy, the Place de la Concorde and the Montparnasse Tower. It is not yet known whether the flights were coordinated or who the people behind them are. Meanwhile, despite being quite popular with the security establishment for surveillance purposes, the military's advanced drones have for years suffered from unencrypted data links: the Taliban were able to intercept the Predator's surveillance videos, and Iran used GPS spoofing to capture a secretive US drone. Security establishments really need to encourage better technologies and seriously address the long-awaited regulation issues. It is only when better UAVs and the accompanying infrastructure are encouraged and put in place that the true benefits of UAVs can be reaped.

Sushant Gupta

Colophon
Year 19, Number 2, April 2015. The 'Leonardo Times' is issued by the Society for Aerospace Engineering students, the VSV 'Leonardo da Vinci', at the Delft University of Technology. The magazine is circulated four times a year with a circulation of 5,500 copies.
EDITOR-IN-CHIEF: Sushant Gupta
FINAL EDITOR: Raphael Klein
EDITORIAL STAFF: Anita Mohil, Apeksha Amarnath, Bob Roos, Haider Hussain, Joris Stolwijk, Manfred Josefsson, Martina Stavreva, Victor Gutgesell, Vishal Balakumar, Thom van Ostaijen.
THE FOLLOWING PEOPLE CONTRIBUTED: Sjoerd van Rooijen, Bert van den Bos, Jeffrey van Oostrom, Lukas Schreiber, Lourens Blok, Ricardo Pereira, Meander Leukfeldt, Niels Waars, Floris Haasnoot, René Bos, Maarten Holtslag, Antonio M. Grande, Santiago J. Garcia, Sybrand van der Zwaag, Prof.dr.ir. Jacco Hoekstra, Dennis Dolkens, Saish Sridharan, Dries Decloedt, Dan Wang, Rik Geuns, René Alderliesten, Lucas Amaral, John-Alan Pascoe, Liaojun Yao and Thijs Arts.
COVER IMAGE: NV NOM
DESIGN, LAYOUT: dafdesign, Amsterdam
PRINT: Quantes Grafimedia, Rijswijk
Articles sent in for publishing become property of the 'Leonardo Times'. No part of this publication may be reproduced by any means without written permission of the publisher. The 'Leonardo Times' disclaims all responsibility to return articles and pictures. Articles endorsed by name are not necessarily endorsed editorially. By sending in an article and/or photograph, the author assures being the owner of the copyright. The 'Leonardo Times' disclaims all responsibility. The 'Leonardo Times' is distributed among all students, alumni and employees of the Aerospace Engineering faculty.
VSV 'Leonardo da Vinci', Kluyverweg 1, 2629 HS Delft
Phone: 015-278 32 22, Email: VSV@tudelft.nl
ISSN (PRINT): 2352-7021
ISSN (ONLINE): 2352-703X
For more information, visit www.vsv.tudelft.nl, where the 'Leonardo Times' can also be viewed digitally.
Remarks, questions and/or suggestions can be emailed to: LeoTimes-VSV@student.tudelft.nl

Secret Service searches the grounds of the White House



From Leonardo’s Desk

Dear reader,

While working on the second Leonardo Times of the year, two realisations spring to mind: one, we have an incredible half year full of exciting events behind us, and two, we still have an entire half year of great events ahead of us, including everything in the lustrum month of May and the spectacular Airshow in August. For now, however, let's focus on what has happened in the last few months, in order to revive all the good memories. In the final week of December, the activities committee organised the annual, notorious Belgian Beer Drink. It seemed that the installation of our new Members of Honour, which had happened only one week before, was the right motivation for a thousand students to come and celebrate with us at society Phoenix. A few weeks later, almost immediately after the Christmas holidays and the exams that followed, the VSV 'Leonardo da Vinci' left for Risoul, France with 150 members. It's fantastic to witness a small part of France being taken over by a group of aerospace students, playing the VSV songs in bars and skiing down the slopes with the entire group at once. Unfortunately, this week of top-level sport and a little relaxation had to come to an end as well; large projects were awaiting us back in Delft, and it turns out our society won't run itself. One of these large projects was De Delftse Bedrijvendagen, the most popular

technical career fair for jobs, internships and graduation projects in the Benelux. This year, the presentation days were not just 'large', but larger than ever, with almost 3,000 students and 125 companies attending. Needless to say, this was an exceptional logistical challenge, considering how the numerous resources of five participating study societies had to be combined efficiently into two days of excellence. A special thank-you goes to the DDB Board, BIT and all contributing members of the participating societies for making these flawless days possible.

Apart from these huge events, we cannot forget to broaden our aerospace wisdom every now and again with challenging lectures and excursions. These last months we have had the honour of inviting speakers to discuss current affairs on space debris, the structural integrity of the Airbus A350, the weaponization of space and the flight testing of the F-35 Lightning II. Next to that, we looked beyond the walls of our faculty and visited companies such as Airborne, Air Traffic Control the Netherlands, SRON and KLM Engineering & Maintenance.

Another essential event was of course this year’s aviation symposium. Combining our three pillars – education, social interaction and career orientation – into one day has in my opinion resulted in one of our greatest achievements this year. Firstly, the incredible speaker line-up allows us to observe the involvement of the VSV ‘Leonardo da Vinci’ in our much beloved aviation sector, and secondly, it shows the dedication and interest of the industry in our society, and with that all 2600 Aerospace students we represent. The striking aspect of the day’s line-up is its versatility. Never before have we had airports, airlines, manufacturers and air traffic control represented on one single day. An interesting combination that has most definitely resulted in in-depth discussions during the lunch and the concluding network drink. Congratulations go to the Aviation Department on this evident return on effort and to writing yet another piece of VSV-history.

A series of lectures that was revitalised last year is the CEO Interview series. We had the honour of adding two names to a list of renowned speakers from the industry, namely Arnaud de Jong, CEO of Airbus Defence & Space Netherlands, and Hans Büthker, CEO of Fokker Technologies. While their thoughts on their companies and the future of the sector were debated, we also focused on their personal backgrounds and the paths that they followed to get where they are today. Today, for our board, is actually quite comparable to 'Vlijtig Liesje', the brand-new Boeing 787 flight simulator: what we have right now is very extraordinary, but the best is yet to come!

With winged regards,

Sjoerd van Rooijen
President of the 70th board of the VSV 'Leonardo da Vinci'



Current Affairs

Solar Impulse 2: New record
March 9, 2015, Abu Dhabi, UAE

Solar Impulse 2 set a new world record for the longest flight by an aircraft powered solely by the sun. For the first leg of its trip around the world, Solar Impulse 2 took off in Abu Dhabi and landed twelve hours later in Muscat, Oman. With this, the Swiss project has achieved its very first objective: it has proven that the concept works. Since the aircraft is extremely slow, the world trip will include various stops. The biggest challenge for the aircraft and its pilots (Bertrand Piccard and André Borschberg) will be the crossing of the Pacific Ocean, which is expected to be a five-day non-stop flight. During the stops the team will be able to maintain the aircraft, rest and promote clean technologies such as Solar Impulse 2. (V.G.) Solarimpulse.com

IXV Test: Success!
February 11, 2015, Kourou, French Guiana

The Americans did it with the Space Shuttle; now Europe has done it with the IXV. On February 11, 2015 a Vega rocket brought the first IXV (Intermediate eXperimental Vehicle) to space, from where it deorbited and re-entered the atmosphere. Starting at hypersonic speed, the IXV slowed down to supersonic speed, after which it glided through the atmosphere, controlled by a pair of flaperon-like tail surfaces. Later it deployed parachutes to slow down further for a safe landing in the Pacific Ocean. The flown IXV module is currently being transferred to ESTEC in the Netherlands to be analyzed. This first success may be only the start of a new generation of space shuttles. (V.G.) ESA

FAA Small Drone Draft Regulation
February 15, 2015

The FAA unveiled its highly anticipated proposed regulation for the commercial use of small drones weighing less than 55 pounds on February 15, nearly four years later than expected. Under the FAA's proposed rule, operators would be required to fly drones within their unaided line of sight, below a maximum altitude of 500 feet above ground level and only during daylight hours. Operators would need a "newly created FAA unmanned aircraft operator's permit", which they would earn "by passing a knowledge test focusing on the rules of the air". They must be at least 17 years old, would renew the operator's certificate every two years, and there would be no separate requirement for a medical certificate. The FAA would not require that small drones be certified for airworthiness, only that they be maintained in a safe condition for flight. (H.H.) AIN / Matt Thurber / BBC News

ATV-5: The Last of its Kind
February 15, 2015, ISS

A trail of fire in the night sky marked the end of ESA's ATV program. On February 15, 2015 the last ATV, ATV-5, undocked from the International Space Station and started its deorbit. Loaded with waste from the ISS, it made its way back into the atmosphere, where it burned up over the Pacific Ocean. This was the last ATV built by ESA. It marks the end of a space program that was conceived as early as 1987 to serve an international space station, at a time when the USSR's Mir station was still in service. Five successful ATV (originally 'Ariane Transfer Vehicle') missions have made their way to space since the first launch in 2008. The program's legacy remains, however, as its experience and technology will be used for future space missions such as NASA's Orion capsules. (V.G.) ESA




What is wrong with flying these days?
March 4, Nepal; March 10, Argentina; March 5, LaGuardia

With three major incidents, aviation seems more dangerous than it has ever been. On March 4, Turkish Airlines flight TK726 performed an emergency evacuation after it overshot the runway in its attempt to land at Kathmandu, Nepal. On March 10 in Argentina, two helicopters crashed while filming for a TV show; ten people died. On March 5, Delta Air Lines flight 1086, a McDonnell Douglas MD-88, crashed while landing at LaGuardia. Three people were hospitalized; thankfully there were no casualties. These incidents will further increase some people's fear of flying, even though flying remains the safest means of travel. (V.G.) Aviation / Reuters

Deep Space Climate Observatory
February 11, 2015

With a Falcon 9, SpaceX has brought the Deep Space Climate Observatory (DSCOVR) into space. The probe is a cooperation between NASA, NOAA and the United States Air Force to monitor space weather. It will position itself between the Earth and the Sun at the first Lagrange point (L1), a spot that is interesting for scientists because the gravitational pulls of the Earth and the Sun balance there. Once it has reached its destination, DSCOVR will observe the Sun. The mission will act as a buoy, warning Earth about solar storms that could potentially cause damage. Even though the launch was executed by a Falcon 9, SpaceX decided not to make another re-entry test out of it. (V.G.) SpaceX / NASA

Iranian Space Mission
February 17, 2015, Tehran

On Tuesday, February 17, 2015 the Iranian Space Agency (ISA) displayed a mock-up of its first manned space capsule. When Iran first expressed its intention to send humans into space in 1990, observers looked at it with suspicion, claiming that Iran would not be able to carry out such a mission independently. Now, however, it is clear that they intend to make it happen. For the coming Iranian year (starting March 2015) the ISA has planned a first launch of the capsule, and as announced, it wants to keep its goal of successfully sending an Iranian into space in 2016. This would mark the fourth independent manned spaceflight by a nation, following Russia, the USA and China. (V.G.) Tehran Times / ISA

BAE Systems / Northrop Grumman

New Design for T-X Trainer

Northrop Grumman will propose a clean-sheet design for the U.S. Air Force's T-X advanced trainer replacement program, departing from its partnership with BAE Systems to offer the latter's Hawk jet trainer. It is building the "purpose-designed" aircraft with its wholly owned subsidiary Scaled Composites. The T-X will replace the Air Education and Training Command's (AETC) Northrop T-38C Talon twin-engine jet trainer, first introduced in 1961 and now averaging 45 years of age across the fleet. In its Fiscal Year 2016 budget submission, the Air Force outlines plans to spend $1 billion through 2021 for what is expected to be a requirement for 350 aircraft plus a ground training system. The Pentagon has approved proceeding to Milestone B engineering and manufacturing development, according to budget documents. (H.H.) AIN online



Helicopter torque sensing & power assurance
Verification of the torque sensor's signal of a helicopter's turboshaft engine

Helicopter operations are restricted by the condition and available power of the engines. Therefore, power assurance tests are applied and the torque sensor's signal is continuously monitored during flight. However, experience teaches us that torque sensors can be inaccurate. This article proposes different ways to verify the torque sensor's signal.

TEXT Ir. Bert van den Bos, Maintenance Engineer at the Royal Netherlands Air Force

The availability and operational readiness of a helicopter fleet depend on different factors, such as the throughput time of maintenance events and the condition of the helicopters. The condition of a helicopter is, among other things, determined by the condition of the engines and consequently by the available power. In helicopters, the power is usually delivered by one or two turboshaft engines; this power is required for driving the rotors via the transmissions. For a tandem-rotor helicopter, an example of such a configuration is shown in Figure 1. The required rotor power depends on factors such as the helicopter gross weight, the ambient conditions and the manoeuvres to be executed.

Power assurance tests
To determine the condition of an engine, the pilots usually perform a maximum performance check, such as a Power Assurance Test (PAT) or a Health Indicator Test (HIT). During this check, the pilot determines safe margins at maximum power, because an engine is limited by factors such as the maximum allowed power turbine inlet temperature (PTIT) or the maximum allowed gas generator speed. The 'safe' PTIT margin at maximum engine power, for example, is a measure of the engine's condition and of the time available until the next required engine wash or engine overhaul.

Torque sensing
During the test mentioned above, the pilot uses the measured engine torque or power as the power setting parameter. The torque value is supplied by the torque sensing system and corresponds to the delivered engine power. The pilot also monitors the measured engine torque during flight to prevent the helicopter from exceeding maximum torque limits, which could damage the helicopter's drivetrain. Accurate torque sensing is therefore important for several reasons. Different types of torque sensing systems are used in helicopters; the two most commonly used systems are:
- Shaft Twist Torque Measuring Systems;
- Reluctance Torque Measuring Systems.
The latter type is based on the change in the magnetic properties of a rotating ferromagnetic shaft due to torsional stress. This torque sensing system is, for example, installed in the T53 and T55

engine family, which is used in several helicopters. This kind of torque sensing will not be described in further detail here; it is, however, error-prone and very sensitive to environmental influences. Therefore, verification of the torque sensor's signal is highly recommended. The torque sensor's signal can be verified in several ways; this article focuses on two methods, namely:
- Gas Path Analysis (GPA) based engine simulations;
- aerodynamic power-required analysis.

Gas Path Analysis
Gas Path Analysis (GPA) is a method to diagnose the condition of the components of a gas turbine using performance measurements (Li, 2002). It can be used for several applications related to engine diagnostics. GPA-based methods rely on the thermodynamic relations and conservation laws applicable to gas turbines and aero-engines. The best way to deal with engine-related problems and predictions is to use a GPA-based software program to simulate the gas turbine engine. The National Aerospace Laboratory (NLR)



Bert van den Bos

Honeywell

has developed a program called the Gas turbine Simulation Program (GSP), a non-linear, object-oriented, component-based modelling environment (Visser and Broomhead, 2000). In Figure 2, an example of a two-shaft turboshaft engine in GSP is shown, consisting of a gas generator section and a power turbine section.

To calculate the engine torque, accurate flight data is required to serve as input for the simulation models. Fortunately, nowadays a lot of data is measured during flight for every helicopter and stored on a Flight Data Recorder (FDR), among others for Health and Usage Monitoring System (HUMS) purposes. The most accurate results are obtained if steady-state flight segments are selected from the FDR data (e.g. Straight and Level (S&L) flight). This data supplies the ambient parameters and a power setting parameter (e.g. the gas generator speed (N1) or the fuel flow), which serve as input to the GSP simulation models. In this way, output power or torque values can be obtained and compared with the measured torque values to calculate the torque 'error'.

A matching procedure is required to match the performance of the simulation model to the actual engine performance. For this procedure, use can be made of test-cell data from an engine acceptance test, which is usually executed after engine overhauls, after major engine repairs or before putting a new engine into service. The performance of every individual engine is unique, because of engine-to-engine dissimilarities due to design imperfections allowed by manufacturing and assembly tolerances (Stamatis, Mathioudakis and Papailiou, 1990). Furthermore, deterioration effects cause differences in engine-to-engine performance. The matching procedure consists of several steps, during which the engine model is tuned for both design and off-design conditions, and a scaling procedure is applied to the component maps of the engine. A characteristic component map of a compressor or turbine is an accurate representation of the relationships between its different performance parameters. Because the Original Equipment Manufacturer (OEM) often does not provide component maps, standard available NLR component maps are used and tuned to the measured test-cell data. If estimated deterioration and installation effects are included, engine performance corresponding to actual in-flight performance can be simulated and used to verify the torque sensor's signal.


Figure 1. Schematic overview of the power balance, which represents the required and delivered power, including accessory power demands and drivetrain losses.

Figure 2. GSP engine model for a helicopter's two-shaft turboshaft engine.
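As an illustrative sketch of this comparison step (not the actual RNLAF/NLR tooling: the signal names, the steady-state criterion and the placeholder engine model below are all assumptions), the torque 'error' per steady-state FDR segment could be computed along these lines:

```python
import statistics

def is_steady_state(segment, tol=0.02):
    """Crude steady-state check: the relative variation of airspeed and
    gas generator speed within the segment stays below a tolerance."""
    def rel_spread(values):
        mean = statistics.fmean(values)
        return (max(values) - min(values)) / mean if mean else float("inf")
    return rel_spread(segment["airspeed"]) < tol and rel_spread(segment["n1"]) < tol

def simulated_torque(n1, oat_c, alt_m):
    """Placeholder for the GSP-style engine model: in reality this would run
    the matched simulation with N1 and the ambient conditions as inputs
    and return the predicted output torque [Nm]."""
    return 2800.0 * (n1 / 100.0) ** 2  # purely illustrative relation

def torque_errors(fdr_segments):
    """Relative torque 'error' per steady-state segment:
    (measured - simulated) / simulated."""
    errors = []
    for seg in fdr_segments:
        if not is_steady_state(seg):
            continue  # only S&L-like segments give trustworthy comparisons
        n1 = statistics.fmean(seg["n1"])
        q_meas = statistics.fmean(seg["torque"])
        q_sim = simulated_torque(n1, statistics.fmean(seg["oat"]),
                                 statistics.fmean(seg["alt"]))
        errors.append((q_meas - q_sim) / q_sim)
    return errors
```

A trend of these errors against an ambient or engine parameter would then reveal whether the sensor deviation is systematic.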

Power analysis
A second way to determine the engine torque is by predicting the required rotor power. If the helicopter characteristics are known, the required rotor power can be calculated from aerodynamic relationships. Different methods can be used, varying from simple methods based on the actuator disk theory or the blade element theory to more advanced helicopter simulation models. Again, FDR data of steady-state conditions serve as input for the calculations. If accessory power demands and drivetrain losses are included, the power or torque delivered per engine can easily be calculated from the total required rotor power.

Torque errors
So the deviation of the engine torque sensor's signal from the actual engine torque can be determined by predicting the engine torque using simulation models or using power-required analyses. A trend analysis can be performed to detect a possible relationship between the torque 'error' and one of the ambient or engine parameters. If such a trend is found, the pilot can compensate the measured torque values for this error. Since both estimation methods have several limitations, the obtained torque values have only limited accuracy. Therefore, uncertainty margins should be applied to the obtained results.

Solutions
Before flight, the pilots perform a power-required prediction to determine the maximum payload they can deliver at their destination, depending on the ambient conditions. By including a compensation for the maximum expected torque error, the pilots can ensure that payloads can be delivered without exceeding torque limits. If the torque sensing system is suspected to be inaccurate, the pilots could use alternative methods to perform the power assurance check that do not require the torque sensor's signal, but use, for example, the flight speed that corresponds to the required torque instead.

Conclusions and future expectations
Helicopter torque sensing systems can be error-prone and inaccurate. Therefore, it is recommended to verify the torque sensor's signal, for example using Gas Path Analysis based methods or a power-required analysis. In addition to torque signal verification, both methods can be used for many more applications, such as Condition Based Maintenance (CBM) related applications and fault analysis at engine component level, provided that sufficient engine parameters are measured. This will reduce maintenance costs and helicopter downtime, and consequently increase the availability of the engine fleet. With the increasing demand for CBM and HUMS-based applications and maintenance policies, a bright future hopefully awaits us.
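As a minimal sketch of such a power-required estimate (all numbers, the drivetrain efficiency and the rpm value below are illustrative placeholders, not data from this article), actuator disk theory for hover combined with the power balance of Figure 1 might look as follows:

```python
import math

def induced_power_hover(weight_n, rotor_radius_m, rho=1.225):
    """Ideal induced power in hover from actuator disk (momentum) theory:
    P_i = W * sqrt(W / (2 * rho * A)), with A the rotor disk area."""
    disk_area = math.pi * rotor_radius_m ** 2
    return weight_n * math.sqrt(weight_n / (2.0 * rho * disk_area))

def engine_torque(total_rotor_power_w, accessory_power_w, drivetrain_eff,
                  n_engines, output_shaft_rpm):
    """Torque per engine from the power balance: rotor power increased by
    drivetrain losses plus accessory demand, split over the engines and
    divided by the output shaft's angular speed."""
    shaft_power = (total_rotor_power_w / drivetrain_eff + accessory_power_w) / n_engines
    omega = output_shaft_rpm * 2.0 * math.pi / 60.0  # rad/s
    return shaft_power / omega

# Hypothetical tandem-rotor example: 20 t gross weight shared by two rotors.
p_rotor = 2 * induced_power_hover(weight_n=0.5 * 20000 * 9.81, rotor_radius_m=9.0)
q_per_engine = engine_torque(p_rotor, accessory_power_w=50e3,
                             drivetrain_eff=0.95, n_engines=2,
                             output_shaft_rpm=15000)
```

Comparing such a predicted torque with the sensor reading for matched steady-state conditions yields the same kind of torque 'error' as the GPA-based approach.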

References
[1] Li, Y.G., "Performance-analysis-based gas turbine diagnostics: A review", Proceedings of the Institution of Mechanical Engineers, Part A: Journal of Power and Energy, vol. 216, no. 5, pp. 363–377, 2002.
[2] Visser, W.P.J., and Broomhead, M.J., "GSP: A Generic Object-Oriented Gas Turbine Simulation Environment", National Aerospace Laboratory NLR, Flight Division, Technical Report NLR-TP-2000-267, Amsterdam, The Netherlands, 2000.
[3] Stamatis, A., Mathioudakis, K., and Papailiou, K.D., "Adaptive simulation of gas turbine performance", Journal of Engineering for Gas Turbines and Power, vol. 112, no. 2, pp. 168–175, 1990.



Hypersonic trajectory optimization
From modified Newtonian theory to Navier-Stokes

It is the year 2030 and you are sitting in an 'airplane-like' vehicle. Is it really an airplane? You look out of the window and see the Earth below you, but the view is not what you would expect: you can actually see the curvature of the Earth. You look up and see a display saying "Arriving in Tokyo in thirty minutes". And then you realize that you boarded the vehicle only an hour ago in New York City.

TEXT Ir. Jeffrey (J.) van Oostrom, Aerodynamics and Wind Energy

This is the future that Ronald Reagan envisioned when he held his State of the Union address in 1986 (Reagan, 1986). A new Orient Express was to take off from Dulles airport, accelerate to at least twenty-five times the speed of sound, and land in Tokyo just two hours later. This is what Reagan expected by the end of the nineties; the reality, however, is that this is still not the case. In a previous edition of the Leonardo Times (December 2013), an article about the current state of hypersonic re-entry technology was featured. In this article, the conceptual design of a small experimental re-entry vehicle, designated Hyperion, was introduced (see Figure 1). The first studies of Hyperion trace their origin to the nineties, and the studies are still being conducted as this is written. The goal of Hyperion is, amongst others, to develop the technology needed to make re-entry vehicles reusable, thereby also contributing to realizing Reagan's vision.

Previous work
In a previous study, the optimal re-entry trajectory of Hyperion-2 was derived. The mission of the vehicle is to measure hypersonic boundary-layer transition, a phenomenon in which the laminar boundary layer turns into a turbulent boundary layer. This is an important feature of hypersonic flow to investigate, as it introduces peak heating and increases drag. A constant Mach 10 flight was performed, optimizing for flight time whilst maintaining a large Reynolds number range in

which transition occurs (Re_trans = 1×10⁶). The optimal flight time was found to be 31 seconds, with a Reynolds number sweep of 1.17×10⁷ (Dijkstra, 2013). The aerodynamic database (CL, CD and CM) generated in the work of Dijkstra (2013) is based on modified Newtonian theory. This theory assumes that the flow impinges on an inclined surface, losing its momentum normal to the surface while preserving the momentum tangential to it. An elegant expression for the pressure coefficient can then be obtained, which can be used in panel methods to compute the aerodynamic coefficients. For simple geometries, this theory has proven quite accurate in hypersonic flow, owing to its nature: the shock wave lies close to the body surface and the streamlines are tangential to the surface. Nevertheless, the modified Newtonian theory remains an engineering method based on assumptions. Recent studies have therefore aimed to improve the aerodynamic fidelity compared to Dijkstra (2013) by using a Navier-Stokes based solution. With a new aerodynamic database, the optimal trajectory can be redefined and the effects on the flight mission of Hyperion-2 become clear.

Figure 1. Isosurfaces for Mach 1 (blue) and Mach 10 (red) for freestream Mach 10 and angle of attack of zero degrees (left) and 16 degrees (right).

Stanford University Unstructured (SU2)

The Navier-Stokes solution of the flow field around Hyperion-2 has been calculated using the computational fluid dynamics (CFD) solver Stanford University Unstructured (SU2) (Palacios, 2013). SU2 is an unstructured, open-source, finite-volume CFD solver. The software uses a configuration file, in which all options are specified, together with a mesh file to compute the Navier-Stokes solution, which makes the solver very easy to use. Since the solver is parallelized, multiple cores can be used to reduce computational time. SU2 offers a variety of spatial discretization and time integration schemes, and has multigrid capabilities and adjoint solvers. The latter feature is rather interesting for investigating the sensitivity of the solution with respect to a certain parameter. During the studies, the solver was validated and verified on super- and hypersonic characteristics that may influence the aerodynamic coefficients, such as shock characteristics and pressure distribution. Simple shapes such as flat plates and circular cylinders were used for validation, in order to prevent interaction of various hypersonic flow features; moreover, a wealth of theoretical and experimental data is available for these shapes. During the validation of the solver, it was found that the heat flux computation cannot yet be fully trusted, and it is therefore not included in the database.

Aerodynamic database

For Hyperion-2, the aerodynamic database has been computed using SU2. One hundred and thirty independent computations, spanning Mach numbers from six to sixteen and angles of attack from zero to 45 degrees, have been performed. Only laminar flow is considered, and the chemical effects normally present in hypersonic flow are omitted. This aerodynamic database can be compared to the modified Newtonian database and the differences identified. Figure 2 shows the results for Mach 10, comparing the result of SU2 with the modified Newtonian theory. The lift and drag coefficients are increased by five percent using CFD for angles of attack below ten degrees. For larger angles, the difference increases to fifteen percent for the lift coefficient and ten percent for the drag coefficient. The moment coefficient is fifty percent larger at low angles of attack using CFD compared to the modified Newtonian theory; at larger angles of attack, the difference is 28 percent.
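The modified Newtonian pressure law described above is compact enough to sketch. The helper names below are illustrative, not taken from the Hyperion codebase; the maximum pressure coefficient follows from the Rayleigh pitot formula for the stagnation point behind a normal shock.

```python
import math

def cp_max(mach, gamma=1.4):
    """Stagnation-point pressure coefficient behind a normal shock (Rayleigh pitot formula)."""
    p02_pinf = (((gamma + 1)**2 * mach**2 / (4*gamma*mach**2 - 2*(gamma - 1)))**(gamma/(gamma - 1))
                * (1 - gamma + 2*gamma*mach**2) / (gamma + 1))
    return 2.0 / (gamma * mach**2) * (p02_pinf - 1.0)

def cp_modified_newtonian(theta, mach):
    """Panel pressure coefficient: Cp = Cp_max * sin^2(theta); shadowed panels get Cp = 0."""
    return cp_max(mach) * math.sin(theta)**2 if theta > 0.0 else 0.0
```

Summing Cp over the panels of a surface mesh, weighted by panel area and orientation, then yields the CL, CD and CM entries of a Newtonian database.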
This very large difference in moment coefficient is due to the completely different pressure distribution computed by SU2, and it demands the use of a fully trimmed trajectory, since only then does the effect of the moment coefficient on the trajectory become clear.

Trajectory optimization

The trajectory optimization software (Dijkstra, 2013) is based on a three-degrees-of-freedom flight-mechanical model, which uses angle-of-attack control only. The atmospheric model used is the 1976 United States Standard Atmosphere, and the optimization algorithm is based on Differential Evolution. The entry conditions of the vehicle are the burnout conditions of the Brazilian VS-40 launcher. The optimal trajectory takes four constraints into account: a controllability constraint to ensure flap effectiveness, a stagnation-point heat flux constraint to prevent excessive thermal loads, a flap deflection constraint to prevent shock wave boundary layer interaction, and a pitch trim constraint to ensure fully trimmed flight.

Figure 2. Comparison between modified Newtonian database (Dijkstra) and Navier-Stokes database (SU2) for Mach 10

Using either of the aerodynamic databases results in a similar optimal trajectory. The difference in flight time is negligible, and an optimal flight time of about 31s is achievable. The associated Reynolds sweep is 1.20×10⁷, an increase of three percent compared to previous work. The controllability constraint cannot be satisfied in this trajectory, resulting in an ineffective upper flap and an over-effective lower flap. The position of the center of mass for trimmed flight is shifted three percent towards the nose of the vehicle using the database generated by SU2. It must be noted that when the controllability constraint is taken into account, there is a large forward shift in center-of-mass position (almost ten percent). In addition, the Reynolds sweep and flight time decrease drastically (by close to ninety and sixty percent, respectively). The flap deflections are limited by the controllability constraint, such that the moment coefficient cannot be fully countered by the flaps and the constant Mach 10 flight cannot be sustained long enough to cover transition.

Future research

From the results obtained in the thesis, one might wonder whether using the modified Newtonian theory is justified in the conceptual design of a hypersonic re-entry vehicle, given the large difference in moment coefficient. Further research should focus on including chemically reacting flows, which are dominant in the hypersonic flow regime. These effects might cause an even larger difference in coefficients compared to the modified Newtonian method. For now, this study indicates that more research in the field of hypersonic aerodynamics is necessary. Are you interested in the fields of hypersonic aerodynamics, re-entry and computational fluid dynamics, and do you want to improve this research by including more features of hypersonic flows in the CFD computations? You can always contact dr.ir. E. Mooij (Astrodynamics and Space Missions), dr.ir. F.F.J. Schrijer (Aerodynamics and Wind Energy) or ir. K.J. Sudmeijer (Structural Integrity and Composites) for graduate opportunities.
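The Differential Evolution optimizer used for the trajectory search can be sketched generically. This is a textbook DE/rand/1/bin implementation, not the actual tool of Dijkstra (2013), and all parameter values are illustrative defaults.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutate three random members, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    costs = np.array([cost(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct members other than i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            mask = rng.random(len(lo)) < CR
            mask[rng.integers(len(lo))] = True  # ensure at least one gene crosses over
            trial = np.where(mask, mutant, pop[i])
            tc = cost(trial)
            if tc <= costs[i]:  # keep the trial only if it is no worse
                pop[i], costs[i] = trial, tc
    return pop[np.argmin(costs)], costs.min()
```

In a trajectory setting, each population member would encode the control history (e.g. angle-of-attack nodes) and the cost function would penalize flight time and constraint violations.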

References
[1] Reagan, R., "Address Before a Joint Session of Congress on the State of the Union," The American Presidency Project, April 1986.
[2] Mooij, E., Kremer, F., and Sudmeijer, K.J., "Aerodynamic Design of a Low-Cost Re-entry Test Vehicle Using a Taguchi Approach," 9th International Space Planes and Hypersonic Systems and Technologies Conference, AIAA-99-4831, 1999.
[3] Dijkstra, M., Mooij, E., and Sudmeijer, K.J., "Trajectory Optimization to Support the Study of Hypersonic Aerothermodynamic Phenomena," AIAA Atmospheric Flight Mechanics (AFM) Conference, AIAA-2013-4501, August 2013.
[4] Palacios, F., Colonno, M.R., Aranake, A.C., Campos, A., Copeland, S.R., Economon, T.D., Lonkar, A.K., Lukaczyk, T.W., Taylor, T.W.R., and Alonso, J.J., "Stanford University Unstructured (SU2): An Open-Source Integrated Computational Environment for Multi-Physics Simulation and Design," 51st AIAA Aerospace Sciences Meeting, AIAA-2013-0287, 2013.



RVD


Making Falcon 9 Re-usable

SpaceX’s Reusable Launch System development program

On January 10, 2015, SpaceX managed to crash-land a rocket on a barge. SpaceX sets the benchmark for the cost of putting payloads into orbit around Earth, and it wants to bring the cost of space access down even further by making its launch system fully reusable. Elon Musk thinks the price can be decreased a hundredfold in the long term. That would eventually enable humans to settle Mars. TEXT Lukas Schreiber, BSc Student Aerospace Engineering, member Space Department

SpaceX is flying the only orbital rocket that has been developed entirely in the 21st century. With their simple design, they have already brought the price of space access down to just under $5,000/kg for putting a payload into low Earth orbit. Their Reusable Launch System development program aspires to reduce this price even further: the aim is to make the Falcon 9, their workhorse rocket, fully reusable while keeping the design simple. To avoid the high costs associated with the Space Shuttle, SpaceX is keeping the refurbishment required between launches to a minimum to achieve rapid turnaround times. Elon Musk, founder and CEO of Tesla Motors and SpaceX, put it this way: "If one can figure out how to effectively reuse rockets just like airplanes, the cost of access to space will be reduced by as much as a factor of a hundred. A fully reusable vehicle has never been done before. That really is the fundamental breakthrough needed to revolutionize access to space." In the short term, a more modest price reduction is expected: Fortune Magazine states that the launch price of a Falcon 9 could be cut in half by the end of this year (Fortune Magazine, 2015). A significantly lower cost of space access has the potential to fundamentally reshape the space industry, as new players could afford to put satellites into orbit. If the rocket industry continues to innovate at the pace Elon Musk envisions and the price of rockets can really be brought down a hundredfold, this would


pave the way for a manned mission to Mars. On January 10, 2015, SpaceX attempted to land the first stage of the Falcon 9 on a platform floating in the Atlantic Ocean. It reached the target but was not able to perform a soft landing, because the control surface actuators ran out of hydraulic fluid. The following section explains how SpaceX got there and what comes next, while the one after that lays out how simplicity and reliability ensure the success of Falcon 9.

Of Grasshoppers, drone ships and DragonFlys

Before explaining how to make Falcon 9 reusable, the original design of this rocket is described. The task of the first stage is to overcome the high drag of our thick atmosphere. This is accomplished with nine

Merlin engines burning rocket kerosene and liquid oxygen. To keep things simple, the second stage is powered by the same kind of engine, modified slightly for optimum performance in vacuum. On top of the rocket sits the pressurized Dragon capsule or a composite payload fairing that fits a tour bus. Dragon is intended, but not yet certified, as a crew capsule. SpaceX is conducting tests with the goal of making both Falcon 9's first stage and the Dragon capsule reusable. Their engineers are aiming to keep the time required for refurbishment of these as low as possible to achieve rapid turnaround times. Although Elon Musk has stated that SpaceX intends to make the entire rocket reusable, no efforts have yet been made to develop a retrieval mechanism


Figure 1. Landing attempt of Falcon 9 on a floating platform on January 10



for the second stage. With the final goal of rapid re-deployment of Falcon 9 in mind, SpaceX is planning to return the spent first stages directly back to the launch pad. However, before the US government allows a rocket to be directed at American soil, a number of tests are required to demonstrate that this can be done safely. The company has already completed low-altitude testing of the Grasshopper technology demonstrator vehicle. In parallel, high-altitude testing of first stages after their regular missions has been under way since September 2013. The launch profile of current Falcon 9 launches is illustrated in Figure 2. The Grasshopper is a 32m long rocket developed for the sole purpose of demonstrating vertical landing. It was successfully tested during 2012 and 2013; in its eighth and final test, it reached an absolute altitude of 744m before returning to the launch pad. Following these tests, larger-scale tests were performed in 2014 with a Falcon 9 first stage fitted with three Merlin engines, known as F9R Dev1. On this vehicle, Falcon 9's foldable landing legs and grid fins for attitude control were first tested; the former are clearly visible in the cover photo, while the latter are shown in Figure 3. The top altitude reached was 1,000m. This rocket was destroyed when its flight termination system was activated during a test on August 22, 2014. Elon Musk announced that a similar vehicle, known as F9R Dev2, would be tested at altitudes of up to 91km at Spaceport America in New Mexico (NASA, 2014). In the meantime, five post-mission controlled-descent tests have been conducted since September 2013. In these tests, spent first stages are sent on a controlled-descent trajectory to the Atlantic Ocean after completing their commercial mission. In the second test, in April of last year, the first soft landing in the sea was accomplished, including deployment of the landing legs. A simulated landing – zero velocity at zero altitude – was performed on the ocean surface in the fourth flight test. In January of this year, the first landing on a barge floating in the Atlantic, the Autonomous Spaceport Drone Ship, was attempted. The rocket pinpointed the 52m x 91m floating landing platform, an amazing feat in itself, but failed to stabilize horizontally, resulting in a crash landing (Figure 1). At the time of writing of this article, the next landing attempt is planned for February 2015. SpaceX is also planning to start low-altitude powered landing tests of its Dragon capsule, pending approval of the Federal Aviation Administration. The test article is called DragonFly, and it is set to perform up to sixty test flights (FAA, 2014). The knowledge gained from Grasshopper testing is being adapted to the requirements of the Dragon capsule.

Figure 2. Falcon 9 launch profile including first stage recovery

Figure 3. Grid fins for attitude control mounted on Falcon 9's first stage

Simpler, safer, faster, better

How has SpaceX managed to revolutionize rocket technology in merely a decade? The company's success stems from its ability to rethink the fundamental economics of rockets while applying a good deal of engineering creativity. When designing the Falcon 9 for recovery, SpaceX focused on three objectives: simplicity, reliability and incremental development. These three points can be clearly recognized in the design. Instead of adding complex wings as the designers of the Space Shuttle did, they only added landing legs, grid fins, extra fuel and new flight control software to their already working Falcon 9. This decreased the payload capacity by only 30% (BBC, 2014). The focus on reliability is exemplified by the fact that the first stage can brush off two engine shutdowns. The incremental development approach has ensured that the company could bring its product to market early. Through continuous testing, the

product is improved step by step until full reusability is possible. This strategy also avoids late surprises.

Conclusion

SpaceX has learned a lot from the achievements and pitfalls of the Space Shuttle program. The company focuses on reliability, simplicity and rapid development, which have enabled it to conquer a big share of the launch market. By closely integrating economics and engineering, it has already undercut the going price of space access. The promising development of a reusable Falcon 9 has the potential to make space access even cheaper, which would open up business opportunities for new companies. By following the testing campaign, everyone can judge for themselves how close SpaceX is to its goal of a reusable rocket.

References
[1] SpaceX company website: http://www.spacex.com
[2] NASA Spaceflight: http://www.nasaspaceflight.com
[3] FAA, Final Environmental Assessment for Issuing an Experimental Permit to SpaceX for Operation of the DragonFly Vehicle at the McGregor Test Site, McGregor, Texas
[4] BBC: http://www.bbc.com
[5] Fortune Magazine: http://fortune.com

Space Department

The Space Department promotes astronautics among the students and employees of the faculty of Aerospace Engineering at Delft University of Technology by organizing lectures and excursions.



Automotive composites for crashworthiness

Testing and simulation of CFRP sandwiches during impact

Fiber-reinforced composite materials are gaining interest from car manufacturers: compared on a weight basis, they can provide superior energy absorption performance over conventional metallic structures. BMW is currently investigating the use of composite sandwich structures in automotive applications to improve crash safety and to reduce the weight of future vehicles. TEXT Ir. Lourens Blok, Graduate Aerospace Structures and Computational Mechanics

The early goal of (structural) crashworthiness engineers was to avoid vehicle deformation as much as possible, also known as penetration resistance. Automobile structures have since evolved to include crush zones, which absorb impact energy via permanent deformation of the structure. This principle allows tailoring the design such that the maximum deceleration of the passenger compartment is minimized and injuries due to high (de)accelerations are prevented. The role of a crush zone is therefore to absorb the impact energy through controlled vehicle deformation (Jacob et al., 2002). Currently, most vehicle frames consist of thin-walled metal columns, which show predictable plastic deformation in the form of progressive folding. When the specific energy absorption (SEA) performance of composites is compared to that of metals (see Figure 1), the possible benefit of using composite materials in crashworthiness applications becomes clear. In general, achieving such very high SEA values (on the order of 100 kJ/kg and higher) with composite materials requires a near-perfect loading path to obtain optimal crushing of the material. Crash situations, however, are intrinsically unpredictable, which means that although a high SEA can be beneficial, it is also important that the crush zone can handle multiple load cases robustly. Sandwich composites provide a possible solution: a relatively weak and lightweight core is used to stabilize the crushing of carbon fiber reinforced plastic (CFRP) facesheets. This concept differs from thin-walled profiles, which absorb energy through axial crushing or folding. It enables taking advantage of the high SEA capability of composite materials in a larger variety of structures and load cases, and it allows tailoring the energy absorption performance to meet the acceleration requirements on the passenger compartment.

Figure 1. Specific energy absorption during crushing of different materials (Ramakrishna and Hamada, 1998)

Crushing of composites

Typical FRP composite materials fail in a brittle manner, which can be a very efficient energy absorption mechanism (Carruthers, 1998). This brittle failure consists of a large amount of micro-fracturing of the composite material and depends on the complex interactions between the fibers, the matrix and the impactor. This makes prediction of the energy absorption capabilities of these materials difficult, so it is important to identify the possible failure modes. The overall crushing morphology of FRP composites has been extensively characterized over the last decades. Two main failure mechanisms in the crush zone of the progressive end-crushing mode have been identified (Hull, 1991). First, there is splaying of the FRP material (facesheets), which consists of long parallel-to-fiber cracks or delaminations and bending of the resulting lamina bundles. Secondly, there is fragmentation of the facesheet material through fiber fracturing. The crush zone exhibits a combination of these two failure mechanisms, depending on the matrix and fiber properties relative to each other. The fragmentation mechanism tends to absorb more energy, as more fiber fracture occurs; the matrix properties must be high enough to allow for this, otherwise splaying occurs.

Crushing of sandwiches

For sandwich materials, debonding of the facesheets from the core and compression of the core material are additional failure mechanisms. Work performed at the BMW Research and Innovation Centre (FIZ) has focused on testing different sandwich systems for their energy absorption performance. From these experiments, it was found that sandwiches with CFRP facesheets and polymer foam cores had good energy absorption performance. Various sandwich systems were tested with different polymer foam cores and facesheet layups. Sandwich coupons were cut with a 14° taper to promote stable crushing, as shown in Figure 2. The coupons were then dropped with a certain mass from a certain height to control the impact energy. The impact speed was fixed at 8.2 m/s and the mass varied between 20 and 60 kg. During the impact, a high-speed camera recorded the side view to identify the failure mechanism. From the accelerometer, the force-displacement curve was obtained and the absorbed energy could be computed. Based on this large experimental data set, the failure modes of sandwich structures during progressive end-crushing were identified. It was found that, typically, the facesheets debond from the foam core and subsequently bend and break in a repetitive cycle. Two examples of this are shown in Figure 3. The [0/90]12 facesheet bends considerably less than the [0/90]3, but both show the debonding and bending failure modes described above. The ability of the core to stabilize the facesheet and minimize the debonding was found to depend on both the facesheet stiffness and the maximum strain of the core. For sandwiches with low-stiffness facesheets, the type of foam core had a large influence on the resulting energy absorption. A better performance was found for polyvinyl chloride (PVC) cores, which had a higher maximum strain than similar polymethacrylimide (PMI) cores; this allows the PVC core to better stabilize the facesheets during crushing. For higher-stiffness facesheets, the influence of the type of core on crushing performance became smaller, which was attributed to the higher intrinsic stability of the facesheets themselves. Mechanical characterization of the sandwich systems was performed to obtain the principal mechanical properties of the sandwich and the facesheets. This allowed correlating the relevant material properties with various crushing characteristics. The main drivers for energy absorption were found to be the facesheet stiffness and the facesheet compressive strength, which further established that bending and breaking of the facesheets were the main drivers for energy absorption.

Figure 3. Crushing of sandwich with PVC foam core with [0/90]3 facesheet (top) and PMI foam core with [0/90]12 facesheet (bottom)

Modeling of sandwich crushing

An analytical tool was developed to predict the crushing force by defining an initiation phase and a failure phase. These were related to the repetitive cycle during crushing, which included facesheet debonding (initiation) and fracture of the partly debonded facesheet through bending (failure). The initiation was modeled as buckling of the facesheet on an elastic foundation representing the foam core. The failure phase consisted of the post-buckling behavior of the locally debonded facesheet part. The internal moment, slope and displacement at the debonded tip were matched between the free post-buckled facesheet part and the part of the facesheet still supported by the core. Failure of the facesheet was then assessed with a first-ply failure criterion. The crushing force was predicted using a constant ratio between the critical initiation force and the critical propagation force. A correction factor SF was needed to take into account the overall material knock-down factors: the analysis assumes a perfect material, which is not the case in practice, as the actual loading and boundary conditions differ. The dynamic response of the material may also differ due to shock wave propagation through the material. With these assumptions, the crushing force could be predicted to within 20%. The predicted and actual deflections showed good agreement with each other, as shown in Figure 4. This shows that the model is able to capture the bending behavior during crushing, which was found to be an important driver for the energy absorption.

References
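From the accelerometer-derived force-displacement curve, the absorbed energy and the SEA follow by numerical integration. A minimal sketch, with purely illustrative sample numbers (not BMW test data):

```python
import numpy as np

def specific_energy_absorption(force, displacement, crushed_mass):
    """SEA in J/kg: trapezoidal integral of the force-displacement curve per unit crushed mass."""
    force = np.asarray(force, dtype=float)
    displacement = np.asarray(displacement, dtype=float)
    energy = float(np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(displacement)))  # J
    return energy / crushed_mass

# Illustrative numbers only: a flat 50 kN crushing plateau over 60 mm, 30 g of crushed material
d = np.linspace(0.0, 0.060, 100)               # displacement, m
f = np.full_like(d, 50e3)                      # force, N
sea = specific_energy_absorption(f, d, 0.030)  # on the ~100 kJ/kg scale cited above
```

Dividing by the crushed mass (rather than the total coupon mass) is what makes SEA a fair weight-basis comparison between materials.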


Figure 2. Schematic overview of crushing test set-up

Figure 4. Predicted failure shape (red) over actual experiment for PVC core with [±45]6 facesheets
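The initiation model, buckling of a facesheet on an elastic foundation, admits a classical closed-form estimate for a long, uniform facesheet of bending stiffness $EI$ on a Winkler foundation of stiffness $k$. This is a textbook sketch under those assumptions, not the article's actual tool:

```latex
P(\lambda) = EI\left(\frac{\pi}{\lambda}\right)^{2} + k\left(\frac{\lambda}{\pi}\right)^{2},
\qquad
\frac{dP}{d\lambda} = 0 \;\Rightarrow\; \lambda_{cr} = \pi\left(\frac{EI}{k}\right)^{1/4},
\qquad
P_{cr} = 2\sqrt{k\,EI}.
```

Minimizing the buckling load over the half-wavelength $\lambda$ shows why a stiffer core (larger $k$) raises the initiation force and shortens the debond wavelength, consistent with the stabilizing role of the foam observed in the tests.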

[1] G. Jacob, J. Fellers, S. Simunovic, and J.M. Starbuck, "Energy absorption in polymer composites for automotive crashworthiness," Journal of Composite Materials, vol. 36, no. 7, pp. 813–850, 2002.
[2] S. Ramakrishna and H. Hamada, "Energy Absorption Characteristics of Crash Worthy Structural Composite Materials," Key Engineering Materials, vol. 141-143, pp. 585–622, 1998.
[3] J. Carruthers, "Energy absorption capability and crashworthiness of composite material structures: a review," Applied Mechanics Reviews, vol. 51, no. 10, pp. 635–649, 1998.
[4] D. Hull, "A unified approach to progressive crushing of fibre-reinforced composite tubes," Composites Science and Technology, vol. 40, pp. 377–421, Jan. 1991.



Which rotor concept will it be?

Horizontal Axis Wind Turbine of the future

The tendency to increase the size of offshore Horizontal Axis Wind Turbines (HAWTs), together with the trend of installing wind farms further offshore, drives the search for more robust designs. Different rotor concepts may be considered to meet the challenges of the wind energy industry. This article gives an overview of the rotor concepts currently investigated at the TU Delft Wind Energy section. TEXT Ricardo Pereira, PhD Candidate, AWEP Chair, Aerospace Engineering, TU Delft


In recent years, the increasing size of Horizontal Axis Wind Turbines, along with the tendency to install wind farms further offshore, demands more robust design solutions. Modern HAWTs deployed offshore are variable-speed and pitch-controlled, but several rotor concepts are currently candidates to replace the present design trend. Different rotor concepts offer different levels of complexity, but also varying degrees of controllability, as illustrated in Figure 1.

Figure 1. Different rotor concepts compared in terms of Complexity and Controllability, from [Ferede, 2014]


Generally speaking, as one considers a design solution that includes more devices to control HAWTs, whether at the blade-section level or at full rotor scale, it becomes possible to keep the machine operating at nearly ideal conditions almost all the time. This in turn means the different HAWT components (blades, nacelle, generator, tower, etc.) may be designed with smaller safety factors, ultimately leading to a lighter and cheaper structure. However, to achieve this increased controllability one needs to include more control elements, or actuators, possibly bringing about increased maintenance costs and reliability issues, as industry requirements impose fail-safe operation on HAWTs. On the other hand, it is possible to design robust HAWTs that can cope with very distinct operating regimes without actively employing control devices, at the cost of sturdier HAWT components. This will most likely result in heavier machines that seldom operate at the (extreme) conditions they were designed for, leading to an economically unattractive solution. Ultimately, the challenge in

designing the wind energy rotors of the future lies in selecting a compromise between added complexity and improved controllability. Three possible HAWT designs are currently being investigated at our faculty.

Passive Stall Control

We start by looking at the simplest HAWT rotor concept, so-called Passive Stall Control. As mentioned before, state-of-the-art machines are variable-speed and pitch-controlled; this means that at large (above-rated) wind speeds, the aerodynamic power extracted by the turbine is regulated by pitching the rotor blades so as to decrease the angle of attack (pitch-to-feather). If the HAWT is stall-regulated, however, the blades are not pitched at above-rated wind speeds, and consequently the angle of attack experienced by the blade sections increases until the airfoils stall, thus regulating the aerodynamic power extracted. Both power regulation methodologies are illustrated in Figure 2.

Figure 2. HAWT Power Regulation Strategies - Power Production versus Wind Speed for Pitch Controlled (blue) and Stall Controlled (red) machines, adapted from [Ferede, 2014]

The Passive Stall Control concept has fixed pitch and does not employ any actuation for load control, meaning that above rated wind speed, as the angle of attack experienced by the blade sections increases, the aerodynamic power and loads will increase beyond the rated value. These increased loads, both structural (for the blades and tower) and electrical (for the generator), become design drivers, leading to more expensive components. However, within the Passive Stall Control framework it is possible to design a suitable HAWT blade, aeroelastically tailored by adjusting the geometric and material characteristics, such that the aerodynamic power above rated wind speed is regulated without employing actuation. Among aeroelastic tailoring strategies, one of the most promising is twist coupling, in which careful layout of the composite fiber angles, material thickness distribution and overall blade shape leads to a coupling between different deformation modes of the structure, as illustrated in Figure 3. Different deformation modes may include compression/extension, but bend-twist coupling appears to be more interesting and has gained increased attention over the last years, owing to its larger mode-coupling coefficient and hence larger controllability. If the blade's local twist can be controlled by aeroelastic tailoring, almost playing the part of a local pitch system, it becomes possible to control the angle of attack of the blade sections and ultimately limit the loads experienced by the rotor over the operational wind speed envelope without pitching the rotor blades.

Figure 4. Schematic of a Dielectric Barrier Discharge plasma actuator, from [Tanaka, 2014]

An additional challenge

Figure 3. Illustration of bend-twist coupling as currently considered for Passive Stall Control, from [Ferede, 2014]

arises when considering Passive Stall Control: since at above-rated wind speeds one expects large loads on the blades, leading to considerable blade deflections, it is likely that (conventional) linear beam theory does not apply, and accordingly it becomes necessary to consider geometrical non-linearities [Ferede, 2015] to capture the behavior of the aeroelastically tailored HAWT blade.

Active Stall Control

Climbing the ladder of complexity, we now introduce the HAWT rotor concept of Active Stall Control. This design solution is also fixed-pitch, but it employs actuators to limit the loads above rated wind speed, rather than changing the operating angle of attack of the blade sections like a variable-pitch machine. Active Stall Controlled rotors need to be simple and reliable, while providing added controllability compared to Passive Stall Controlled rotors. As such, the actuators involved should be as simple, robust and reliable as possible, preferably containing no mechanisms. Plasma actuators emerge as a possible candidate for application in Active Stall Controlled HAWTs. Owing to their low mass, reduced power consumption and large bandwidth of operation, they are an attractive option for wind energy, and their effectiveness has been tested in the field on an industrial-scale HAWT with remarkable results [Tanaka, 2014]. Most plasma actuators currently used in flow control applications are Dielectric Barrier Discharge (DBD) plasma actuators, consisting of two asymmetrically arranged electrodes separated by a dielectric material. One electrode is mounted on the blade surface (exposed electrode) and the other is encapsulated under the dielectric material (covered electrode), as illustrated in Figure 4. By applying a large voltage (~kV) at high frequency (~kHz) across the electrodes, the surrounding air is ionized, creating plasma. Because of the asymmetric electric field, there is a momentum transfer to the air due to collisions with the ionized particles, creating the so-called 'ionic wind'. This enables giving a small "push" to the air without employing any moving surface. The challenge remains, though, that this "push" is indeed small: the force or thrust transferred to the air per single plasma actuator is below 0.2 N/m. This means that for effective operation the DBD actuator must be used intelligently, notably by triggering or suppressing instabilities [Kotsonis, 2011], by pulsed operation [Tanaka, 2014], or by placing it such that its impact on the specific flow characteristics is maximized. Recent experimental work (Figure 5) carried out at our faculty has contributed to quantifying the impact DBD plasma actuators have on the (boundary-layer) flow around airfoil sections [de Oliveira, 2015]. This unlocks the potential to include DBD actuators in airfoil flow simulations, which in turn might facilitate the design of airfoil sections tailored for DBD actuator employment. The current idea is then that an airfoil tailored to

which concept rotor.indd 17

17

27/03/15 16:01


Ricardo Pereira

Ricardo Pereira

Figure 5. Experimental Set-up of DBD plasma actuators applied on an airfoil section used in Active Stall Control Research, from [de Oliveira, 2015]

Figure 6. Field-test conducted for investigation of Smart Rotor concept, in Sandia National Laboratories (USA), from [Bernhammer, 2015]

DBD actuation would allow for significant changes in the aerodynamic blade section loads even with the small momentum imparted by the plasma actuator, and thus provide the necessary controllability to achieve power regulation through Active Stall Control. In addition to airfoil design, full HAWT blade planform geometry should also be considered, both in terms of chord and twist radial distribution, to explore this HAWT rotor concept to the most. Another option is to jointly consider Active and Passive Stall Control, if aero elastic tailoring is combined with actuation employment to ensure full power regulation over the whole range of operational wind speeds.
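To get a feel for just how small this "push" is, it helps to compare the actuator thrust quoted above (below 0.2 N/m) with the aerodynamic lift per unit span of a blade section. The flow speed, chord and lift coefficient below are illustrative assumptions of ours, not values from the article:

```python
# Back-of-envelope comparison: DBD actuator thrust vs. the sectional
# aerodynamic lift of a wind turbine blade (illustrative numbers only).
RHO = 1.225     # air density [kg/m^3]
V_REL = 70.0    # assumed relative flow speed near the blade tip [m/s]
CHORD = 2.0     # assumed local chord length [m]
CL = 1.0        # assumed sectional lift coefficient [-]
T_DBD = 0.2     # thrust per unit span of one DBD actuator [N/m]

# Sectional lift per unit span: L' = 0.5 * rho * V^2 * c * cl
lift_per_span = 0.5 * RHO * V_REL**2 * CHORD * CL

ratio = T_DBD / lift_per_span
print(f"sectional lift : {lift_per_span:8.1f} N/m")
print(f"DBD thrust     : {T_DBD:8.1f} N/m")
print(f"thrust / lift  : {ratio:.1e}")
```

With these numbers the actuator force is roughly four to five orders of magnitude below the aerodynamic load, which is exactly why the actuator must exploit flow instabilities or pulsed operation rather than push the flow directly.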

Smart Rotor
We now briefly describe the Smart Rotor concept for HAWTs. This design solution employs moving control surfaces to perform distributed actuation along the blade span, while using variable pitch to regulate power production above rated wind speeds. The current concept makes use of trailing edge flaps to control the loads at the blade section level, as illustrated in Figure 6. Compared to the previous HAWT concepts, Smart Rotors have the disadvantage of more actuation mechanisms and the associated added complexity, but may provide a superior degree of controllability.

The Smart Rotor concept has a higher maturity level than the Stall Controlled options and has been investigated for more than a decade, with very promising results. Pioneering work has been conducted at the TU Delft, in a long-lasting collaboration with Sandia National Laboratories (USA). Significant progress has been achieved in accurately modeling the aero-servo-elastic system, notably by implementing a blade modal decomposition which captures the relevant physics of the problem using a reduced number of variables [Bernhammer, 2015]. This numerical approach was used to develop control algorithms that were experimentally tested on an industrial-scale (diameter ~18m) Smart Rotor prototype, illustrated in Figure 6, which was successfully field-tested and operated for long periods of time. The experimental results confirm the significant reliability of the Smart Rotor concept, which is crucial when considering the operation and maintenance of a technology for industrial application. Moreover, it was demonstrated that the Smart Rotor concept is able to match the fatigue damage alleviation that (state-of-the-art) individual pitch control HAWT rotors achieve, which is very encouraging.

Most Smart Rotor research efforts aim at HAWT fatigue load alleviation. Since the wind has a stochastic nature, with gusts permanently changing in space and time, the loads experienced by HAWT rotor blades fluctuate constantly, particularly at above-rated wind speeds. The main idea is that if trailing edge flaps could be used in real time to compensate for these (small) load fluctuations, it would be possible to significantly decrease HAWT fatigue damage, possibly leading to lighter, cheaper HAWT components. The application of Smart Rotors goes beyond load alleviation, however, since the potential for optimizing power harnessing has also been demonstrated [Smit, 2014]. Particularly over the last few years, Smart Rotor research has gained increased attention, or at least more credibility among research institutions, partly because of the dissemination of LIDAR devices in wind energy applications. LIDAR is a remote sensing technology that emits laser light and analyzes the reflected signal, and may be used to measure wind velocity profiles in real time. By determining the wind velocity field, resolved both in space and time, it becomes possible to predict (more) accurately the wind speed that is effectively experienced by the HAWT rotor. This in turn increases the potential, relevance and applicability of Smart Rotor HAWTs, both for load alleviation and for increased power production.
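As a brief aside on how a LIDAR actually recovers wind speed: the laser light backscattered by aerosols is Doppler-shifted, and the line-of-sight velocity follows from v = λ·Δf/2, where the factor two accounts for the round trip of the light. A minimal sketch, with a wavelength typical of commercial fiber LIDARs assumed by us:

```python
# Line-of-sight wind speed from the Doppler shift of backscattered
# laser light: v = wavelength * shift / 2. The factor 2 arises because
# the light travels to the moving aerosol and back.
WAVELENGTH = 1.55e-6  # assumed laser wavelength [m]

def los_velocity(doppler_shift_hz: float) -> float:
    """Return the line-of-sight velocity [m/s] for a Doppler shift [Hz]."""
    return WAVELENGTH * doppler_shift_hz / 2.0

# A 10 m/s wind along the beam corresponds to a shift of ~12.9 MHz:
shift_hz = 2.0 * 10.0 / WAVELENGTH
print(f"Doppler shift for 10 m/s : {shift_hz / 1e6:.1f} MHz")
print(f"recovered velocity       : {los_velocity(shift_hz):.2f} m/s")
```

Scanning the beam over several directions and ranges is what turns these line-of-sight samples into the space- and time-resolved velocity field mentioned above.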


In conclusion, this article discussed different design solutions for the HAWT rotors of the future, while providing an overview of some of the concepts studied at the TU Delft. If you have further ideas or want to contribute to this research as a graduate student, feel free to come by the Wind Energy section on the 5th floor of the Aerospace Faculty High Building!

References
[1] Ferede, E. and Abdalla, M., "Isogeometric Formulation of Geometrically Non-Linear Timoshenko Beams using Quaternions to Parametrize Rotation", Computers & Structures, 2015 (to be submitted)
[2] Ferede, E., Pereira, R. and Bernhammer, L., "Working on the Rotors of the Future", Wind Kracht 2014, Rotterdam, 2014
[3] Tanaka, M., Osako, T., Matsuda, H., Yamazaki, K., Shimura, N., Asayama, M., Oryu, Y. and Yoshida, S., "The World's First Trial for Application of Plasma Aerodynamic Control on a Commercial Scale Turbine", EWEA 2014, Barcelona, 2014
[4] Kotsonis, M., "Dielectric Barrier Discharge Actuators for Flow Control: Diagnostics, Modeling, Application", PhD Dissertation, Delft University of Technology, 2012
[5] De Oliveira, G., Pereira, R., Ragni, D. and Kotsonis, M., "Modelling DBD Plasma Actuators in Integral Boundary Layer Formulation for Application in Panel Methods", AIAA 45th Fluid Dynamics Conference, Texas, June 2015 (submitted)
[6] Smit, J., Bernhammer, L., Bergami, L. and Gaunaa, M., "Sizing and Control of Trailing Edge Flaps on a Smart Rotor for Maximum Power Generation in Low Fatigue Wind Regimes", 32nd ASME Wind Energy Symposium, Maryland, USA, 2014
[7] Bernhammer, L., de Breuker, R. and Karpel, M., "Wind Turbine Structural Model Using Non-Linear Modal Formulations", AIAA Journal, 2014
[8] Bernhammer, L., de Breuker, R. and van Kuik, G., "Aeroelastic Time-Domain Simulation of SNL Smart Rotor Experiment", 33rd ASME Wind Energy Symposium, Kissimmee, USA, 2015





LVD

The aftermath of disaster

What we have learned from major airplane crashes

Nowadays, flying is considered one of the safest means of transportation. However, we have come a long way to get where we are today. It may seem a little like locking the stable door after the horse has bolted, but many of the improvements and systems used for aircraft safety have been implemented as a result of major accidents and incidents in the history of aviation. TEXT Meander Leukfeldt and Niels Waars, Students Aerospace Engineering, members Aviation Department


The early days
In the early days of aviation, many things were still unknown about aircraft safety. Unfortunately, one of the ways to learn that something is unsafe is to see it fail. One example is that of the de Havilland Comet, the first production commercial jetliner. After about a year of service, three of these aircraft broke apart in mid-air in the early 1950s. The crashes resulted in adjustments to the aircraft structure; square windows, for example, were replaced by rounded ones. Later, in 1956, a mid-air collision between two American aircraft resulted in the development of air traffic control (BBC, 2014). This early development was later followed by the introduction of radar control and GPS for aircraft tracking. To prevent mid-air and ground collisions, aircraft are now equipped with an EGPWS (enhanced ground proximity warning system) and a TCAS (traffic collision avoidance system). These systems give warnings to the pilot and advise on evasive actions. Many smoke and fire incidents on board passenger aircraft were caused by people smoking, which was not banned until the late 1980s. From all these events, it is easy to conclude that accidents have a great impact on further developments in the aviation industry.

Maintenance
A well-known accident is the one that took place on Aloha Airlines Flight 243. On April 28, 1988, a part of the fuselage of a Boeing 737-297 blew off after an explosive decompression. The structural failure happened during normal flight at an altitude of 24,000ft. The decompression removed the roof of a section from just behind the cockpit to the wing. Fortunately, the remainder of the aircraft was able to withstand the decompression and the pilot was able to land the aircraft. The accident resulted in one fatality: a flight attendant who was swept overboard. The investigation to determine the cause of the accident was carried out by the United States National Transportation Safety Board (NTSB). The NTSB concluded that the probable cause was: "The failure of the Aloha Airlines maintenance program to detect the presence of significant debonding and fatigue damage. Contributing to the accident were the failure of Aloha Airlines management to supervise properly its maintenance force as well as the failure of the FAA to evaluate properly the Aloha Airlines maintenance program and to assess the airline's inspection and quality control deficiencies." (NTSB, 1989)

Figure 1. The aircraft with the damaged fuselage section

In the report of the investigation, a lot of recommendations to the FAA and to the airline were included in order to increase the safety level. As a result, the FAA began the National Aging Aircraft Research Program (NAARP) in 1991. This program was developed to ensure the structural integrity of high-time and high-cycle aircraft. One of the major elements of the NAARP was to research methodologies to assess the effects of widespread fatigue damage (FAA, 1991). The program included a new full-scale Aircraft Structural Test Evaluation and Research Facility in New Jersey. This facility allowed for predictive testing for structural fatigue, corrosion and many other aspects of aircraft operation. The NAARP tightened the inspection and maintenance requirements for high-use and high-cycle aircraft (BBC, 2014).

Human Factors
The deadliest air crash to date still remains the Tenerife disaster of March 27, 1977. Two Boeing 747s, one from KLM and one from Pan Am, collided on the runway of Los Rodeos airport, resulting in the death of over 580 people (1001 crash, 2015). Both airplanes had initially been bound for Las Palmas Airport, the largest airport in the region. However, that airport had been closed and a number of large passenger jets were diverted to Los Rodeos Airport. This airport had only one runway and one taxiway, and was not well equipped to deal with so many large jets. One of the implications was that planes had to use the runway itself to taxi into position. An additional concern was the heavy fog, which made visibility at the airport low. This resulted in a series of events that led to the collision of the two jumbo jets. The KLM flight was instructed to taxi to the beginning of the runway and line up for departure. The Pan Am flight had to taxi down the runway and leave at the third exit. It appears that the flight crew was not aware of the position of the third exit, as they were still on the runway beyond the third exit point. In the meantime, the co-pilot on the KLM flight was told by his captain to request take-off clearance. The air traffic controller cleared them for a departure route. The controller's last words, saying they had to wait to take off, were lost together with the Pan Am flight's report that they were still on the runway, as both calls were made simultaneously. Despite the co-pilot's doubt, the KLM captain was convinced they were cleared for take-off and started to accelerate down the runway. By the time the two flight crews were able to identify each other, it was too late to prevent a crash.

Figure 2. Wreckage of the Boeing 747 after the Tenerife crash (photo: AP)

The crash had great implications for the aviation industry concerning standardized operating procedures and cockpit relations. Aviation authorities worldwide introduced standardized phrases and an emphasis on the English language. For example, the word 'take-off' was only to be used when referring to the actual moment of taking off; before that point, pilots and air traffic control should refer to 'departure'. This should prevent a misunderstanding, as happened in the Tenerife crash, about whether a plane is cleared for actual take-off or not. Furthermore, pilots are now required to read back important instructions, instead of responding with 'OK' or 'Roger', to confirm they have received and understood the message. The crash also resulted in changes to cockpit procedures. Nowadays, there is a large focus on what is called 'Crew Resource Management'. Hierarchical relations in the cockpit are less commonplace and there is a lot of emphasis on communication and teamwork. A large part of a pilot's training is related to the understanding of human factors and the practice of handling an aircraft as a multi-pilot crew. These events show that even a crash without technical issues can have an enormous impact on the aviation industry. These classic examples show the huge effect of a crash on the development of aviation systems and regulations.

Thorough investigation of crashes is highly important for improving the level of safety in aviation. Recent crashes will have their own aftermath, which will be just as important. Safety is a top priority in aviation, and flying may be considered the safest way of transportation, but as long as crashes happen, the safety level can be improved. People from all fields of aviation continuously face this challenge.

References
[1] "The crashes that changed plane designs forever", http://www.bbc.com/future/story/20140414-crashes-that-changedplane-design, BBC, 2014
[2] "Aircraft Accident Report", http://libraryonline.erau.edu/onlinefull-text/ntsb/aircraft-accident-reports/AAR89-03.pdf, NTSB (National Transportation Safety Board), 1989
[3] "Aging Aircraft Structural Integrity Research", http://www.tc.faa.gov/its/cmd/visitors/data/AAR-430/aastruc.pdf, FAA
[4] "The Tenerife crash - March 27th, 1977", http://www.1001crash.com/indexpage-tenerife-lg-2-numpage-4.html, 1001 crash, 2015
[5] "The Tenerife Airport Disaster - the worst in aviation history", http://www.tenerife-information-centre.com/tenerife-airport-disaster.html, Tenerife Information Centre, 2015

Aviation Department
The Aviation Department of the Society of Aerospace Engineering Students 'Leonardo da Vinci' fulfills the needs of aviation enthusiasts by organising activities, such as lectures and excursions in the Netherlands and abroad.



ADAC (Abu Dhabi Airport Company)

Internship report

'Deerns' in Abu Dhabi
Where the big money's at

During my internship, I wanted to experience life abroad. With some luck, I spent two months in the middle of the desert: the desert where gasoline is cheaper than water, cities keep expanding, the world's biggest airports are being built and money is spent on the most extravagant things you can imagine. TEXT Floris Haasnoot, MSc Student Aerospace Engineering

For my internship, I really wanted to go abroad to experience a culture completely different from that of the Netherlands. After sending letters to companies abroad and getting no positive responses, I tried a different approach: applying at Dutch companies with projects abroad. The first company I tried was Deerns, where I was immediately invited for an interview, and they eventually sent me to work in Abu Dhabi.

Deerns
Deerns is a consultancy company working in the fields of installations, energy consumption and building physics. Its headquarters are located in Rijswijk, but the company has many offices around the world. They are involved in the construction of different types of buildings, such as data centers, hospitals and airports. During my internship, I worked for the airport department, mainly on the midfield terminal project of Abu Dhabi Airport. I started working in Rijswijk and after a month I was sent to Abu Dhabi.

The midfield terminal project
With the new terminal, the airport will be able to cope with thirty million passengers a year. The costs of the project are around 2.2 billion euros. Money is not the biggest issue in the UAE; the main focus is to build extraordinary things. During my time there, I heard a rumor about the airport: there was a plan to build the roof out of titanium. The only big problem with that plan was that there was not enough titanium in the whole world to accommodate such a large roof. Although I am not entirely sure whether this rumor is true, it gives a clear indication of how extravagant the Middle East can be.

The current design of the airport has 49 gates, eight of which are able to accommodate the Airbus A380. However, due to the growing fleet of the main carrier at the airport, it will probably still be too small when construction is complete.


My work
During the internship I worked on different aspects of this new terminal. One of those aspects was security. The security of an airport is essential, especially after 9/11. Of course, as an intern you are never the one responsible for the end result; however, there is some satisfaction in the feeling that you are doing something important. For security, the assignment was mainly to locate the cameras on the AutoCAD drawings. There are many different types of cameras: normal cameras, just for standard monitoring of the passengers, but also more complicated ones, for example with face recognition, x-ray, microphones or a combination of those. For each room or facility, I had to decide the required type of cameras based on the purpose and activities of the room. This was sometimes a hassle, as the purpose was not always clear from the start. Subsequently, the cameras had to be placed such that the entire room could be monitored by the staff. This was to be done with minimal costs to the terminal owner, of course. I also contributed to various other facilities in the terminal. The location of the speakers for voice evacuation and public address had to be rearranged multiple times, because the specifications changed almost weekly. Outlets had to be located to connect every system in the building, which was sometimes a big puzzle, as it was still unknown which systems would be placed where. All this had to remain as cheap as possible, complicating the task even further. Working on different facilities of such a large building makes you realize that an airport is as complex as, if not more complex than, the airplanes that land there. You would



ADAC (Abu Dhabi Airport Company)

Figure 1. The construction of the airport in the middle of the desert.

think that in such a huge building everything would fit easily; however, this is not the case. The huge open areas are for the passengers, and only a small part is left for the technical systems. Therefore, a lot of different parties are involved during construction to realize all this, each with their own ideas on how the building should eventually look. This leads to interesting ideas. For example, there was once a proposal to put antennas in metal boxes in the wall or ceiling, because hanging them in the open is unattractive. This may seem a good idea; however, it results in a wireless system with almost no reach, rendering the entire system completely useless.

UAE
What made my internship really interesting was that I was located in Abu Dhabi for three months. The UAE is a Muslim country and its law is based on religion. This means that alcohol is forbidden and you are not allowed to sleep in the same room with a woman unless you are married. However, the country wants to appear more appealing in order to attract more tourists, and a lot of activities are therefore tolerated. In many hotels there are bars where people can drink alcohol, and there is no problem with renting a room with your girlfriend. If you have been drinking, though, you should still try to avoid the police. A friend of a colleague of mine was crossing a road when the light turned green. A car drove through a red light and hit the man with its side mirror. The police came to check what had happened. You would expect the driver to end up in trouble; however, because the pedestrian had had a couple of drinks, the driver was let go and the pedestrian ended up in jail for three days.

Figure 2. Sheikh Zayed Grand Mosque in Abu Dhabi

Besides these anecdotes, the UAE is a really safe country: you can leave your car open on the road and no one will steal it. One of the reasons is that there are cameras on the corner of every road. Also, most people would not dare to steal your car, because if they get caught, they will lose their job and most likely be kicked out of the country. The only thing that is dangerous in the UAE is the traffic. The roads are always crowded with expensive cars and people drive dangerously. I saw two cars racing each other in the middle of the day, and it feels like every day there is another car that tries to hit you.

As a tourist, the UAE is also a great place to visit, and I think everyone should go at least once in their lifetime. There are a lot of well-known landmarks worth seeing, like the Burj Khalifa, the Burj Al Arab, the Dubai Mall and the fastest rollercoaster in the world. These can be seen in one weekend, but if you want to stay longer there are many other lesser-known landmarks to visit. It only rains two days a year, so you should not be afraid of getting cold outside. I would recommend visiting in winter, when temperatures fall to a reasonable 25 degrees instead of the scorching 50 degrees that can occur in summer. I really enjoyed my time in the UAE and will definitely come back when the airport is finished. I would recommend everyone to go abroad during their internship if they have the opportunity, as it is an experience you will never forget.



“We vlogen met een knal...”

A pioneering leap of faith
Col. Joseph Kittinger Jr. opened the door to space

“I was ready to go, for more reasons than one. For about an hour—as the balloon rose from 50,000 to 102,800 feet above sea level—I had been exposed to an environment requiring the protection of a pressure suit and helmet, and the fear of their failure had always been present. If either should break, unconsciousness would come in 10 or 12 seconds, and death within two minutes.” TEXT Haider Hussain, Student Aerospace Engineering, Editor Leonardo Times

The year was 1960. The then-captain in the US Air Force, Joseph Kittinger, had hopped into a gondola attached to a high-altitude balloon. Then he jumped. "I stepped to the edge, said a prayer and jumped," he said. Col. Joseph Kittinger Jr. held the record for the highest-altitude jump for over fifty years, having jumped from an altitude of 102,800 feet in August 1960 as part of Project Excelsior. The record was only surpassed in October 2012 by Felix Baumgartner. That feat, part of the Red Bull Stratos project, was widely acknowledged and watched; Baumgartner jumped from an altitude of 127,852 feet. More recently, in October 2014, Alan Eustace, a senior vice president at Google, broke Baumgartner's record by jumping from an altitude of 135,889 feet.

Joseph Kittinger Jr. was born in Tampa, Florida, in July 1928. He entered the U.S. Air Force in March 1949 as an aviation cadet and was commissioned a second lieutenant in March 1950. From 1950 to 1953, he served as a jet pilot in the 86th Fighter Bomber Squadron in Germany and was then assigned to the Air Force Missile Development Center (AFMDC) at Holloman Air Force Base. On June 2, 1957, while stationed at the AFMDC, Kittinger made a balloon flight to 96,000 feet in the first flight of the Air Force's "Project Man High".

Project Excelsior
After being assigned to the Aerospace Medical Research Laboratories at Wright-Patterson AFB in Ohio, Kittinger was appointed test director of "Project Excelsior", investigating escape from high altitude. Project Excelsior was established in 1958 to study and solve high-altitude escape problems: as jet aircraft flew higher and faster, the Air Force became increasingly concerned with the hazards faced by flight crews ejecting from these high-performance aircraft.

The name Excelsior was chosen because it means "ever upward" in Latin. Kittinger planned to use a balloon to reach the stratosphere. Then, he would jump from the aerostat and delay opening his main parachute until reaching 18,000 feet. One of the challenges facing Kittinger was to find a technique that could be used by pilots who were not trained skydivers. The solution came from Francis Beaupre of the Aerospace Medical Division of the Wright Air Development Center. Beaupre devised a three-stage parachute system. After leaving the gondola, Kittinger would fall for sixteen seconds to build up speed. Then, a spring-loaded 18-inch diameter pilot chute would deploy. Building up adequate airspeed before deploying the pilot chute was critical, because if it deployed too early it would flop around due to insufficient dynamic pressure in the thin air. The pilot chute, in turn, deployed a six-foot diameter drogue chute that stabilized Kittinger in a feet-to-earth position. Along with the drogue chute, about one-third of the 28-foot diameter round main canopy was released from the parachute pack. Once he reached 18,000 feet, the rest of the main canopy was released. Because a pilot ejecting from a crippled airplane could not be counted on to pull his ripcord manually, the entire activation sequence was automatic.

Figure 1. Kittinger floats in his parachute before landing (photo: Volkmar K. Wentzel, National Geographic)

Wanting to prove that operational pilots could use the system, Kittinger wore an Air Force MC-3 partial pressure suit covered by insulated winter flying coveralls. This was one of the most severe tests ever put on the pressure suit. If either the suit or helmet failed, unconsciousness would come within 10-12 seconds, followed by death 2-3 minutes later. In addition to the pressure suit and parachute system, Kittinger carried a box containing oxygen, instruments and cameras. Wearing all his equipment, Kittinger tipped the scales at 320 pounds, more than twice his normal weight.

The first test, Excelsior I, was made on November 16, 1959. Kittinger ascended in the gondola and jumped from an altitude of 76,400 feet. In this first test, the stabilizer parachute was deployed too soon, catching Kittinger around the neck and causing him to spin at 120 revolutions per minute. This caused Kittinger to lose consciousness, but his life was saved by his main parachute, which opened automatically at a height of 10,000 feet. Despite this near-disaster on the first test, Kittinger went ahead with another test only three weeks later. The second test, Excelsior II, was made on December 11, 1959. This time, Kittinger jumped from an altitude of 74,700 feet and descended in free fall for 55,000 feet before opening his main parachute. The third and final test, Excelsior III, was made on August 16, 1960. There was a sign on the gondola that read: "This is the highest step in the world". During the ascent, the pressure seal in Kittinger's right glove failed, and he began to experience severe pain in his right hand from the exposure to the extremely low pressure. Afraid that he would be ordered to jump early because of the malfunction, Kittinger did not report the problem until he was at altitude.

Figure 2. Kittinger and Baumgartner (photo: Jörg Mitter/Red Bull Content Pool)

From Kittinger's personal account of the incident: "Burdened by heavy clothes and gear, I begin to pay the physical toll for my altitude. Every move demands a high cost in energy. My eyes smart from the fierce glare of the sun. When it beams in the gondola door on my left side, I feel the effect of strong radiation and begin to sweat. On my right side, mostly in shadow, heat escaping from my garments makes a vapor like steam. Circulation has almost stopped in my unpressurized right hand, which feels stiff and painful." After stepping off "the highest step in the world", he fell on his right side for about eight seconds and then rolled over onto his back. Suddenly, Kittinger felt as though he was being choked: the helmet was rising again. Fortunately, as his descent continued, the sensation eased. The main parachute deployed four and a half minutes after he left the balloon, after a fall of just over sixteen miles. He emerged from the clouds at 15,000 feet. Two helicopters circled around him as he descended to the desert floor. Prior to landing, Kittinger was supposed to release the instrument box beneath his container. Only one side released, so he landed with its additional weight. The landing was hard and the seat kit inflicted a severe bruise on his leg; Kittinger was otherwise unhurt. The helicopters landed at almost the same instant as Kittinger, and medical technicians rushed to his aid. "I'm very glad to be back with you all" was how Kittinger greeted the recovery team. The total time since leaving the balloon was 13 minutes and 45 seconds. The pioneering skydive from the edge of space by Col. Kittinger, as part of Project Excelsior, becomes even more important when one considers that he achieved the feat in 1960, just at the start of the decade

remembered for the Space Race. No man had ventured into space until that time; Yuri Gagarin would do so only the following year. Kittinger remarked at the time, "When I think of the great possibilities of the balloon, I marvel that it has been so little utilized in man's bid to enter space. I earnestly hope we will not fail to take advantage of the lessons high-altitude balloon flights can teach us before we commit a man to the infinite reaches beyond the world we know."

The pioneer did not stop at this record. He then joined Project Stargazer, an experiment to study high-altitude astronomical phenomena from above most of the Earth's atmosphere. He later returned to his fighter pilot background and flew three combat tours in Vietnam, logging over 1,000 flight hours and 483 missions, and even spent time in captivity. Over his entire career, he flew over 80 different planes and retired as a Colonel in 1978. Kittinger was also part of the Red Bull Stratos project, serving as mentor and as Capcom 1, the voice that linked the ground station to Felix Baumgartner.

References
[1] http://news.nationalgeographic.com/news/2012/10/121008-josephkittinger-felix-baumgartner-skydivescience
[2] https://archive.today/20121212013431/http://www.af.mil/information/heritage/person.asp
[3] http://www.nytimes.com/2010/03/16/science/16tier.html?src=me&ref=general
[4] http://stratocat.com.ar/artics/excelsior-e.htm
[5] http://airman.dodlive.mil/2012/03/no-ordinary-joe

april 2015 Leonardo Times



Putting the wind in wind turbines
Bridging the knowledge gap between engineering and meteorology

State-of-the-art wind turbine rotors measure up to 164m in diameter, and may grow up to 200–250m in the near future. Apart from the challenges that lie on the pure engineering side, such large turbines reach parts of the atmosphere where the wind climate is much more complex than what is found close to the surface.

TEXT ir. René Bos and ir. Maarten Holtslag, PhD candidates at the Wind Energy Research Group/DUWIND

"Nothing is as variable as the weather", and this is certainly a challenge for the design of offshore wind turbines. Over their twenty-year design life, these machines have to withstand storms, gusts, and high waves (and preferably produce electricity in the process). Although we do not want structures to fail within their lifetime, the manufacturing, transportation, and offshore installation of each additional kilogram has a price tag that is directly added to the price of energy. Therefore, to bring down the costs of offshore wind energy, it is key to design structures that are strong, but rather not too strong.

Wind turbine designs are evaluated by using synthetic wind fields. In the past, it was sufficient to capture effects such as wind shear (i.e., the gradient of mean wind speeds) and gusts with simple empirical models. However, using the same approach for larger wind turbines now leads to gross over-dimensioning. To tackle this problem, wind turbine designers require wind models that accurately capture the important processes in the relevant parts of the atmosphere.

Atmospheric stability
If you ever visited the beach during the first warm days of spring, you might have noticed that the seawater is still surprisingly cold. This is because large bodies of water take a long time to heat up and cool down, which causes the sea surface temperature to always lag behind the air temperature. As a result, the offshore wind climate does not know a strong diurnal cycle like we are used to on land, but rather a seasonal cycle. The difference between the air and sea surface temperature has a big impact on atmospheric stability (see Figure 1). When the seawater is relatively warm, it heats up the air close to the surface. This warm air is less dense and has the tendency to ascend to higher altitudes, mixing heat and momentum with the colder air. We


Figure 1. The effect of sea surface temperature on turbulence and wind shear. Also shown is a comparison of Monin-Obukhov surface layer similarity theory and the proposed wind profiles (Holtslag et al., 2015) with observations. Added for scale is the largest offshore wind turbine currently in production.



Aaron Crowe (acrowephotography.com)

call this an unstable boundary layer, which, due to the turbulent mixing, is more homogeneous with a weaker wind shear. Vice versa, when the seawater is cold compared to the surrounding air, vertical transport is suppressed. Such a stable boundary layer has less turbulence, but a strong wind shear.

So, what does this have to do with wind turbines? Well, over the course of its design life, a wind turbine blade will make more than 10⁸ load cycles due to its own weight, due to turbulence, and due to the rotational sampling of the wind shear profile. Having the right information about the wind climate at the relevant heights is therefore very important to accurately predict the design loads.

Offshore wind shear profiles
First things first, a crucial point of attention is the definition of the atmospheric boundary layer. Whoever studies at Aerospace Engineering is aware of boundary layers over wings and airfoils. The atmosphere has something similar: a layer of air that is influenced by the characteristics of the surface. Just a slight difference: the atmospheric boundary layer is approximately 1km thick, whereas airfoil boundary layers are in the order of several mm up to a few cm. So, how do we describe the mean flow of air in such a thick layer? We have to go back to the 1940s to find an answer. Somewhere during World War 2, two Russian meteorologists, Andrei Monin and Alexander Obukhov, developed a theory

to describe the air "sufficiently close to the surface", commonly known as surface layer similarity theory. According to them, the wind speed and temperature at any given place in the atmosphere would be a function of only two parameters: the shear stress at the surface and, no surprise, atmospheric stability (Monin and Obukhov, 1954). Although the formulation of this theory marked the dawn of modern micrometeorology, it took until the 1970s before measurements were accurate enough to validate it. However, the underlying assumptions of this theory invalidate the resulting wind shear profile beyond the lowest 10% of the boundary layer. In practice, this means up to an altitude of 100m onshore and up to 80m offshore. Moreover, under very stable conditions, wind profiles are only accurate up to about 40m. This is no problem for most applications, but it is not enough for modern wind turbines.

It was not until 2007 that a complete boundary layer wind profile for over land was proposed by the Danish professor Sven-Erik Gryning (Gryning et al., 2007). Now, as the focus of wind energy shifts offshore, we have proposed an altered formulation of Gryning's wind profile for over sea. The new profiles have been validated with measurements up to 315m, well above the height where wind turbines operate, and results are promising, as shown in Figure 1.
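The stability dependence of wind shear can be sketched numerically with the classic Monin-Obukhov surface-layer profile and the Businger-Dyer stability corrections. Note that this is a minimal illustration with parameter values chosen by us (roughness length, friction velocity, Obukhov lengths); it is not the extended profile of Holtslag et al. (2015) discussed above:

```python
import numpy as np

def mo_wind_speed(z, u_star=0.3, z0=2e-4, L=np.inf, kappa=0.4):
    """Mean wind speed at height z [m] from Monin-Obukhov surface layer
    similarity theory with Businger-Dyer corrections (illustrative values).
    L is the Obukhov length: L > 0 stable, L < 0 unstable, inf neutral."""
    zeta = z / L
    if L == np.inf or zeta == 0.0:
        psi = 0.0                              # neutral: pure log law
    elif L > 0:
        psi = -5.0 * zeta                      # stable correction
    else:
        x = (1.0 - 16.0 * zeta) ** 0.25        # unstable correction
        psi = (2 * np.log((1 + x) / 2) + np.log((1 + x**2) / 2)
               - 2 * np.arctan(x) + np.pi / 2)
    return (u_star / kappa) * (np.log(z / z0) - psi)

# Wind shear across a rotor (10m to 100m) for three stability states:
shear = {name: mo_wind_speed(100, L=L) / mo_wind_speed(10, L=L)
         for name, L in [("stable", 100.0), ("neutral", np.inf),
                         ("unstable", -100.0)]}
```

Consistent with Figure 1, the stable case gives the strongest shear across the rotor and the well-mixed unstable case the weakest.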

Wind gusts
Other than the mean wind speed, a wind turbine suffers from gusts. From a human perspective, gusts are brief increases in wind speed that last for a few seconds. A severe gust will completely envelop a person and can knock you off your bike if you are not careful. In fact, a common Dutch word for gust, "windstoot", translates roughly to a "push of wind". This more or less matches the common perception of a gust: something of a jet, much like the air exiting your mouth when blowing out a candle. For most applications, such as a person on a bike, you will not need much more information than the increase in horizontal wind speed at a single point. However, little attention has been given to the fact that gusts also have spatial dimensions. Wind turbines are different from most applications, because they are often much larger than the turbulent eddies in the atmospheric boundary layer. Depending on the state of the atmosphere, gusts will only cover part of a wind turbine blade.

Figure 2. Generation of stochastic wind gusts from the turbulence energy spectrum.

Clearly, the larger the gust, the more kinetic energy it contains

and the more damage it can potentially cause. Therefore, predicting not only the intensity, but also the size of gusts is an important part of determining the design loads for a wind turbine.

Gusts are a direct result of the velocity gradients created by turbulent eddies. Eddies transport the turbulence kinetic energy, created at large scales, down to the viscous level where it is dissipated into heat. A turbulence spectrum, sketched in Figure 2, shows how the total amount of turbulence kinetic energy (TKE) is distributed across wave numbers (i.e., the scales of motion). In principle, shuffling these wave numbers, by giving them a random phase and amplitude, will yield a statistically correct turbulent velocity field. This is the essence of spectral turbulence models. In comparison to computational fluid dynamics (CFD), spectral models are very fast and still capable of reaching acceptable accuracy. Therefore, for wind turbine design, they are generally preferred over CFD, since a broad range of design conditions has to be addressed.

The process of randomizing the wave numbers can also be constrained, which allows one to generate extreme gusts that occur rarely, but still within the design life of a turbine. With the help of random field theory, we can connect the size, intensity, and probability of severe gusts to the state of the atmospheric boundary layer (Bos et al., 2015). This allows us to quantify the risk associated with gust loading and helps us understand which conditions are design-driving for wind turbines.

Contact information
René Bos – r.bos-1@tudelft.nl
Maarten Holtslag – m.c.holtslag@tudelft.nl

References
[1] Holtslag, M.C., W.A.A.M. Bierbooms, and G.J.W. van Bussel (2015). "Extending the diabatic surface layer wind shear profile for offshore wind energy". Manuscript submitted for publication.
[2] Monin, A.S. and A.M. Obukhov (1954). "Basic laws of turbulent mixing in the ground layer of the atmosphere" (in Russian). Trudy, Akademiia Nauk SSSR, Geofizicheskii Instituta 24, pp. 163–187.
[3] Gryning, S.-E., E. Batchvarova, B. Brümmer, H. Jørgensen, and S. Larsen (2007). "On the extension of the wind profile over homogeneous terrain beyond the surface boundary layer". Boundary-Layer Meteorology 124(2), pp. 251–268.
[4] Bos, R., W.A.A.M. Bierbooms, and G.J.W. van Bussel (2015). "Generation of Gaussian wind gusts in a 3D domain". Manuscript submitted for publication.
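The essence of the spectral approach described above, assigning random phases to amplitudes drawn from a prescribed turbulence spectrum, can be sketched in one dimension. The von Kármán-type spectrum and all parameter values below are illustrative choices of ours, not the models used at DUWIND:

```python
import numpy as np

def synth_turbulence(n=2048, dt=0.1, sigma=1.0, L=50.0, U=10.0, seed=0):
    """Generate a turbulent velocity time series by summing cosines whose
    amplitudes follow a von Karman-type spectrum and whose phases are
    random: the essence of spectral turbulence models (illustrative)."""
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dt)                     # frequency resolution [Hz]
    f = np.arange(1, n // 2) * df           # skip f = 0 (no mean component)
    # One-sided von Karman-type spectrum; its integral is ~sigma^2
    S = 4 * sigma**2 * (L / U) / (1 + 70.8 * (f * L / U) ** 2) ** (5 / 6)
    phases = rng.uniform(0, 2 * np.pi, f.size)   # the randomized phases
    amps = np.sqrt(2 * S * df)              # amplitude per frequency bin
    t = np.arange(n) * dt
    return t, amps @ np.cos(2 * np.pi * np.outer(f, t) + phases[:, None])

t, u = synth_turbulence()                   # zero-mean series, variance ~sigma^2
```

Constraining, rather than fully randomizing, these phases is what allows rare extreme gusts to be generated deliberately within a statistically correct field (Bos et al., 2015).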



How well does it heal?
Quantification of the true healing potential of new self-healing elastomers

Several approaches have been developed to add a so-called self-healing functionality to elastomeric materials by chemical modification. However, there is no generally accepted method for evaluating healing efficiency, and most researchers only use some form of tensile testing. TU Delft researchers recently proposed a fracture mechanics protocol that provides a more reliable measurement of the healing performance exhibited by elastomers with this new functionality.

TEXT Antonio M. Grande, postdoc researcher, Santiago J. Garcia, assistant professor, and Sybrand van der Zwaag, professor, Novel Aerospace Materials, Faculty of Aerospace Engineering, TU Delft

In the field of Aerospace Engineering, damage tolerance design is extensively used in the development of advanced structures: damage is accepted, monitored or estimated, and maintenance programs are periodically planned. However, in a number of situations, including non-structural applications, the presence of damage may remarkably impair the functionality or lifetime of a component. The development of new materials with improved strength and durability, linked to new functionalities, can answer the demand for safer, lighter and more reliable structures in all advanced engineering fields. Researchers of the NovAM group at TU Delft Aerospace Engineering constantly work to design and investigate the properties of such new materials. Their goal is to improve the performance and add new functionalities to advanced materials (polymers, metals, ceramics and composites) with attractive characteristics for aerospace applications.

This article focuses on elastomers, i.e., rubber-like polymeric materials combining hyperelastic behaviour and decent strength. These materials are widely employed in various engineering fields such as aerospace, automotive and civil engineering. Although their engineering role is sometimes undervalued, systems based on these materials can be subjected to severe, complex load conditions: multiaxial, fatigue, impact, abrasion, and wear. It is then conceivable that damage occurring during their service life can potentially lead to catastrophic failure of the entire structure (as in the case of the Challenger disaster in 1986). In the aerospace field, elastomeric materials are employed in various critical applications, such as space suits and space living habitats, liquid propellant containers, inflatable objects, and also protective coatings, optical surfaces, and conductive or resistive materials, where material continuity is a stringent requirement. In space applications, human intervention for the repair of damaged components is dangerous, costly or even impossible. In this view, the implementation of the

self-healing principle in elastomers would be beneficial to increase reliability and reduce maintenance costs. The self-healing concept is based on a damage management principle: once damage has occurred, the material should be able to repair itself, partially or totally, with no or little external intervention, so that its main functionality is restored (van der Zwaag, 2007). Although several chemical concepts have been used to develop self-healing elastomers, no commercially available elastomer has been reported that combines both high mechanical properties and a substantial healing efficiency. One of the reasons for this gap is the relatively limited understanding of the healing phenomena and the lack of robust techniques capable of characterizing the true healing potential of self-healing elastomers. For these reasons, the European Union decided to fund an industrial and academic consortium with four Dutch partners, amongst which the Faculty of Aerospace Engineering at TU Delft. This consortium, known as SHINE, aims at developing, scaling up and validating a



novel generation of elastomers with adequate mechanical and physical properties, able to exhibit spontaneous self-healing behaviour (www.selfhealingelastomers.eu). The main role of our research team in this consortium is to come up with new characterization approaches that give more relevant information about the healing mechanisms, in order to help us and other researchers around the world in developing new self-healing polymers.

To date, most researchers have used the recovery of tensile strength of broken samples as the measure of healing; this testing method fails to capture both the relevant chemical processes at the healing surfaces and the effective restoration of mechanical integrity. Our team recently proposed a fracture mechanics testing procedure, based on the J-integral parameter, to investigate the self-healing behaviour of healable elastomers (Grande, 2015). In recent years, the J-integral method has been widely applied to characterise the fracture properties of pristine elastomers, yielding information on the resistance to both crack initiation and crack propagation; hence its adaptation to healing quantification. The new protocol includes damage and healing steps, based on low-temperature fracture and environmentally controlled healing, respectively. Furthermore, standard tensile experiments following the same fracture/healing steps are required in order to prove the validity of the proposed fracture protocol. By following such an approach, more quantitative information can be obtained on the healing behaviour across the interface between two former fracture surfaces.

This newly developed testing procedure was first applied to a commercially available self-healing supramolecular elastomer (Reverlink® HR) provided by ARKEMA, one of the SHINE members and producer of the first commercial self-healing elastomer on the market. This polymer contains both strong irreversible covalent cross-links and weak reversible chemical bonds (hydrogen bonds), the latter responsible for the self-healing functionality. Representative stress-strain and force-displacement curves obtained in tensile and fracture experiments for different healing times are shown in Figure 1(a) and Figure 1(b), respectively. A gradual recovery of the mechanical properties of the damaged specimens as a function of healing time can be detected.

In the case of tensile testing, for long healing times (≥24 h), an almost complete recovery of the mechanical properties is observed; pristine and fully healed samples show the same tensile strength and only a slight reduction in failure strain. On the other hand, the load-displacement curves of the fracture tests directly suggest a lower healing efficiency. This preliminary observation is supported by the evaluation of the fracture resistance at crack initiation (the critical J-integral, JIC): an average value of 1.26±0.15 kJ/m² is calculated for the virgin material, whereas significantly lower values were determined for the repaired specimens, even for the longest healing time (0.57±0.07 kJ/m² after 72 h).

Figure 1. Stress-strain (a) and load-displacement (b) curves for virgin and healed samples repaired for different times, obtained in tensile and fracture experiments.

Analyzing the data obtained in tensile and fracture experiments in terms of healing efficiencies shows that the degree of recovery depends on the test method (Figure 2). While a maximum healing efficiency of around 80% is obtained in tensile mode (recovery of the stress at break, σb), a healing efficiency of only 40% is obtained in fracture. However, the properties measured in the two tests all depend on the same chemical, physical and topological characteristics of the polymeric network at the interface.

Figure 2. Healing efficiency calculated for tensile and fracture experiments as a function of healing time.

The different healing performances exhibited by the supramolecular elastomer in tensile and fracture experiments can be attributed to the loading conditions and the damage evolution process; a difference in the failure mode of the repaired material can be assumed, taking into account that crack evolution at the healed interface occurs primarily by polymer chain pull-out or by chain scission. In tensile tests, polymer chains are globally stretched and, when the local stress exceeds a critical level, a deformation zone with a marked anisotropy, containing mixtures of partially interpenetrating or constrained chains, is generated. Subsequently, fracture propagates at a rate dominated by bond rupture of the chains that interdiffused from one side to the other during the interfacial healing process. Under these conditions, the healed interface can exhibit mechanical properties close to those of the pristine material. In fracture tests, on the other hand, the crack propagates slowly along the crack plane, and the competition between chain scission and chain pull-out at the interface is more prominent. Since the healing mechanism in the studied polymer is mainly due to chain interdiffusion and hydrogen bonds, with no new chemical bonds being formed at the repaired interface, the fracture test seems to capture better the real state of the new (healed) interface. Furthermore, the fracture approach fits much better with the damage tolerance design philosophy applied in the field of aerospace engineering.

It thus becomes clear that the selection of a proper test, or the combination of phenomenologically different tests, is a necessary step in the development of new self-healing polymers with improved properties. NovAM is now working on the validation of this protocol to quantify the healing behaviour of other self-healing polymer classes containing alternative reversible chemical bonds. We will also look at the quantification of healing over much wider ranges of temperature, moisture and strain rate.

References
[1] S. van der Zwaag (editor): "Self Healing Materials: an Alternative Approach to 20 Centuries of Materials Science"; Springer (2007).

[2] Grande, A.M., Garcia, S.J., and van der Zwaag, S., "On the interfacial healing of a supramolecular elastomer", Polymer, 56, pp. 435–442, 2015.
[3] SHINE project, http://www.selfhealingelastomers.eu/, 2015.
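The efficiencies quoted above follow directly from the ratio of a healed property to the virgin one. As a minimal check (the function name is ours; the JIC values are those reported in the text):

```python
def healing_efficiency(healed, virgin):
    """Healing efficiency [%]: the recovered fraction of a property."""
    return 100.0 * healed / virgin

# Critical J-integral values reported for Reverlink HR [kJ/m^2]
jic_virgin = 1.26
jic_healed_72h = 0.57
eta_fracture = healing_efficiency(jic_healed_72h, jic_virgin)
```

This gives roughly 45%, in the same range as the ~40% fracture-mode efficiency shown in Figure 2.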



WWW.NATS.AERO

Metropolis: Urban Airspace Design
ATM for extremely high traffic densities

Unmanned package delivery has sparked keen interest recently, and quite a few companies are working on personal air transport vehicles: PAL-V, for example, is testing a flying motorcycle in the Netherlands, and the MIT-linked Terrafugia is pursuing similar concepts. How can we ensure that future airborne vehicles share the airspace in a safe and efficient manner? If this technological, environmental and financial puzzle could be solved, it could create the next revolution in aviation, one that most of us may not envision at this point in time.

TEXT Prof.dr.ir. Jacco Hoekstra, Chairholder Communication, Navigation, Surveillance & Air Traffic Management (CNS/ATM), Faculty of Aerospace Engineering, TU Delft

European Innovative Research project
The Metropolis project is a 7th Framework, 'Level 0' project funded by the European Commission. 'Level 0' projects investigate future technologies which are not yet feasible but could become tangible in the future. Their technology readiness level (TRL, a NASA scale) is still zero: scientific understanding is still needed to reach level 1. These are therefore truly innovative projects in the vanguard of technical development.

Context
In today's Air Traffic Management (ATM), the workload of the air traffic controller, the runway availability or the allowed noise levels determine the limit of the airspace capacity. In a de-centralized Air Traffic Control concept, these limits may not apply. Currently, we do not know when the sky is really full from a flight-mechanical point of view, whether a structured airspace design with tubes, layers and/or zones provides more capacity, or whether spreading the traffic would yield more airspace capacity. Or does this depend on the traffic density, like a phase change?

In the Metropolis project, named after the classic 1927 science fiction movie, we assume that Personal Air Vehicles and unmanned package delivery drones have become a reality in the second half of this century. Assuming they have become cheaper due to mass production techniques, just as cars did, we


assume a huge demand for these vehicles, resulting in extreme traffic densities in urban areas. Since the goal of personal air transport is to reduce the block time, this demands a departure very close to home: personal vehicles should literally fly from door to door, maybe using very short runways, perhaps even at the higher levels of buildings. And similar to what happened to road traffic a century earlier, the cities will become the bottlenecks for these vehicles. This scenario provides the storyline for the simulations that answer the more fundamental questions of airspace capacity.

Research questions
Would a better organization of the airspace increase its capacity? Surprisingly enough, at today's air traffic densities, the opposite is true: a higher density can be achieved with a completely unstructured airspace, using free routing and decentralization [2][3]. Spreading the traffic over the available airspace reduces the probability that two aircraft meet. The advantage of de-centralized air traffic control at high traffic densities can easily be understood using simple combinatorics. The collision probability per vehicle increases linearly with the traffic density, while from a global perspective the conflict probability is a quadratic function (see Figure 1). In the Metropolis scenario, the air traffic densities are one to three orders of

magnitude higher than today (1000x). It has been suggested that at these very high traffic densities the reverse of the decentralization principle is true, and more airspace structure and/or central control would be required. This throws up questions like: Is this true? Why is more structure needed? Will decentralization still be feasible or not? And if not, what causes this reversed dependency? The research questions to be addressed by this project are:
• Which CNS/ATM concepts will work for these urban scenarios and densities, considering both personal air transport vehicles and airborne parcels?
• Where on the density axis is the transition from structured to free-routing airspace? What are the dependencies (mass, restrictions, speeds, vehicle performance, surveillance)?

Figure 1. Decentralization reduces the number of potential conflicts to a linear function of traffic density (P2 = probability that two aircraft meet, Pc = conflict probability)
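The combinatorial argument behind Figure 1 can be sketched with a simple gas-model estimate; the airspace area and protected-zone radius below are illustrative values of ours, not Metropolis parameters:

```python
import math

def expected_conflict_pairs(n, side_km=100.0, r_km=5.0):
    """Expected number of conflicting pairs when n aircraft are placed
    uniformly at random in a square airspace: each of the n(n-1)/2 pairs
    is closer than the protected-zone radius r with probability
    ~ pi r^2 / area (gas model, edge effects ignored)."""
    p2 = math.pi * r_km**2 / side_km**2    # P2: one given pair in conflict
    return 0.5 * n * (n - 1) * p2          # global count: quadratic in n

# Per-vehicle count grows only linearly with density:
per_aircraft = expected_conflict_pairs(200) / 200
```

Doubling the traffic roughly quadruples the global conflict count, while the count per aircraft only doubles: the quadratic-versus-linear split sketched in Figure 1.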




• Where on the density axis is the transition from local to central control? What are the dependencies (vehicle mass, restrictions, speeds, vehicle performance, surveillance)?

We used seven assumptions for the scenario:
1. European urbanisation continues
2. The need for personal transport persists beyond 2050
3. The need for cargo transport persists, but in a different form
4. Energy efficiency and weight are still drivers, so not all PAVs are VTOL
5. Vehicles for personal air transport have become available
6. Personal air vehicles are used for non-local transport
7. Individual flying cargo UAVs are used locally

The goal of the project is to deliver:
• Airspace design options for the urban environment, including both personal air transport and autonomous flying parcel deliveries
• Understanding of the relation between air traffic density and the need for a structured airspace design and/or centralized control, by studying the range from today's densities to the extreme densities of the Metropolis scenario
• Simulations and visualisation of radically new conceptual designs for urban airspace

For this, we tested four different concepts, ranging from very unstructured (Full mix) to extremely structured (Tubes):

Concept 1: Full mix. In this design, all vehicles share the same airspace, without any structure or non-physical constraints. Via a prescribed airborne separation assurance algorithm, supported by automation, the vehicles avoid each other while flying an optimal route.

Concept 2: Layers. In this design, every altitude band corresponds to a heading range in a repeating pattern. The aim is to allow maximum freedom of routing while lowering the relative speeds, facilitating separation and increasing safety. In between are transition layers, and descending or climbing through the layers is allowed when the path is conflict-free.

Concept 3: Zones. Closest to the principle of airspace design today: different zones for different types of vehicles, as well as global directions, are defined so that the structure of the airspace aids separation. A radial-circle structure forms the basis of the airspace design to allow efficient routing.

Concept 4: Tubes. For each flight a tube is defined, forming a 4D clearance: a bubble moving through a tunnel. As long as the vehicle stays inside the prescribed bubble, no conflicts should happen.

Figure 2. Screenshot of one of the simulation runs of the Zones concept

Simulations & Initial Results
A complete Metropolis city was simulated, based on Paris, with urbanisation and road traffic studies providing the design inputs for the scenarios. A technology review was part of the definition of the expected vehicle properties. For the study, many fast-time simulations were run: a total of 6.5 million flights were simulated. All concepts were flown with and without separation assurance, to distinguish the effect of structure from the tactical solution of conflicts. Three traffic patterns were flown depending on the time of day: morning, afternoon and evening. Each of these combinations was then repeated. Each run was simulated for several hours, but only the last hour (around 25,000 flights) was used for the data, in order to make sure that the scenario was in a steady state. Each airspace concept and each scenario was run at four different traffic densities, and each pattern was repeated. On top of the 192 nominal runs, another 96 so-called non-nominal scenarios were run, to test the robustness against unforeseen weather, rogue aircraft, etc.
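The Layers concept maps each heading range to its own altitude band. A minimal sketch of such a rule in Python; the floor altitude, band height and 45-degree sectors are example values of ours, not the ones used in the project:

```python
def layer_altitude(heading_deg, floor_ft=1000, band_ft=300, sector_deg=45):
    """Illustrative Layers-concept rule: each altitude band is reserved
    for one heading sector, repeating upward, so that traffic within a
    band flies in roughly the same direction (lowering relative speeds)."""
    sector = int(heading_deg % 360) // sector_deg
    return floor_ft + sector * band_ft
```

With these example values, two aircraft on near-opposite headings end up in different bands: `layer_altitude(10)` gives 1,000 ft, while `layer_altitude(190)` gives 2,200 ft.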

The first impression is that the Layers concept strikes the best balance between efficiency (allowing at least direct routing laterally) and safety (reducing relative velocities and hence conflict rates). However, the non-nominal scenarios could change that: when avoiding a blocked part of the airspace, the Layers concept might lead to a locally high concentration of traffic, because the same avoidance heading forces vehicles onto the same altitudes. Thus, the non-nominal results might still favour the completely free airspace design, Full mix. The same could be true for route structures with a less unidirectional distribution of routes. Still, the Layers concept, basically an extension of the existing hemisphere rule, might be worth further investigation. Additional information on the results of the further analysis will be published on the Metropolis website [4] as soon as it becomes available.

References
[1] Article by Dr. Mary (Missy) Cummings (MIT) in Scientific American, January 2013
[2] Hoekstra, J.M., R.C.J. Ruigrok, and R.N.H.W. van Gent, "Free Flight in a Crowded Airspace?", 3rd USA/Europe Air Traffic Management R&D Seminar, FAA/Eurocontrol, June 2000
[3] Hoekstra, J.M. and F.J.L. Bussink, "Free Flight: How low can you go?", Digital Avionics Systems Conference, IEEE, Irvine, USA, 2002
[4] Metropolis website: http://homepage.tudelft.nl/7p97s/Metropolis/



THE SPACEFLIGHT MINOR

In the first semester of the third year of the TU Delft bachelor education, students perform a minor program of their preference. The Spaceflight minor is a completely new minor, which will start for the first time in September 2015 and will be provided annually.

Introduction
E3530 - Intro to spaceflight (3EC) - non-AE students
ET3604LR - Electronic circuits (3EC) - AE students

Missions
AE3531 - Space exploration (7EC)
CT3532 - Earth observation (3EC)

Technology
AE3534 - Spacecraft technology (5EC)
AE3535 - Satellite tracking & communications (4EC)

Development
AE3536 - Spaceflight assignment (7EC)

The Spaceflight minor is a broad thematic minor, which can be followed by all students with a proper physics and mathematics background. It is expected that many students throughout TU Delft and beyond will be interested in a minor in the exciting field of spaceflight. The department of Space Engineering of the Faculty of Aerospace Engineering (AE) provides a major part of the courses, while other faculties also contribute significantly with several courses and assignments. These groups are members of the recently established TU Delft Space Institute (DSI), which enables more intensive cooperation internally as well as externally. Thus, the Spaceflight minor can be regarded as the educational counterpart of this research institute.

According to a survey performed with the help of the Space Department (RVD) of the VSV, many AE students have indicated that they are interested in this new minor, but also that they would like one that covers a broad range of topics and aspects of spaceflight. This introduced a special challenge for the developers, as the new minor should be broad as well as complementary to the existing course content of the bachelor and master education. To achieve this, completely new courses were developed which address unique content and skills. Additionally, two courses of the minor will be provided online, so that students can enjoy a lecture at home in their favorite seat, while guided exercises are performed in the classroom with the lecturer available to assist them with the content.

The minor consists of four main parts: Introduction, Missions, Technology and Development. Introduction provides the basics of spaceflight to non-Aerospace students by compressing the core space engineering content of the first two years of the Aerospace bachelor into the 3 EC course 'Introduction to Spaceflight'. Aerospace students follow the course on electronic circuits as an alternative, which is considered a good introduction to the emerging field of avionics. In Missions, Earth observation and space exploration missions are elaborated from both the user and the developer perspective. Exciting examples guide students through all topics, and the relation between orbits and applications is shown. Technology is discussed with a focus on the instrumentation and the requirements it imposes on the spacecraft. Within Space Exploration, a key mission example is the Jupiter Icy Moons Explorer (JUICE), which will study three moons of Jupiter a few decades from now. TU Delft researchers are involved in this ESA mission, one of the most exciting missions of the coming decades. Within Earth Observation, the TerraSAR-X mission is one of the examples. This radar mission, of which the first satellite was launched in 2007, is used for high-resolution determination of the local elevation on Earth, for science as well as for disaster monitoring and prediction. In the Technology part, the focus is on the engineering and operation of key technologies that facilitate space missions, some of them showing the low-level details of the example missions from the Missions part. These include satellite bus platforms, propulsion to travel to and in space, and the ground segment (including the ground station and data processing). Real-life examples are provided: students perform a concurrent engineering workshop on a CubeSat, and there is a practical with communication hardware as well as a practical in which students operate the TU Delft ground station and receive signals from active satellites. In the Development part, students perform a 7 EC assignment of their preference, defined by tutors affiliated with the content of one or more of the courses. The well-defined assignments allow individuals or small groups of students to develop a concrete spaceflight product and critically reflect on the results and the process. One example of such a product is a sun sensor: three students work on this small project, each with a clear individual assignment on electronics, software and structure, respectively. The students work out their design in schematics, procure the components, produce and assemble the pieces, test the sensor and reflect on the results and the process. The product does not necessarily have to be hardware: a piece of software to process telemetry data of a real mission is also a spaceflight product. For all kinds of assignments, the students experience the whole end-to-end development of a product, which is unique in the field of (spaceflight) education.


A half-year educational curriculum in the field of Spaceflight

For more information contact: Jasper Bouwmeester at jasper.bouwmeester@tudelft.nl






Deployable Earth-Observation Telescope
Obtaining sub-meter resolutions from microsatellite platforms

High-resolution Earth observation data plays a very important role in applications ranging from environmental protection, disaster response and precision farming to defence and security. With a deployable telescope, it is now possible to reach high resolutions while maintaining a compact launch volume. TEXT Dennis Dolkens, PhD Student, and Saish Sridharan, MSc Graduate, Department of Space Engineering, Faculty of Aerospace Engineering, TU Delft

Commercial satellite imagery with ground resolutions smaller than half a meter can currently be obtained using satellites such as WorldView, GeoEye and Pleiades. These systems are large and heavy, weighing several thousands of kilograms. Due to their high mass and large launch volume, high-resolution Earth observation systems are very expensive to build and launch, costing hundreds of millions of Euros. As a result, the cost per image is very high for these systems. In addition, high-resolution systems typically have narrow swath widths, so the coverage that can be obtained with them is small. As a result, for many regions on Earth, affordable and up-to-date high-resolution satellite data is simply not available. A solution to this issue would be to increase the number of high-resolution Earth observation satellites. However, when relying on conventional technology, this is economically infeasible. The main reason for the large volume and mass of high-resolution systems is that a very large aperture is needed to reach sub-meter resolutions. This is needed to reduce the effects of diffraction, which is the spreading of light as it passes through


a small opening. A deployable synthetic aperture telescope potentially offers a solution to this problem. By splitting up a large telescope into smaller elements that can be stowed in a compact volume during launch, the same resolutions can be obtained using only a fraction of the launch volume of a conventional system.

OPTICAL DESIGN
To compete with current state-of-the-art systems, such as GeoEye-2 and WorldView-3, the deployable telescope was designed to reach a ground sampling distance of 25cm from an orbital altitude of 500km. In the design process, two types of synthetic aperture instruments were analyzed: the Fizeau synthetic aperture, a telescope with a segmented primary mirror, and the Michelson synthetic aperture, a system which uses an array of smaller telescopes to simulate a larger aperture. For this application, a Fizeau set-up is the most suitable, since it allows for the smallest stowed volume as well as the best optical performance. The final optical design of the deployable telescope is shown in Figure 1. The design has been based on a full-field Korsch Three Mirror Anastigmat [1]. The design

has been optimized for a compact stowed volume and can deliver diffraction-limited performance over the full 5km swath width. The entrance pupil of the instrument consists of three rectangular mirror segments that span a pupil diameter of 1.5 meters when deployed. In the stowed configuration, the three segments are folded alongside the main housing of the instrument. The deployment sequence of the telescope is illustrated in Figure 2.
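As a rough cross-check of the quoted numbers, the diffraction-limited resolution of a circular aperture follows the Rayleigh criterion, GSD ≈ 1.22·λ·h/D. A minimal sketch, assuming a representative visible wavelength of 550nm (the wavelength is not stated in the article):

```python
def diffraction_limited_gsd(wavelength_m, altitude_m, aperture_m):
    """Rayleigh-criterion ground resolution for a circular aperture."""
    return 1.22 * wavelength_m * altitude_m / aperture_m

# Visible light (550 nm, assumed), 500 km orbit, 1.5 m deployed pupil
gsd = diffraction_limited_gsd(550e-9, 500e3, 1.5)
print(f"{gsd:.3f} m")  # prints "0.224 m"
```

A ~22cm diffraction limit is consistent with the 25cm ground sampling distance the telescope was designed for.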


Figure 1. The optical layout of the deployable telescope

Leonardo Times april 2015



MECHANICAL DESIGN
The deployable telescope is very sensitive to misalignments of the mirror panels. To maintain a good image quality, the position of the mirror segments must be controlled in six degrees of freedom with accuracies smaller than a micrometer. Therefore, to ensure that the instrument can deliver the required optical performance while operating in a harsh and dynamic space environment, a robust thermo-mechanical design is required. To ensure that small temperature fluctuations have a limited effect on the optical performance, low-expansion materials will be used for critical components of the instrument. The primary mirror segments will be made from silicon carbide, a stiff material with a low coefficient of thermal expansion (CTE) and a high conductivity. The support structure of the primary mirror and the three foldable arms supporting the secondary mirror will be made of Invar, an alloy with a low CTE, ensuring a high thermal stability. In addition, the main housing of the instrument will have an active thermal control system to ensure stable operating conditions.

In the stowed configuration, the total volume of the instrument is expected to be 0.36m³, roughly one eighth of that of a conventional telescope designed for similar ground resolutions. The mass of the instrument is expected to be just 75kg, which can be achieved by applying lightweighting techniques to the mirror segments and structure.

CALIBRATION
A robust mechanical design alone is not sufficient to ensure that the deployable mirror segments can be positioned with sub-micron accuracies. A calibration strategy is therefore proposed to ensure that the system will meet the performance requirement while operating in orbit. The calibration procedure of the instrument will consist of two phases: a post-launch phase and an operational phase. Following the launch and deployment of the telescope, a metrology system will be used to characterize the system. The metrology system uses a combination of an interferometer and a number of capacitive sensors [2]. The interferometer sends out a beam of light to a number of target points on the mirror, as shown in Figure 3. The target points reflect that light and return it to the interferometer, where the returning beams are compared to a reference beam. With the measured phase differences, offsets in the position of the mirror can be calculated.

The capacitive sensors are placed between the mirror segments and the main housing. They measure the distance between the inner side of the mirror segments and a reference point on the main housing of the instrument. Together, the sensors are able to measure offsets of the mirror segments in six degrees of freedom, as well as detect changes in the radius of curvature that may result from temperature fluctuations. The results from the theoretical study show that the sensor set-up can measure offsets ranging from 10mm down to 10nm. The measurements obtained with the metrology system serve as an input to the actuators below the primary mirror segments, allowing for the correction of deployment errors.

During operations, a passive system will be used. This system relies on a phase diversity algorithm that can estimate residual wavefront errors by analyzing images obtained with the telescope. Two images are required: one that has been obtained with


Figure 2. Deployment sequence of the deployable telescope (clockwise from top-left)

Figure 3. Target points for the interferometer on the primary mirror

the primary detector of the instrument, and one that is obtained with an additional detector that is deliberately placed out-of-focus at a known defocus distance. The two images, together with knowledge of the defocus distance and the shape of the pupil, are used to iteratively estimate the unknown wavefront. Once an estimate of the wavefront has been obtained, it can be used to reconstruct the image using an image deconvolution filter. Although the algorithms used still have a lot of room for improvement, extensive simulations show promising results. Thanks to its calibration system and robust thermo-mechanical design, the telescope will be able to deliver a good image quality, despite its very high sensitivity to alignment errors.

FUTURE WORK
The development of the deployable telescope and its calibration systems will continue as part of a PhD project in cooperation with TNO. Amongst other things, the work will address the high alignment sensitivity of the optical design. Furthermore, the mechanical design and calibration systems will be worked out in more detail.

ACKNOWLEDGEMENTS
The authors of this article would like to thank Hans Kuiper, the daily supervisor of the two graduation projects on which this article was based.
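The image deconvolution step described under Calibration can be illustrated with a generic Wiener filter, a standard image-restoration technique. This is a toy sketch with a synthetic Gaussian blur, not the authors' phase diversity algorithm, and the signal-to-noise ratio value is an assumption:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Restore an image blurred by a known PSF using a Wiener filter."""
    # Transfer function of the (centered) PSF
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    G = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + 1/SNR)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(G * W))

# Toy example: blur a point source with a Gaussian PSF, then restore it
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()

sharp = np.zeros((n, n))
sharp[n // 2, n // 2] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
```

The restored point source is markedly sharper than the blurred one; in the real instrument the transfer function would come from the wavefront estimated by phase diversity rather than a known synthetic PSF.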

References
[1] Dolkens, D., "A Deployable Telescope for Sub-meter Resolutions from MicroSatellite Platforms", MSc thesis, TU Delft, 2015.
[2] Sridharan, S., "Sensor measurement and error budget analysis for a deployable multi-aperture space telescope", MSc thesis, TU Delft, 2014.




Student project

The Lambach HL II & Lambach Aircraft
25 years of hands-on experience

In the spring of 1989, some students conceived the idea that it would be truly spectacular if they could build a flying aircraft to honour the lustrum of VSV ‘Leonardo Da Vinci’ in 1990 (Nijhuis, 1996). Eventually, they chose to build a replica of the Lambach HL II because of its link with the faculty of Aerospace Engineering. Now, 25 years later, students of Lambach Aircraft still maintain this masterpiece of Delft ingenuity, while continuing to expand its legacy. TEXT Dries Decloedt, Student Aerospace Engineering, Volunteer Lambach Aircraft

The Lambach HL II
For the original Lambach HL II, it all started in 1936, when, in response to German dominance during the Dutch National Aerobatic Championships, the Dutch government made funds available for an aerobatics aircraft capable of competing with the Germans. The Delft graduate Hugo Lambach took up the challenge, and his Lambach HL II finished third at the 1937 Championships, again behind the Germans. Considering that the aircraft was designed and built in only five and a half months and had made her maiden flight only two weeks earlier, this was a tremendous achievement. Afterwards, the HL II served as an advanced trainer at the "Nationale Luchtvaartschool". Unfortunately, the lifespan of the aircraft was cut short when it was destroyed at the beginning of World War II during the German bombing raids on Ypenburg airfield (see Figure 1). Only a set of drawings survived the war (Nijhuis, 1996).

The Replica
Production of the replica only started in September 1989, when volunteers began working on the production drawings. It would still take until January 19,

Figure 1. The original Lambach HL II at Ypenburg


Figure 2. Maximum displacement of the instrument panel




1990, however, until the society supporting the project, Lambach Aircraft, was founded. After five years of building, the replica was presented to the public during the roll-out hosted at the faculty on April 24, 1995. A couple of months later, the maiden flight followed at the airbase of Gilze-Rijen. The Lambach HL II replica then flew for a couple of years, until fatigue issues arose in the upper wing bracket, which grounded the aircraft and resulted in the loss of the proof of airworthiness. Currently, the replica is stationed in the T2 hangar of the Aviodrome Museum, where it receives regular maintenance (Lambach Aircraft, 2015).

The Instrument Panel
Over the years, major progress has been made on the fatigue issue in the bracket. For instance, it was found that the cracking of the bracket was caused by engine-induced vibration, and a redesign of the bracket has been made. This article focuses on the instrument panel, which has been one of the major projects of the last half-year. The HL II replica, just like the original, does not include a radio and transponder. Changes in the Dutch regulations, however, require this equipment to be installed in the aircraft. After some research, it was decided to install the Dittel KRT2 VHF transceiver and the Trig TT21 Mode S transponder. Both are commercially available and were selected as the most cost-effective solution. The additional instrumentation made it necessary to look again at the structural design of the panel and to update it where necessary. The new panel has to comply with the CS-23 certification basis, which requires it to remain intact under all circumstances. The most critical conditions are encountered during a crash landing, for which CS-23 states that the panel should be able to withstand 3g in the upward, 18g in the forward and 4.5g in the sideward direction.
Under these loads, the panel may bend, but there may be no risk of instruments coming loose, as these could injure the pilot. Just like the old panel, the updated version will be made from a single 2mm thick sheet of ALU6065 T6 aluminium. The panel will be cut out of the aluminium sheet using a CNC mill, which will also be used for cutting the holes in which the instruments will be placed. To eliminate the possibility of cracks propagating from the cut edges, these will be sanded smooth. Moreover, the panel will be reinforced with an ALU6065 T6 extruded stringer. Like the panel sheet, the stringer is 2mm thick and will be attached to the panel with blind rivets. Finally, the instruments will be mounted to the panel using screws of 3.55mm diameter.

Figure 3. Volunteers pushing back the Lambach HL II replica at Lelystad

Validating the Panel Design
A first step toward validating this design was to create a FEM model in Patran and to analyze it with Nastran. Special care was taken to simplify the panel model in order to decrease simulation effort and time, while maintaining a sufficient level of accuracy. First of all, the model assumes that the panel is a 2D shell instead of a full 3D model. This is a reasonable assumption because the plate is relatively thin (2mm), and thin plates are better modeled using shell elements than solid elements. Secondly, the stiffeners are modeled as 1D beams with the same cross-sectional characteristics as the 3D version. Finally, the model excludes the material outside of the stiffeners, as this material does not contribute to the load-carrying capability of the panel; it is purely there for aesthetics. The loads are applied to the model using lumped point masses at the centres of gravity (CoGs) of the instruments. Since the instruments are almost completely cylindrical, it is assumed that their CoG is at their geometric center. The load itself is the weight of the instrument multiplied by the load factor specified in CS-23. Furthermore, the masses have been connected to the hole edges by means of RBE2 elements, so that the resulting force and moment are transferred to the panel when applying the different load cases during the analysis. With the simulation software, the critical Von Mises stresses and their locations could be determined, as well as the maximum deformation. As expected, the highest stresses and deformations were encountered for the 18g forward load case. The highest Von Mises stress in the panel sheet is estimated at 47MPa and is found around the edges of the transponder and radio control units. As shown in Figure 2, the highest displacement is 1.66mm, just below the oil temperature indicator and the radio control unit. All in all, this is considered satisfactory behaviour of the panel.
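The applied load per instrument (weight times the CS-23 load factor) can be sketched as follows; the instrument masses used here are hypothetical placeholder values, not figures from the project:

```python
# CS-23 emergency-landing load factors cited in the article
LOAD_FACTORS_G = {"upward": 3.0, "forward": 18.0, "sideward": 4.5}
G = 9.81  # gravitational acceleration, m/s^2

# Hypothetical instrument masses in kg (illustrative values only)
instruments = {"radio": 0.35, "transponder": 0.45, "oil_temp_indicator": 0.15}

def crash_loads(mass_kg):
    """Force in N on one instrument mount for each CS-23 load case."""
    return {case: n * mass_kg * G for case, n in LOAD_FACTORS_G.items()}

for name, mass in instruments.items():
    loads = crash_loads(mass)
    worst = max(loads, key=loads.get)
    print(f"{name}: worst case {worst}, {loads[worst]:.1f} N")
```

Since 18g forward is the largest factor, the forward case governs every instrument, matching the FEM result that the 18g forward load case is critical.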

Now, to validate the results of the FEM analysis, a test will be performed on a dummy panel. As the test plan has been prepared and the test panel and test setup have been built, the HL II team is ready to test the new panel in the next quarter.

The Next Steps
With the panel project nearing completion, the project coordinator can look ahead to new challenges. For instance, one volunteer is already working on replacing the streamline tubes supporting the top wing of the Lambach HL II. During production, small holes were made at the ends of these tubes to allow gases created in the welding process to escape. These holes were never closed, which allows corrosion to accumulate at those ends. It was therefore decided to fully replace the streamline tubes. Moreover, to prevent any further production errors, Fokker was contacted for assistance in the welding and chroming of the new streamline tubes.

Lambach Aircraft
Besides working on the projects themselves, Lambach Aircraft is dedicated to providing its volunteers with hands-on experience in various aspects of aircraft design, production and operation. In recent months, the society has arranged various practicals, ranging from milling and turning to a course on how to use multi-body dynamics software. Moreover, as part of the lustrum, the society will arrange a set of activities for its volunteers, with the excursion to Great Britain as the absolute highlight, including visits to the Imperial War Museum Duxford and the Flying Legends air show.

References
[1] Nijenhuis, W.A.S., Spek, F.C. and Moeleker, P.J.J., "De Lambach HL II: De geschiedenis van het origineel en de bouw van de replica", Lambach Aircraft, 1996.
[2] Lambach Aircraft, www.LambachAircraft.nl, 2015.




Grid-stiffened composite structures
Global and local buckling analysis

There is a renewed interest in grid-stiffened composite structures, which have a high specific strength and stability; however, multiple types of buckling modes and their interactions complicate the buckling analysis of these structures. In this work, the authors developed a new method to calculate both global and local buckling loads efficiently with an acceptable accuracy [Wang and Abdalla, 2015]. TEXT Dan Wang, Postdoctoral researcher, Faculty of Aerospace Engineering, TU Delft

Grid-stiffened composite structures
Fibre Reinforced Plastic composites (FRPc) are widely used in aerospace engineering due to their high stiffness- and strength-to-weight ratios. The lattice is an efficient structure to match the highly directional material properties of FRPc. Combining the lattice with a skin may enhance the structural stability and provide additional functionality. These kinds of structures are referred to as grid-stiffened composite structures. Usually, the shape of the characteristic lattice distinguishes grid-stiffened structures: orthogrid-, anglegrid-, isogrid- and anisogrid-stiffened structures are made up of a repeated rectangular, parallelogram, triangular and arbitrary lattice, respectively. Grid-stiffened aluminium structures have been used in aerospace applications for several decades and still play an important role in space launch vehicles as reliable and efficient structures, for instance in the liquid oxygen tank of a Delta II rocket [Oliveira et al, 2007]. By replacing aluminium with composites, more efficient structures can be designed. Grid-stiffened composite structures were first investigated by government research groups in the USA and the USSR for space applications, but their use was limited due to the expensive manufacturing cost and the inconsistent quality of the labour-intensive manufacturing [Huybrechts and Meink, 1997]. As automated manufacturing technology matures, there is a renewed interest in grid-stiffened composite structures as potentially cheap substitutes for other composite candidates, such as the widely used skin-stringer and sandwich composite structures. A comparison between composite grid and composite sandwich structures shows that grid-stiffened composite structures are superior in stiffness, acoustic behaviour and global buckling performance [Meink, 1998]. Moreover, grid-stiffened structures resist moisture and corrosion thanks to their open construction and benefit from a high damage tolerance due to their integrated configuration. All of these advantages make grid-stiffened structures an attractive topic, drawing increasing attention both in theoretical analysis and in engineering applications. Research efforts are spent not only on space launch vehicles but also on aircraft structures [Wegner et al, 2002; Vasiliev et al, 2001]. Moreover, applications of grid-stiffened composite structures are spreading into ground vehicles and wind turbine blades.

The challenge
Buckling is a state in which a structure undergoes large deformations or collapses suddenly; it is an important failure mode for thin-walled structures. Grid-stiffened composite structures can buckle globally as a

whole structure, or locally as skin pocket buckling or stiffener crippling. A detailed finite element model with skin and stiffeners assembled together can capture both global and local buckling loads accurately, but at an expensive computational cost. A smeared stiffener model [Jaunky et al, 1996] with equivalent material properties is often used to simulate the global behaviour of grid-stiffened composite structures with high efficiency, but it is unable to capture local buckling modes [Wodesenbet et al, 2003]. In engineering applications, skin and stiffeners are usually separated by imposing approximate boundary conditions in order to obtain the local buckling load. This semi-analytical method is clearly superior in computational efficiency, but it cannot provide accurate results because of the excessive simplifications: the dimensions and stiffness of the stiffeners have no influence on the local buckling load of the skin, which is not correct in practice. In the authors' work, a new method is proposed based on a global equivalent model with homogenized material properties and a local model of characteristic cell configurations. By tailoring the Bloch wave theory to the FE system of characteristic cell configurations, the local buckling load can be calculated on the global/local model for a general grid pattern with a good balance between efficiency and accuracy.
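To illustrate the smeared-stiffener idea: once the panel has been homogenized into equivalent bending stiffnesses D11, D12, D22 and D66, the global buckling load of a simply supported panel under uniaxial compression follows the classical orthotropic plate formula, minimized over the number of axial half-waves m. A sketch with illustrative stiffness values, not taken from the paper:

```python
import math

def global_buckling_load(D11, D12, D22, D66, a, b, max_m=20):
    """Critical uniaxial compressive load N_x (N/m) of a simply supported
    orthotropic plate of length a and width b (one transverse half-wave),
    minimized over the number of axial half-waves m."""
    def N(m):
        return (math.pi**2 / b**2) * (D11 * (m * b / a)**2
                                      + 2 * (D12 + 2 * D66)
                                      + D22 * (a / (m * b))**2)
    return min(N(m) for m in range(1, max_m + 1))

# Illustrative equivalent bending stiffnesses in N*m (not from the paper)
N_cr = global_buckling_load(D11=2000.0, D12=500.0, D22=1500.0, D66=600.0,
                            a=1.0, b=0.5)
```

Minimizing over m matters: for these values the critical mode has two axial half-waves, not one, which a single-mode estimate would miss.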





Figure 1. AFRL’s composite launch vehicle fairing, Side-view and End-view [Wegner et al, 2002]

Global buckling load calculation
Grid-stiffened composite structures are a special application of periodic structures. According to the homogenisation theory [Hassani, 1998], these structures can be investigated from two different viewpoints. On the macroscopic scale, the response functions change slowly and remain essentially constant within a characteristic cell. On the microscopic scale, the response functions fluctuate sharply within a characteristic cell but repeat periodically in space. Therefore, the displacements of grid-stiffened composite structures are made up of a macroscopic part and a microscopic part, with only the macroscopic part affecting the boundary conditions, due to the periodicity of the microscopic part of the displacement. Using the homogenization theory, the smeared stiffener method is applied to calculate the global buckling load. The concept is to treat the stiffened structure as a homogeneous panel with equivalent material properties. The equivalent material properties are calculated from a characteristic cell configuration, an assembly of the skin and stiffeners, either by superposing the contributions of the skin and stiffeners or by keeping the strain energy of the unstiffened cell equal to that of the original stiffened cell configuration. The latter is used in the present work. Using the equivalent material properties, i.e. the equivalent ABD matrix, the global buckling load can be calculated on a homogeneous panel. With the global strains and curvatures of the equivalent unstiffened model as inputs, the local stresses of the stiffened structure are also available for material failure calculations [Michel et al, 1999]. The local stress and moment resultants output by the global and local model are also important inputs for the subsequent local buckling load calculation.

Local buckling load calculation
The Bloch wave theory provides an alternative to capture local instability in a periodic medium. In the Bloch wave theory, the displacements (the eigenvector in an instability analysis) of a particle in a periodic medium can be expressed as the product of a periodic component with the same periodicity as the medium and a periodic function related to an arbitrary wavelength. It has been proven that the Bloch wave theory can determine the onset of instability of periodic solids [Geymonat et al, 1993]. The existing applications of the Bloch wave theory are mainly on the material failure of periodic solids [Gong et al, 2005]. The minimum of the surface made up of buckling loads at different wavelengths is the critical instability point. Unlike periodic solids, grid-stiffened structures, as a type of thin-walled panel with infinite structural length, would buckle at zero load. This makes global buckling dominate the critical buckling mode: the minimum of the buckling load surface would always be zero and located at an infinite wavelength. In practice, grid-stiffened composite structures have finite dimensions, and the global buckling load is determined by the smeared stiffener method. The local minimum in the buckling load surface, which has a much shorter wavelength than the structural length, is then the critical local buckling load of the investigated grid-stiffened composite structures. Calculation of the buckling load surface for the local buckling load follows a similar expression as the local stress calculation, but in complex form. Based on an assembly of the skin and stiffeners in a characteristic cell, the critical local buckling load is captured, no matter whether it is skin pocket buckling, stiffener crippling or a coupling of both.

Numerical validations
The simplest semi-analytical method for skin pocket buckling calculation is to impose a simply supported boundary condition at the interface of the skin and stiffeners. Numerical results of the proposed method show that the skin pocket buckling load of grid-stiffened composite structures increases with the height of the stiffeners and always lies between the results for the simply supported and the clamped boundary condition, which is validated by comparison with detailed finite element models. The comparison proves, on the one hand, that the simply supported boundary condition is conservative for the skin pocket buckling load and shows, on the other hand, that the proposed method is much more accurate, with both the stiffener stiffness and the influence of

Figure 2. Global and local buckling modes of a grid-stiffened composite cylinder




References
[1] Wang, D. and Abdalla, M.M. (2015). Global and local buckling analysis of grid-stiffened composite panels. Composite Structures, 119, 767-776.
[2] Oliveira, J., Kirk, D.R., Chintalapati, S., Schallhorn, P.A., Piquero, J.L., Campbell, M. and Chase, S. (2007). The effect of an isogrid on cryogenic propellant behavior and thermal stratification. TFAWS Conference.
[3] Huybrechts, S. and Meink, T.E. (1997). Advanced grid stiffened structures for the next generation of launch vehicles. In Aerospace Conference, 1997, Proceedings, IEEE (Vol. 1, pp. 263-270).

Figure 3. Switch of local buckling mode in a flat isotropic panel under shear


neighboring skin taken into account. Investigations on panels under shear show that the local buckling load first increases and then decreases with the stiffener height. This phenomenon is caused by the mode switch from skin pocket buckling to stiffener crippling at a specific stiffener height. These numerical examples show that the proposed method is able to capture the critical local buckling load, no matter whether it is skin pocket buckling, stiffener crippling or a coupling of both. A larger error occurs in the stiffener-crippling-dominated stage compared with the detailed finite element model, which is caused by the beam model used for the stiffeners in the proposed method, whereas in practice the stiffeners behave more like plates. Improving the stiffener model should improve the accuracy of the local buckling prediction. More practical flat panels and circular hollow cylinders were also investigated to prove the effectiveness of the proposed method. For uniformly distributed stress and moment resultants, the proposed method has a high accuracy, with an error of about 5%. At a stress-concentrated corner, however, the error increases because the homogenization theory is violated and the boundary condition at the corner differs.

Figure 4. Buckling load surface


Conclusions and future work
A new method is proposed for global and local buckling calculations of grid-stiffened composite structures. The calculations are implemented with a coupled global-local model. The global model is established on an unstiffened panel with homogenized material properties. The local model is established on a characteristic cell configuration containing both the skin and the stiffeners. The local model provides homogenized material properties for the global model, while the global model provides average stress and moment resultants for the local model. Bloch wave theory is applied within the finite element system of the characteristic cell, and the local minimum of the buckling surface at short wavelengths gives the local buckling load. The method can predict both global and local buckling loads efficiently with an acceptable error. Moreover, instead of modeling the skin and stiffeners separately, assembling them in a characteristic cell configuration captures the critical local buckling mode, no matter whether it is skin pocket buckling, stiffener crippling, or a coupling between them. The proposed method is validated against detailed finite element models for typical numerical examples of flat panels and hollow cylinders.

Grid-stiffened structures with a repeated cell configuration are, in fact, generally not optimal for an arbitrary loading case. Applying optimization methodologies to the design of grid-stiffened structures, for example by allowing steered stiffeners, can further improve structural performance. Because such an optimization is itself a computationally heavy iteration, the efficiency of the proposed global and local buckling calculation makes it well suited for use within design optimization.
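As a much simpler illustration of the homogenization idea, the classical smeared-stiffener approximation replaces the stiffeners by an equivalent uniform layer. This is not the FE-based homogenization of the proposed method: the isotropic material and all dimensions below are hypothetical, and Poisson effects and the neutral-axis shift are ignored.

```python
# Illustrative smeared-stiffener estimate for an isotropic skin with blade
# stiffeners. NOT the FE-based homogenization of the article: all dimensions
# are hypothetical; Poisson effects and the neutral-axis shift are ignored.
E = 70e9                       # Young's modulus [Pa]
t = 0.002                      # skin thickness [m]
h, b, s = 0.02, 0.003, 0.10    # stiffener height, width, spacing [m]

# Membrane (axial) stiffness per unit width: skin + smeared stiffener area
A11 = E * t + E * (h * b) / s

# Bending stiffness per unit width about the skin mid-plane:
# plate term + stiffener own inertia + parallel-axis (Steiner) term
z = (t + h) / 2.0              # offset of the stiffener centroid
D11 = E * t**3 / 12.0 + E * (b * h**3 / 12.0 + b * h * z**2) / s

print(f"A11 = {A11:.2e} N/m, D11 = {D11:.0f} N*m")
```

The Steiner term dominates D11 here, which is exactly why stiffeners are such an efficient way to raise the global buckling load at little mass cost.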

[4] Meink, T. E. (1998, March). Composite grid vs. composite sandwich: a comparison based on payload shroud requirements. In Aerospace Conference, 1998 IEEE (Vol. 1, pp. 215-220). IEEE.
[5] Wegner, P. M., Higgins, J. E., & VanWest, B. P. (2002). Application of advanced grid-stiffened structures technology to the Minotaur payload fairing. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, 22-25 April 2002, Denver, Colorado.
[6] Vasiliev, V. V., Barynin, V. A., & Rasin, A. F. (2001). Anisogrid lattice structures: survey of development and application. Composite Structures, 54(2), 361-370.
[7] Jaunky, N., Knight Jr, N. F., & Ambur, D. R. (1996). Formulation of an improved smeared stiffener theory for buckling analysis of grid-stiffened composite panels. Composites Part B: Engineering, 27(5), 519-526.
[8] Wodesenbet, E., Kidane, S., & Pang, S. S. (2003). Optimization for buckling loads of grid stiffened composite panels. Composite Structures, 60(2), 159-169.
[9] Hassani, B., & Hinton, E. (1998). A review of homogenization and topology optimization I: homogenization theory for media with periodic structure. Computers & Structures, 69(6), 707-717.
[10] Michel, J. C., Moulinec, H., & Suquet, P. (1999). Effective properties of composite materials with periodic microstructure: a computational approach. Computer Methods in Applied Mechanics and Engineering, 172(1), 109-143.
[11] Geymonat, G., Müller, S., & Triantafyllidis, N. (1993). Homogenization of nonlinearly elastic materials, microscopic bifurcation and macroscopic loss of rank-one convexity. Archive for Rational Mechanics and Analysis, 122(3), 231-290.
[12] Gong, L., Kyriakides, S., & Triantafyllidis, N. (2005). On the stability of Kelvin cell foams under compressive loads. Journal of the Mechanics and Physics of Solids, 53(4), 771-794.





Rik Geuns

Plasma Enhanced Aerodynamics An experimental study on plasma characteristics

Active flow control by the use of dielectric barrier discharge (DBD) plasma actuators has been proven to be a promising concept for the delay and even elimination of boundary layer separation. Both the simplicity of the system and the potential for flow and even flight control make plasma actuators increasingly interesting for aerodynamics research worldwide. TEXT Rik Geuns, Graduate Aerospace Engineering, MSc thesis at École Polytechnique Fédérale de Lausanne

Plasma actuators for flow control In the past few years, the term plasma actuator has made its entry into the aerodynamics’ flow control jargon. In this field, two different types of dielectric barrier discharge actuators are currently investigated. Alternating current (AC-DBD) and nanosecond pulsed direct current (ns-DBD) plasmas differ in terms of input voltage signal. Consequently, the main flow control working mechanism of both discharges is distinct. Where the AC-DBD working principle is related to an increase of momentum in the boundary layer, well known as a wall-jet, the ns-DBD is assumed to operate on a thermal effect. The rapid localized adiabatic heating of the near-surface gas creates compression waves that emerge from the surface, adding pulsed energy to the flow. A dielectric barrier discharge is an electrical discharge between two electrodes that are separated by a dielectric layer. When the electric field between the elec-


trodes is increased, free charge carriers in the gas are accelerated towards the electrodes. These charge carriers will undergo inelastic collisions with molecules that are present in the gas, leaving these molecules in a specific excited energy state. This process is known as gas ionisation. Further increasing the potential between the electrodes will lead to plasma streamers on the dielectric surface. Experimental plasma characterization In order to develop safer and more efficient applications of DBD actuators, a better understanding of the plasma state and behaviour under different operating conditions is needed. The thesis work presented here consisted of a thorough investigation of the plasma characteristics of AC-DBD and ns-DBD under different operating conditions such as voltage amplitude, ambient pressure and dielectric thickness. The relevance of this research relates to real-life applications of DBD devices for active flow control on aircraft.

During take-off, the ambient pressure is nearly five times as high compared to the pressure at cruise altitude, where the plasma behaves differently. Electrical measurements of voltage and current signals reveal information about the strength and duration of the plasma streamers. Phase-locked fast-camera plasma imaging shows the temporal and spatial evolution of the plasma in the electrode gap. The most interesting plasma diagnostics technique used is optical emission spectroscopy, which allows the unambiguous identification of excited species in the plasma sheet. Molecular excitations are created in the plasma by electron impact processes and de-excite by molecular collisions or radiative decay, thereby emitting a photon. The wavelength of the light that is emitted during this de-excitation corresponds to a unique excited state. Electrical measurements Electrical measurements of the current signal on an AC-DBD actuator are shown



Figure 1. Current (green), measured voltage (red) and reference voltage (blue) versus cycle. Measurement performed at atmospheric pressure, 15kV and 2.5kHz.

Figure 2. Spectrum at 380nm to 415nm. Measurement performed at atmospheric pressure, 15kV and 2.5kHz.

Figure 3. Excited species population of nitrogen and N2+ versus ambient pressure during rise (left) and fall (right) time of the voltage pulse. Measurement performed at constant voltage amplitude and dielectric thickness.

in Figure 1. The current, represented by the green line, consists of capacitive current and current disturbances due to the presence of plasma. Two plasma streamers are present in one voltage cycle, during the rise and fall of the voltage respectively. The strength, duration and timing of the streamers are retrieved from these measurements (Geuns, 2014). Similarly, in ns-DBD plasma streamers are only present during the (very short) ascending and descending phases of the voltage pulse; in this case the time in which streamers are present amounts to only a few nanoseconds. Even though power peaks of up to 200kW are observed, the average power consumption of the actuator is found to be low (about 5W).

Optical emission spectroscopy
By analyzing the light emission from the plasma, the molecular excitations present in the gas can be determined. Figure 2 shows a measured spectrum in the 380nm to 415nm wavelength range, involving transitions of nitrogen and N2+ ground state molecules. A higher peak in the emission spectrum indicates a higher intensity and thus a higher population density of the corresponding excited state. The amount of molecular excitations directly relates to the degree of ionization and consequently to the strength of the plasma. As can be seen from Figure 3, the amount of excited species in the plasma decreases

Figure 4. NACA 0015 model with actuator on leading edge installed in wind tunnel test section.

with rising ambient pressure. The plasma appears to be stronger and brighter at lower pressures when all other parameters are kept constant. With fewer molecules in the air at lower pressures, the mean free path and energy gain of the free electrons increase, raising the likelihood of a transition during electron impact collisions. Similar downward trends in excited species are measured with increasing dielectric thickness and decreasing voltage amplitude. Here, the measured trends can be related to the strength of the electric field between the electrodes: a higher electric field (higher voltage, smaller distance) leads to a faster acceleration of the free charge carriers. The plasma strength and composition thus depend strongly on the operating conditions.

Wind tunnel experiments
The influence of voltage amplitude and pulsing frequency on flow separation control is investigated on a NACA 0015 profile with an integrated leading-edge ns-DBD actuator in a low-speed wind tunnel (Figure 4). Results indicate the existence of an optimal dimensionless frequency of about two. Consequently, the optimal pulsing frequency is directly related to the flow speed, but remains rather low. Large zones of vorticity, which ensure flow reattachment, are observed over the airfoil at these low frequencies. At higher frequencies, the vorticity zones are much smaller and have less influence on the flow behaviour.
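The dimensionless frequency mentioned above is commonly the reduced frequency F+ = f·c/U∞, with c the chord length and U∞ the free-stream speed. That definition is an assumption here, and the numbers below are illustrative rather than taken from the thesis:

```python
def pulse_frequency(f_plus: float, chord: float, u_inf: float) -> float:
    """Actuator pulsing frequency [Hz] for a target reduced frequency
    F+ = f * c / U_inf (assumed chord-based definition)."""
    return f_plus * u_inf / chord

# For F+ ~ 2 on a 0.2m chord at 20m/s, the optimal pulsing frequency
# stays low, of the order of a few hundred hertz.
print(pulse_frequency(2.0, 0.2, 20.0))
```

Because f scales with U∞/c, the same actuator must be re-tuned for different flight speeds, which is one reason the pulsing frequency is an important control parameter.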

Also, a voltage threshold seems to exist which is dependent on the angle of attack of the profile. As soon as plasma is present between the electrodes, an effect on the pressure distribution around the airfoil can be seen. However, more energy needs to be transferred into the flow in order to achieve full reattachment at higher angles of attack. Areas of research The use of plasma actuators as active flow control devices has been widely investigated during the last decade. The potential is enormous, and results are promising. Separation postponement, lift enhancement, drag reduction and laminar-turbulent transition control pave the way to more sustainable flow and even flight control. Also in the internal flow domain, plasma actuators are currently being investigated. The presence of excited atoms and molecules in the plasma, the ultra-fast local heating of the gas and the creation of ozone molecules opens perspectives for so-called Plasma Assisted Combustion (PAC) in modern engines. References R. Geuns, S. Goekce, G. Plyushchev and P. Leyland. Understanding SDBD actuators: An Experimental Study on Plasma Characteristics, 45th Plasmadynamics and Lasers Conference, Atlanta, June 2014.



It’s the energy, stupid!

A new look at fatigue damage growth
No one wants to board an aircraft that might break into pieces during flight. Thus, a lot of research has been performed on fatigue and damage tolerance. Nevertheless, the underlying physics is still poorly understood. To allow future aircraft to be lighter and thus more efficient, a theory capable of explaining and predicting damage progression is key. The SI&C group is paving the way towards such a theory. TEXT René Alderliesten, Associate Professor; Lucas Amaral, John-Alan Pascoe & Liaojun Yao, PhD students, Structural Integrity & Composites Group, Faculty of Aerospace Engineering, TU Delft

Historical views on crack growth
Just being able to predict the life of an undamaged structure is not sufficient to ensure safety. You also need to be able to predict how fast a pre-existing defect (e.g. one induced during manufacturing) will grow. To do this, looking at the applied stresses is not enough. It was soon found that the geometry of the specimen and of the defect also matter, but the link between stress and crack growth remained unclear. Alan Griffith, an engineer at the Royal Aircraft Establishment in Farnborough, suggested an alternative to the stress-based approaches. Griffith showed that rather than the stresses, energy is what really determines when cracks grow [1]. Growing a crack takes energy, because atomic bonds have to be broken and new surfaces formed. By determining how much energy is available and how much is required, the crack growth can be predicted.

Figure 1. Example of a CFRP double cantilever beam specimen with fibre bridging.

This idea was further developed by George Irwin [2], who realized that the amount of energy available could be quantified by what he termed the 'strain energy release rate' (SERR, G): the amount of elastic energy released from a structure by an infinitesimal increment of crack growth. Furthermore, Irwin showed that the SERR is equivalent to the stress intensity factor (SIF, K), which tells you how strongly the stress is concentrated at the tip of a crack. The SERR, and therefore also the SIF, depend on the geometry of the object and on the load. Hence, during the course of a fatigue cycle they range from a minimum value (Gmin, Kmin) to a maximum value (Gmax, Kmax) and back again. Since the SERR and the SIF are equivalent, the SERR can be seen both as a measure of energy dissipation (its original definition) and as a measure of the stress state at the tip of a crack (due to the equivalence with the SIF).

Paul Paris applied Irwin's results to fatigue crack growth, showing that if you plot the crack growth rate (da/dN) against the range of the SIF (ΔK = Kmax - Kmin), you can fit a power law through the data, i.e. [3]:

da/dN = C (ΔK)^n

Here, C and n are curve-fitting parameters. This 'Paris relationship' is the foundation for nearly all current crack growth models. Paris' motivation for using ΔK was that K describes the stress state surrounding the crack tip, which he saw as the driving force for crack growth. The Paris relationship is also commonly used for the growth of cracks and delaminations in adhesive joints and composite materials. However, in these materials the SERR is easier to compute than the SIF, and ΔK is therefore often replaced by either Gmax or ΔG in the equation. In all cases, though, G and K are taken to represent the stress state in the vicinity of the crack tip, rather than considering the energy as originally proposed by Griffith.

Shortcomings of the historical understanding
An important point in reviewing the work on fatigue crack growth is that people tend to confuse the development of a theory describing the physics of all mechanisms with a prediction model. For instance, the SERR is commonly used in prediction models such as the Paris relationship as the 'driving force' in crack growth analysis, even though the SERR represents a release of energy. Thus, it is only a consequence of crack growth, not the cause of it [2]. As a result, most studies on fatigue crack growth



in composites are based on the concept of the SERR, using either the maximum SERR Gmax or the SERR range ΔG. These models are generally no more than curve fits and lack clear physical meaning. This shortcoming leads to misinterpretation of the fatigue cracking behaviour of different categories of materials and hinders further understanding of fatigue damage mechanisms.

To overcome this disadvantage, an alternative approach should be used in future research. In nature, crack growth is an energy dissipation process and should obey the law of energy conservation, as shown by Griffith. From this viewpoint, the new approach developed at SI&C, based on the energy balance principle, is a promising route for studying fatigue delamination growth.

In this approach the fatigue crack growth rate da/dN is not related to the SERR, but to dU/dN, the strain energy dissipated during each fatigue cycle. What is taken into consideration is not the stress state in the vicinity of the crack tip, but the energy that goes into the system and the energy dissipated during crack growth. Moreover, when da/dN is related to the SERR, a measured value (the crack length) is related to a theoretical model, the SERR, in order to obtain a model for fatigue crack growth. In other words, a model is being used to obtain another model, which results in mathematical relations without physical meaning. When da/dN is instead related to dU/dN, two measured values are related to each other, so a model for fatigue damage propagation is obtained strictly from measured data. In this way the energy balance during crack growth is examined explicitly.

Experimental validation
A comparison between the Paris correlation and the energy balance principle for the interpretation of fatigue data is illustrated in Figure 2 [4]. All data derive from fatigue tests conducted on carbon fibre reinforced polymer (CFRP) double cantilever beam (DCB) specimens with the same stress ratio but distinct pre-crack lengths. During these tests fibre bridging can occur: individual fibres remain attached to both faces of the crack, pulling the crack faces together again. The longer the pre-crack, the more fibre bridging will occur. An example of a DCB specimen with fibre bridging is shown in Figure 1.

In Figure 2(a), the resistance curve shifts from left to right with increasing crack length. In other words, the longer the pre-crack, the lower the crack growth rate at the same value of ΔG. If this were a true material characterization and not just a simple curve fit, why would different results be found for different test set-ups, i.e. different pre-crack lengths? If the crack growth rate is instead plotted against the energy dissipation per cycle, as done in Figure 2(b), all the resistance curves converge to a narrow strip. This shows that the phenomenon illustrated in Figure 2(a) is artificial, caused by inaccurate application and calculation of the SERR in fatigue delamination analysis. The actual amount of energy required to grow a crack does not depend on the amount of fibre bridging. Rather, bridging changes the stress state at the crack tip, so the model used to calculate ΔG is no longer correct. Figure 2 thus provides a typical case highlighting the disadvantage of using the SERR in fatigue crack growth analysis, and the advantage of using an energy balance to uncover the physical principle behind the crack growth phenomenon.

Figure 2. Fatigue data analysed with the Paris correlation, i.e. using the SERR as a correlating parameter. (Yao et al., 2014) / Elsevier

Figure 3. Fatigue data analysed with the energy balance principle, i.e. using the energy dissipation per cycle as a correlating parameter. (Yao et al., 2014) / Elsevier
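The Paris correlation used in the Figure 2(a) analysis is typically fitted to measured crack growth data by linear regression in log-log space, since log(da/dN) = log C + n·log ΔK. A minimal sketch, with data points that are made up purely for illustration:

```python
import numpy as np

# Fit da/dN = C * (dK)^n by linear regression in log-log space:
# log(da/dN) = log C + n * log(dK). The data points below are made up.
dK = np.array([5.0, 7.0, 10.0, 14.0, 20.0])                 # SIF range [MPa*sqrt(m)]
dadN = np.array([1.0e-9, 5.0e-9, 3.0e-8, 1.5e-7, 6.0e-7])   # growth rate [m/cycle]

n, logC = np.polyfit(np.log(dK), np.log(dadN), 1)  # slope = n, intercept = ln C
C = float(np.exp(logC))
print(f"n = {n:.2f}, C = {C:.2e}")
```

The steep exponent n (typically 2-6 for metals and often higher for composites) is exactly why the article warns against treating such fits as physics: a small error in the correlating parameter is amplified enormously in the predicted growth rate.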

Conclusion
Providing insight into the fundamental mechanisms of a phenomenon is the first stage in investigating it. Based on observation of the physical mechanisms, prediction models can then be proposed to answer engineering questions.

However, the opposite should not happen in scientific studies. This principle also holds for fatigue delamination growth. To truly understand what's going on, don't think in terms of stress, but in terms of energy! If you are intrigued by the research described in this article and would like to know more, or maybe do your MSc thesis on this topic, feel free to contact one of the authors.
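The per-cycle dissipated energy dU/dN on which the energy balance approach relies is itself a measured quantity: it can be estimated as the area enclosed by the load-displacement hysteresis loop of each fatigue cycle. A sketch with hypothetical loading and unloading branches (a real test would use the measured load-displacement data of each cycle):

```python
import numpy as np

# Energy dissipated in one fatigue cycle, estimated as the area enclosed by
# the load-displacement hysteresis loop. The loading and unloading branches
# below are hypothetical; the slight stiffness loss on unloading encloses
# a small amount of dissipated energy.
d = np.linspace(0.0, 1.0e-3, 101)   # displacement [m]
P_load = 2.0e5 * d                  # loading branch [N]
P_unload = 1.9e5 * d                # unloading branch: reduced stiffness

def trapezoid(y, x):
    """Plain trapezoidal integration (independent of NumPy version)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

dU = trapezoid(P_load - P_unload, d)   # enclosed area = dU/dN [J/cycle]
print(f"dU/dN = {dU:.3e} J/cycle")
```

Correlating da/dN against this measured dU/dN, as in Figure 2(b), relates two observed quantities directly, which is the core of the energy balance argument above.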

References
[1] Griffith, A. A. (1921). The Phenomena of Rupture and Flow in Solids. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character, 221, 163-198.
[2] Irwin, G. R. (1957). Analysis of stresses and strains near the end of a crack traversing a plate. ASME Journal of Applied Mechanics, 24, 361-364.
[3] Paris, P., & Erdogan, F. (1963). A Critical Analysis of Crack Propagation Laws. Journal of Basic Engineering, 85(4), 528-533.
[4] Yao, L., Alderliesten, R. C., Zhao, M., & Benedictus, R. (2014). Discussion on the use of the strain energy release rate for fatigue delamination characterization. Composites Part A: Applied Science and Manufacturing, 66, 65-72.




LOUPE Observing the Earth as an Exoplanet

TU Delft Aerospace Engineers are working, together with astronomers from Leiden University, on the design of spectropolarimeter LOUPE (Lunar Observatory for Unresolved Polarimetry of Earth) that will observe the Earth from the Moon as if the Earth were an exoplanet to provide scientists with unique reference data. TEXT Thijs Arts, MSc Student Space Engineering, TU Delft

Since the first discovery of a planet orbiting another star in 1992, almost 1,900 of these so-called exoplanets have been confirmed (The Extrasolar Planets Encyclopedia, 2015). Because exoplanets are very faint compared to their star, they are very difficult to observe. Most exoplanets have therefore been discovered using indirect methods, for example by looking at variations in the movement of their parent star. Most known exoplanets are large gas giants in close orbits around their stars. Only a tiny fraction constitutes rocky planets in their star's habitable zone, the region where liquid surface water could be possible. In the media these planets are often called Earth-like and habitable, while only their mass and orbit are known and no information about the composition of the atmosphere and surface is available. Transit spectroscopy, in which starlight that has passed through an exoplanetary atmosphere is analysed, has provided data on close-in giant exoplanet atmospheres, but thin atmospheres and short measurement periods make it very hard to gather enough signal for small, rocky planets.

Data on the atmosphere and surface of a rocky exoplanet in a habitable zone is key to confirming whether these planets are indeed similar to Earth and not to, for instance, Venus, where carbon dioxide and sulphur in the atmosphere have made the planet uninhabitable even though it orbits in the inner part of the Sun's habitable zone. To obtain such data, direct imaging methods are needed (Karalidi et al., 2012). In particular, spectropolarimetry promises to be a valuable method for characterizing atmospheres and surfaces. When unpolarised starlight is scattered by particles in an atmosphere or reflected by a surface, the light becomes polarized. The polarization allows the light coming off the planet to be identified. The strength of the polarization signal depends on the atmospheric particles and the surface. Especially biomarkers like water cloud droplets and vegetation show distinct polarization

features (Stam, 2008). Measuring polarization at several wavelengths (hence the name spectropolarimetry) helps to characterize the atmospheres and surfaces of exoplanets. Polarimetry (without much spectral information) is part of a set of new instruments for the detection of giant exoplanets, like the SPHERE instrument on the Very Large Telescope (VLT) (ESO, 2014). The European Extremely Large Telescope (E-ELT) will have instruments for the detection and characterization of rocky exoplanets. However, in order to optimize the design of such instruments and to be able to understand the data they deliver, spectropolarimetric reference data of known habitable planets like the Earth is needed.


Figure 1. The expected flux (πFn) and degree of linear polarization (PL) on the Moon for different surface types.



Going to the Moon
The Moon is the ideal observation platform for LOUPE, because from there LOUPE can see the Earth as a whole, with illumination and viewing geometries that mimic those of exoplanet observations. From the Moon, the Earth can be observed continuously, as it remains within a field of view of about 20° in the lunar sky, so LOUPE can capture the daily rotation and all phase angles of the Earth. Also, if the lunar mission that carries LOUPE, possibly as a piggyback payload, survives longer than a few months, seasonal variations could be observed. Because no platform has yet been selected, there is a lot of freedom in the instrument design, while at the same time the instrument should be easily adaptable when a launch opportunity arises. This means that the instrument should impose as few constraints on potential platforms as possible: LOUPE should be small, have a low mass, require as little power and data-rate as possible and pose very little risk to the mission goals of the platform, while still being able to perform the necessary spectropolarimetric measurements.

Optical design
LOUPE will measure sunlight that is reflected by the Earth from 400 to 800nm, because most interesting planetary spectral features, like the O2A band, fall within this visible spectral band (see Figure 1) and because the stellar flux is largest there. For its spectropolarimetry, LOUPE uses the so-called SPEX technique (van Harten, 2014): a set of retarders manipulates the polarization of the incoming light according to its wavelength, see Figure 2. In combination with a polarizer and a dispersive element, the recorded spectrum consists of a continuum flux that is modulated with the degree of polarization. Both the total (unmodulated) flux and the polarization can easily be retrieved from this spectrum. This technique allows LOUPE to be very small, lightweight and (almost) free of any moving parts.
To obtain accurate information about the contributions of different surface types, like oceans, forests, deserts and ice, to the Earth's total signal, it is important that LOUPE can spatially resolve oceans and continents. To do this, LOUPE has a microlens array (MLA) after its SPEX optics, see Figure 3, which splits up the field of view and thus determines the spatial resolution. A dispersive element then creates a spectrum of each MLA element and a



LOUPE will be able to provide such data by doing spectropolarimetry of the Earth from the Moon.

Figure 2. The SPEX technique principle.

Figure 3. The prototype of LOUPE that was built at Leiden University

camera lens focuses all spectra onto the detector. Because of the MLA, LOUPE will not perform one measurement of the entire Earth, but instead it will perform measurements of several different patches on the disk of the Earth, from which the signal of the whole will be reconstructed.
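The SPEX-style retrieval can be sketched numerically: the degree of linear polarization sets the contrast of the wavelength-dependent fringes imprinted on the continuum, so it can be recovered locally from the fringe envelope. The modulation form, retardance value and spectra below are synthetic and purely illustrative, not LOUPE design values:

```python
import numpy as np

# Synthetic SPEX-style spectrum: a continuum F(wl) modulated by the degree
# of linear polarization P(wl) through wavelength-dependent fringes.
# Assumed modulation form: S = 0.5 * F * (1 + P * cos(2*pi*delta/wl)).
wl = np.linspace(400e-9, 800e-9, 4000)                       # wavelength [m]
F = 1.0 + 0.2 * np.sin(2 * np.pi * (wl - 400e-9) / 400e-9)   # smooth continuum
P_true = 0.15 + 0.10 * (wl - 400e-9) / 400e-9                # slowly varying P
delta = 1.0e-4                                               # retardance [m]
S = 0.5 * F * (1.0 + P_true * np.cos(2 * np.pi * delta / wl))

def local_P(S, i, half=40):
    """Estimate P at sample i from the fringe contrast (max-min)/(max+min)
    in a window spanning a few fringe periods but little continuum change."""
    w = S[max(0, i - half):i + half]
    return (w.max() - w.min()) / (w.max() + w.min())

i = 2000  # around 600nm
print(round(local_P(S, i), 2), round(float(P_true[i]), 2))
```

Because the fringes vary much faster with wavelength than the continuum or P, flux and polarization separate cleanly, which is what makes the technique work without moving parts.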

Status of the instrument design
LOUPE is currently in an early design phase: all instrument requirements are available, but there are still many conflicts, especially between the science and the instrument requirements. Due to the large freedom in design, there are only a few key requirements that drive the design in order to get LOUPE on the Moon in the first place. A lot of trade-offs will thus have to be made to arrive at a first design. For example, there is the trade-off between a 20° field of view without a pointing mechanism, and a smaller field of view with one. The first option is less risky due to the lack of moving parts and requires less power, but it gives a lower spatial resolution and might result in not seeing the Earth at all if there is any error in the platform attitude. The second option poses more risk, because it adds moving parts, but allows for a higher spatial resolution, and thus better data, and reduces the risk of not seeing the Earth.

In conclusion, LOUPE is an interesting and promising instrument that will provide scientists with the necessary reference data for the research of exoplanet atmospheres and surfaces. To make this happen, a proper first design is being worked on, and a suitable lunar mission is being searched for. This might take some time, but LOUPE on the Moon could be the stepping-stone to finding life on other planets.

If you would like to know more about LOUPE, please do not hesitate to contact us.
Thijs Arts – m.l.j.arts@student.tudelft.nl
Dr. Daphne Stam – d.m.stam@tudelft.nl

References
[1] "The Extrasolar Planets Encyclopedia", http://exoplanet.eu/, 2015.
[2] Karalidi, T., Stam, D.M., Snik, F., Bagnulo, S., Sparks, W.B., & Keller, C., "Observing the Earth as an exoplanet with LOUPE, the Lunar Observatory for Unresolved Polarimetry of Earth", Planetary and Space Science, 74(1), 202-207, 2012.
[3] Stam, D.M., "Spectropolarimetric signatures of Earth-like extrasolar planets", Astronomy and Astrophysics, 482(3), 989-1007, 2008.
[4] "SPHERE – Spectro-Polarimetric High-contrast Exoplanet Research", http://www.eso.org/, ESO, 2014.
[5] van Harten, G., "Spectropolarimetry for Planetary Exploration", PhD thesis, Leiden University, 2014.
[6] Sparks, W.B., Germer, T.A., MacKenty, J.W., & Snik, F., "Compact and robust method for full Stokes spectropolarimetry", Applied Optics, 51(22), p. 54-95, 2012.

april 2015 Leonardo Times




The Drone Threat
The security implications of the widespread availability of private, small-sized unmanned aerial vehicles.

The general public appears to have discovered the possibilities of blue skies in the form of small-scale unmanned aerial vehicles. However, such devices have not come without downsides. Aside from privacy concerns about onboard cameras, there is seriously mounting opposition to such devices from a security perspective. TEXT Manfred Josefsson, Student Aerospace Engineering, Editor Leonardo Times

At three o'clock on a January morning this year, a possibly intoxicated national intelligence employee decided to take his friend's drone out for a flight from his window in central Washington, D.C. In the dead of night, however, the inexperienced pilot lost sight of the drone. He called his friend and both decided it would be best to call off the search for the missing DJI quadcopter until the morning. They would not need to search for it: White House security found it first, discovering the battered drone crashed in the gardens outside the home of the president and sparking a massive security alert and nationwide media coverage. The drone, costing only approximately $500, could, quoting Obama, be "purchased at RadioShack". [1]

The drone had flown half a mile and had eluded the White House radar, which was unable to detect such small objects. The incident proved that even highly secure installations are very vulnerable to drones. Just a week earlier, a conference involving several government agencies had demonstrated a newer version of a similar quadcopter carrying 3 pounds (1.4kg) of explosives, showing their very real potential as weapons. [2]

Small-scale, private drones are a relatively new creation. DJI, one of the leading names, was founded in 2006 and has grown from 20 to 2,800 employees, with a revenue of 131 million dollars in 2013. Remote-controlled vehicles themselves are of course much older, but the wide-scale private spread has only occurred recently. In particular, capturing high-quality aerial footage appears to be a very popular use among hobbyists, especially photographers, who always seek the best point of view. [3]

One of the main concerns about drones is safety: they can present a risk or danger to people and property. While the problem has often been associated with private houses, it also extends to sensitive governmental and industrial sites. A chain of drone sightings around French nuclear power plants in recent months caused concern and raised pressing questions in the security world.


Figure 1. The crashed White House drone (AP Photo)



In addition, there are governmental security concerns, with implications such as those illustrated by the following example. Ahead of a January 2015 Major League Baseball game, the Department of Homeland Security (DHS) decided to spot offenders using radar. Several drones were successfully detected, including models of the DJI Phantom used at the White House. Unfortunately, that was roughly the extent of their capabilities: since drones do not carry transponders, it is not possible to trace the origin and owner of such objects. There was confusion about whether one drone belonged to the news channel ESPN. Ahead of the Super Bowl, the Federal Aviation Administration set up a 30-mile (48km) no-fly boundary for drones and released a clip reminding people to 'leave their drones at home'. In both cases, however, the authorities again stood without a defensive capability, able to threaten only conventional legal action if they got hold of offenders. [5] [6]

If the problem could be well regulated and controlled, it would not cause so much trouble, but this is not the case. Two topics arise here: regulation and enforcement of regulation. To some degree, drones epitomise the more problematic aspects of the Internet: they are principally impossible to disable due to their number and the volume of users, and it is very difficult to track troublemakers down if they are smart. Attempts at absolute control are rather fruitless. A single legal entity banning drones will hardly be effective; people who wish to acquire one of these UAVs can do so across the border or order one online. The author of this article picked up his own low-end video-equipped quadcopter in Bangkok. Following the White House incident, manufacturer DJI was quick to block the White House within their software, adding it to a wide band of other locations, such as airports. This of course does not help against people who use custom setups and software: while it may prevent unintentional breaches, anybody who wants to can still bypass it.

The Federal Aviation Administration in the United States, where the fledgling drone industry is most active, is expected to sort out this regulatory clutter. While it has acknowledged that it is impossible to regulate private drones, it is trying to impose a pilot's license requirement on commercial drone operators and to limit flights to within line of sight. However, this does not solve the security concerns; it merely improves safety in daily use.
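DJI's actual geofencing implementation is proprietary, but the idea behind such a software block — refusing take-off within a radius of listed coordinates — can be sketched in a few lines. All zone names, coordinates and radii below are illustrative assumptions, not values from any vendor's database:

```python
import math

# Hypothetical no-fly list: (name, latitude, longitude, radius in km).
# Coordinates and radii are illustrative only.
NO_FLY_ZONES = [
    ("White House", 38.8977, -77.0365, 25.0),
    ("Example airport", 52.3105, 4.7683, 8.0),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def takeoff_allowed(lat, lon):
    """Return (False, zone_name) inside any listed zone, else (True, None)."""
    for name, zlat, zlon, radius_km in NO_FLY_ZONES:
        if haversine_km(lat, lon, zlat, zlon) < radius_km:
            return False, name
    return True, None

# A position in central Washington, D.C. falls inside the first zone,
# so take-off is refused.
allowed, zone = takeoff_allowed(38.90, -77.03)
print(allowed, zone)
```

As the article notes, a check like this only lives in the vendor's firmware; anybody flying a custom flight controller simply never runs it, which is why geofencing prevents accidents rather than attacks.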


Regarding the French nuclear plant sightings, there was much discussion about who was to blame, although the police have since found where the drones were launched from: fields outside the power plants. [4]

Figure 2. A more positive mentality: the Ambulance Drone (TU Delft)

Enforcement is also difficult. As has been illustrated repeatedly, it is hard to catch those responsible for illicit flights. A relatively new method of drug smuggling has emerged along the US-Mexico border using drones. The sneaky vehicles are estimated to make 150 drug runs a year, carrying their cargo to pre-set GPS coordinates for drop-off. In late January, one was found crashed, presumably overloaded: it was carrying 3kg of methamphetamine with a street value estimated at $48,000 at the time. US Customs and Border Protection admitted that to date they had never caught anybody flying these drones. In fact, the concept appears to have been so successful that cartels have started to build their own drones, capable of transporting even more than the commercially available ones. [7]

There are some solutions, but none that completely solve the problem. Systems are available that alert the user when a drone is detected within a certain area, but advance warning is only helpful where the risk can be avoided, such as at airports. A small group of developers in California is designing the Rapere system, which might provide an answer. Designed to hunt offending drones down, it trails a tangle line to disable its target, causing it to crash, before returning to base. Thanks to its short mission time, its makers say it can outperform normal drones. Needless to say, it remains to be seen whether this will work with the necessary accuracy against targets moving at up to 15m/s (DJI Phantom 2), and it will most certainly remain a costly operation. While it is said to work in laboratory settings, no photos or videos are available. [8]

With incredible numbers of these vehicles being sold, it is only a question of time before things go seriously wrong. Time

will tell whether that happens in an aircraft turbine or as a terrorist act. While this may seem a very negative attitude, the truth remains that drones are very powerful tools that are widely available and underestimated by the general public. On the positive side, TU Delft caught considerable media attention last year with Alec Momont's ambulance drone, demonstrating a mobile defibrillator that could be on site within minutes. While such a concept is of course subject to further analysis, the fact remains that drones can fulfil various useful and important functions within society. It is unfortunate that the difficulty of regulating them and preventing abuse causes so much uncertainty for an industry still in development.

References
[1] http://wapo.st/1BdP8hj
[2] http://www.wired.com/2015/02/white-house-drone/
[3] http://www.independent.co.uk/news/world/europe/french-government-on-high-alert-after-unexplained-drone-flights-over-nuclear-power-stations-9850138.html
[4] http://www.japantimes.co.jp/news/2015/01/08/asia-pacific/china-takes-the-lead-in-fast-growing-drone-market/
[5] http://www.nytimes.com/2015/01/30/us/for-super-bowl-and-big-games-drone-flyovers-are-rising-concern.html
[6] http://www.cnn.com/2015/01/28/us/super-bowl-no-drones/
[7] http://edition.cnn.com/2015/01/22/world/drug-drone-crashes-us-mexico-border/
[8] http://www.cnet.com/news/rapere-the-drone-to-hunt-and-disable-other-drones/



Column

Women in Aerospace
The “Lacking” Species

Most of us know names like Amelia Earhart, in 1932 the first female pilot to fly solo across the Atlantic, and Valentina Tereshkova, cosmonaut, engineer and, in 1963, the first woman to fly in space. However, students and professionals in the field of aerospace have predominantly been male. What is the reason for that, and can it change in the future? TEXT Martina Stavreva, Student Aerospace Engineering, Editor Leonardo Times

Some Statistics
Every year, Delft University of Technology publishes statistics on the percentage of international students and the ratio of female to male students (TU Delft, 2013). Intuitively, one would guess that the majority of students at a technical university are male. Indeed, in 2013, 79% of the students across all faculties were men. The faculty of Aerospace Engineering has an even smaller share of female students: in 2004 only 9% of the students were female, and in 2005, 8%. These percentages have improved since, and nowadays around 12% of Bachelor's students and 10% of Master's students are women. One should not think, however, that these low numbers mean that female students perform worse than their male peers. Quite the opposite: female and male students obtain their BSA at almost the same rate, 59% versus 55% respectively, and female students obtain slightly more ECTS on average, around 44.9 per year, against 44.5 for male students. In a global context, only 13% of engineering undergraduates are women (HESA, 2008) and just 27% of them pursue a career in Science, Engineering and Technology (SET Women, ETB). Moreover, a study by the Royal Academy of Engineering shows that only 6% of engineers are women.

What could be the reason behind this, and how could it be changed?

Reasons
Young people have shown declining interest in pursuing a career in science and technology in recent decades. Various studies have tried to identify the reasons. One of them, performed by Siemens in Germany, showed that science and technology are not well covered in education as early as the pre-school age group. This led to the initiative called Generation 21, later also adopted by Siemens UK, which aims to familiarize young girls and boys with the ideas of Science, Technology and Engineering. Nevertheless, this lack of early exposure still affects young people. Moreover, the lack of information on the opportunities that a degree in a technical field opens up tends to keep women away. The general perception of engineers is still that of men in overalls walking around with toolboxes, repairing machines; in reality, the work of engineers has changed and nowadays involves a great deal of diagnostics and analytical problem solving, whether in the design of a product or at a later stage. Finally, the lack of female role models in engineering is considered a reason for the low percentage of female enrollments.

How Can This Be Improved
One may ask why this should be considered a problem. Lately there has been a shortage of qualified labor in the engineering field, so it is important that new capable and well-educated engineers are found; it would therefore be beneficial if more women were involved in this sphere.

Currently, a number of companies aim to bring more women into their ranks. Moreover, there are many women's organizations, globally, such as Women in Aerospace, and locally, such as Women with Wings, the female committee of the VSV 'Leonardo da Vinci', that provide support and information to young female engineers. Recently, the latter organized an event at which two successful women in aerospace, Hester Bijl, dean of the faculty of Aerospace Engineering, and Luisella Giulicchi, a systems engineer at the European Space Agency, spoke about how they became successful in their profession and what it has given them. Additionally, activities such as company dinners and lectures by successful aerospace women encourage young female engineers to continue towards a career in the technical sphere and provide them with present-day role models.






