IJITCE May 2011



UK: Managing Editor, International Journal of Innovative Technology and Creative Engineering, 1a Park Lane, Cranford, London TW5 9WA, UK. E-mail: editor@ijitce.co.uk, Phone: +44-773-043-0249.
USA: Editor, International Journal of Innovative Technology and Creative Engineering, Dr. Arumugam, Department of Chemistry, University of Georgia, GA-30602, USA. Phone: 001-706-206-0812, Fax: 001-706-542-2626.
India: Editor, International Journal of Innovative Technology & Creative Engineering, Dr. Arthanariee. A. M., Finance Tracking Center India, 261 Mel Quarters, Labor Colony, Guindy, Chennai-600032. Mobile: 91-7598208700.

www.ijitce.co.uk


INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.5 MAY 2011

IJITCE PUBLICATION

International Journal of Innovative Technology & Creative Engineering Vol.1 No.5 May 2011




From the Editor's Desk

Dear Researcher, Greetings!

The research articles in this issue discuss a clustering method, the evaluation and selection of optimal parameters, and an improved energy-efficient routing algorithm. Observing the news from the month of May, we can derive the following research opportunities. Recently, Hewlett-Packard lowered its forecast, citing lousy consumer PC sales and an underachieving services business. This leads us, as researchers, to consider how we can deliver our end results onto handheld devices. Regarding security, the recent attack on the PSN store is well documented. Here, one of the world's largest and most powerful digital giants was unable to handle the attack of a few teenagers. A whopping 99.7% of Android smartphones are leaking login data for Google services, which could allow others to access information stored in the cloud. This news proves that there is a lot of room for improvement in the field of security. The social networking company LinkedIn Corp. raised the expected price range of its initial public offering by 30 percent, which shows the opportunity for networking, information-sharing tools, and technology related to linking minds across the globe.

It's that time again, when the future slowly begins to encroach on everyday society. Austrian research facilities have come up with a way of weaving antennas into workwear via NFC clothing tags, which may give us a future of interactive garments tied to the functionality of our mobile phones. It has been an absolute pleasure to present the articles you wish to read. We look forward to many more new technology-related research articles from you and your friends, and we eagerly await the rich and thorough research papers our authors have prepared for the next issue.

Thanks, Editorial Team IJITCE



Editorial Members

Dr. Chee Kyun Ng, Ph.D., Department of Computer and Communication Systems, Faculty of Engineering, Universiti Putra Malaysia, UPM Serdang, 43400 Selangor, Malaysia.
Dr. Simon SEE, Ph.D., Chief Technologist and Technical Director, Oracle Corporation; Associate Professor (Adjunct), Nanyang Technological University; Professor (Adjunct), Shanghai Jiaotong University; 27 West Coast Rise #08-12, Singapore 127470.
Dr. sc. agr. Horst Juergen SCHWARTZ, Ph.D., Humboldt-University of Berlin, Faculty of Agriculture and Horticulture, Asternplatz 2a, D-12203 Berlin, Germany.
Dr. Marco L. Bianchini, Ph.D., Italian National Research Council, IBAF-CNR, Via Salaria km 29.300, 00015 Monterotondo Scalo (RM), Italy.

Dr. Nijad Kabbara, Ph.D., Marine Research Centre / Remote Sensing Centre / National Council for Scientific Research, P.O. Box 189, Jounieh, Lebanon.
Dr. Aaron Solomon, Ph.D., Department of Computer Science, National Chi Nan University, No. 303, University Road, Puli Town, Nantou County 54561, Taiwan.
Dr. Arthanariee. A. M., M.Sc., M.Phil., M.S., Ph.D., Director, Bharathidasan School of Computer Applications, Ellispettai, Erode, Tamil Nadu, India.
Dr. Takaharu KAMEOKA, Ph.D., Professor, Laboratory of Food, Environmental & Cultural Informatics, Division of Sustainable Resource Sciences, Graduate School of Bioresources, Mie University, 1577 Kurimamachiya-cho, Tsu, Mie, 514-8507, Japan.
Mr. M. Sivakumar, M.C.A., ITIL, PRINCE2, ISTQB, OCP, ICP, Project Manager - Software, Applied Materials, 1a Park Lane, Cranford, UK.
Dr. Bulent Acma, Ph.D., Anadolu University, Department of Economics, Unit of Southeastern Anatolia Project (GAP), 26470 Eskisehir, Turkey.
Dr. Selvanathan Arumugam, Ph.D., Research Scientist, Department of Chemistry, University of Georgia, GA-30602, USA.

Review Board Members

Dr. T. Christopher, Ph.D., Assistant Professor & Head, Department of Computer Science, Government Arts College (Autonomous), Udumalpet, India.
Dr. T. DEVI, Ph.D. Engg. (Warwick, UK), Head, Department of Computer Applications, Bharathiar University, Coimbatore-641 046, India.
Dr. Giuseppe Baldacchini, ENEA - Frascati Research Center, Via Enrico Fermi 45, P.O. Box 65, 00044 Frascati, Roma, Italy.
Dr. Renato J. Orsato, Professor at FGV-EAESP, Getulio Vargas Foundation, São Paulo Business School, Rua Itapeva, 474 (8° andar), 01332-000, São Paulo (SP), Brazil; Visiting Scholar at INSEAD, INSEAD Social Innovation Centre, Boulevard de Constance, 77305 Fontainebleau, France.
Y. Benal Yurtlu, Assist. Prof., Ondokuz Mayis University.
Dr. Paul Koltun, Senior Research Scientist, LCA and Industrial Ecology Group, Metallic & Ceramic Materials, CSIRO Process Science & Engineering, Private Bag 33, Clayton South MDC 3169, Gate 5 Normanby Rd., Clayton Vic. 3168.


Dr. Sumeer Gul, Assistant Professor, Department of Library and Information Science, University of Kashmir, India.
Chutima Boonthum-Denecke, Ph.D., Department of Computer Science, Science & Technology Bldg., Rm 120, Hampton University, Hampton, VA 23688.
Lucy M. Brown, Ph.D., Texas State University, 601 University Drive, School of Journalism and Mass Communication, OM330B, San Marcos, TX 78666.
Javad Robati, Crop Production Department, University of Maragheh, Golshahr, Maragheh, Iran.
Vinesh Sukumar (PhD, MBA), Product Engineering Segment Manager, Imaging Products, Aptina Imaging Inc.
doc. Ing. Rostislav Chotěborský, Ph.D., Katedra materiálu a strojírenské technologie, Technická fakulta, Česká zemědělská univerzita v Praze, Kamýcká 129, Praha 6, 165 21.
Dr. Binod Kumar, M.Sc., M.C.A., M.Phil., Ph.D., HOD & Associate Professor, Lakshmi Narayan College of Tech. (LNCT), Kolua, Bhopal (MP), India.
Mr. Abhishek Taneja, B.Sc. (Electronics), M.B.E., M.C.A., M.Phil., Assistant Professor, Department of Computer Science & Applications, Dronacharya Institute of Management and Technology, Kurukshetra, India.
Dr. Amala VijayaSelvi Rajan, B.Sc., Ph.D., Faculty of Information Technology, Dubai Women's College, Higher Colleges of Technology, P.O. Box 16062, Dubai, UAE.
Naik Nitin Ashokrao, B.Sc., M.Sc., Lecturer, Yeshwant Mahavidyalaya, Nanded University.
Dr. A. Kathirvell, B.E., M.E., Ph.D., MISTE, MIACSIT, MENGG, Professor, Department of Computer Science and Engineering, Tagore Engineering College, Chennai.
Dr. H. S. Fadewar, B.Sc., M.Sc., M.Phil., Ph.D., PGDBM, B.Ed., Associate Professor, Sinhgad Institute of Management & Computer Application, Mumbai-Bangalore Western Express Way, Narhe, Pune - 41.
Dr. David Batten, Leader, Algal Pre-Feasibility Study, Transport Technologies and Sustainable Fuels, CSIRO Energy Transformed Flagship, Private Bag 1, Aspendale, Vic. 3195, Australia.
Dr. R. C. Panda, M.Tech. & Ph.D. (IITM), Ex-Faculty (Curtin Univ. Tech., Perth, Australia), Scientist, CLRI (CSIR), Adyar, Chennai - 600 020, India.
Miss Jing He, Ph.D. Candidate, Georgia State University, 1450 Willow Lake Dr. NE, Atlanta, GA 30329.
Dr. Wael M. G. Ibrahim, Department Head, Electronics Engineering Technology Dept., School of Engineering Technology, ECPI College of Technology, 5501 Greenwich Road, Suite 100, Virginia Beach, VA 23462.
Dr. Messaoud Jake Bahoura, Associate Professor, Engineering Department and Center for Materials Research, Norfolk State University, 700 Park Avenue, Norfolk, VA 23504.

Dr. V. P. Eswaramurthy M.C.A., M.Phil., Ph.D., Assistant Professor of Computer Science, Government Arts College(Autonomous), Salem-636 007, India.


Dr. P. Kamakkannan, M.C.A., Ph.D., Assistant Professor of Computer Science, Government Arts College (Autonomous), Salem-636 007, India.
Dr. V. Karthikeyani, Ph.D., Assistant Professor of Computer Science, Government Arts College (Autonomous), Salem-636 008, India.
Dr. K. Thangadurai, Ph.D., Assistant Professor, Department of Computer Science, Government Arts College (Autonomous), Karur-639 005, India.
Dr. N. Maheswari, Ph.D., Assistant Professor, Department of MCA, Faculty of Engineering and Technology, SRM University, Kattankulathur, Kancheepuram Dt-603 203, India.
Mr. Md. Musfique Anwar, B.Sc. (Engg.), Lecturer, Computer Science & Engineering Department, Jahangirnagar University, Savar, Dhaka, Bangladesh.
Mrs. Smitha Ramachandran, M.Sc. (CS), SAP Analyst, AkzoNobel, Slough, United Kingdom.
Dr. V. Vallimayil, Ph.D., Director, Department of MCA, Vivekanandha Business School For Women, Elayampalayam, Tiruchengode-637 205, India.
Mr. M. Rajasenathipathi, M.C.A., M.Phil., Assistant Professor, Department of Computer Science, Nallamuthu Gounder Mahalingam College, India.
Mr. M. Moorthi, M.C.A., M.Phil., Assistant Professor, Department of Computer Applications, Kongu Arts and Science College, India.
Prema Selvaraj, B.Sc., M.C.A., M.Phil., Assistant Professor, Department of Computer Science, KSR College of Arts and Science, Tiruchengode.
Mr. V. Prabakaran, M.C.A., M.Phil., Head of the Department, Department of Computer Science, Adharsh Vidhyalaya Arts and Science College for Women, India.
Mrs. S. Niraimathi, M.C.A., M.Phil., Lecturer, Department of Computer Science, Nallamuthu Gounder Mahalingam College, Pollachi, India.
Mr. G. Rajendran, M.C.A., M.Phil., N.E.T., PGDBM, PGDBF, Assistant Professor, Department of Computer Science, Government Arts College, Salem, India.
Mr. R. Vijayamadheswaran, M.C.A., M.Phil., Lecturer, K.S.R College of Arts & Science, India.
Ms. S. Sasikala, M.Sc., M.Phil., M.C.A., PGDPM & IR, Assistant Professor, Department of Computer Science, KSR College of Arts & Science, Tiruchengode-637 215.
Mr. V. Pradeep, B.E., M.Tech., Asst. Professor, Department of Computer Science and Engineering, Tejaa Shakthi Institute of Technology for Women, Coimbatore, India.
Dr. Pradeep H. Pendse, B.E., M.M.S., Ph.D., Dean - IT, Welingkar Institute of Management Development and Research, Mumbai, India.
Mr. K. Saravanakumar, M.C.A., M.Phil., M.B.A., M.Tech., PGDBA, PGDPM & IR, Asst. Professor, PG Department of Computer Applications, Alliance Business Academy, Bangalore, India.
Muhammad Javed, Centre for Next Generation Localisation, School of Computing, Dublin City University, Dublin 9, Ireland.
Dr. G. Gobi, Assistant Professor, Department of Physics, Government Arts College, Salem-636 007.
Dr. S. Senthilkumar, Research Fellow, Department of Mathematics, National Institute of Technology (REC), Tiruchirappalli-620 015, Tamil Nadu, India.



Contents

1. Thermal Interface Materials used for Improving the Efficiency and Power Handling Capability of Electronic Devices: A Review … [1]

2. Performance Evaluation and Selection of Optimal Parameters in Turning of Ti-6Al-4V Alloy Under Different Cooling Conditions … [10]

3. A Trust Model for Secure and QoS Routing in MANETS … [22]

4. Investigating the Particle Swarm Optimization Clustering Method on Nucleic Acid Sequences … [32]

5. An Optimized Energy Efficient Routing Algorithm For Wireless Sensor Network … [41]



Thermal Interface Materials used for Improving the Efficiency and Power Handling Capability of Electronic Devices: A Review

Priyanka Jaiswal and C.K. Dwivedi*
Department of Electronics and Communication Engineering, J.K. Institute of Applied Physics and Technology, University of Allahabad, Allahabad-211002, U.P., INDIA

Abstract- A thermal interface material (TIM) is used to enhance heat flow by reducing the thermal resistance across the interface between a heat source and a heat sink, and to minimize the variance of the interface resistance compared with bare surface-to-surface contact. In this mini review, recent developments in the thermal properties of various established and advanced thermal interface materials are discussed, along with the new technologies employed for improving the performance of TIMs. A brief account of the applications of thermal interface materials is also presented.

Keywords: Thermal interface materials, heat flow, heat sink, interface resistance, surface-to-surface contact

I. INTRODUCTION

New gadgets of smaller size and faster speed are introduced almost daily in today's world of electronics. A well-designed thermal management program is necessary to optimize performance and reliability for the smooth functioning of these electronic devices. A heat sink is joined to the semiconductor for efficient transfer of heat from the source to the environment, and a Thermal Interface Material (TIM) is required to improve heat flow across this thermal interface by eliminating air voids (http://www.chomerics.com/products/documents/thermcat/heat_transfer_fund.pdf). In response to the need for improved heat dissipation, various forms of liquid cooling and refrigeration are being considered. All liquid cooling systems have significant drawbacks, including parts and labor costs, reduced reliability, and increased weight; in addition, the fluid pump consumes power. The use of materials with extremely high thermal conductivities can extend the reach of convection cooling, potentially overcoming the need for liquid cooling. Thermoelectric Coolers (TECs), which require power input, are widely used for temperature control of laser diodes and micromechanical devices; materials with high thermal conductivities can improve thermoelectric cooling efficiency while reducing power consumption.

TIMs are used to eliminate interstitial air gaps from the thermal interface by conforming to the rough and uneven mating surfaces. Since TIMs have significantly greater thermal conductivity than the air they replace, the resistance across the interface decreases and the component junction temperature is reduced. The TIM families include: Elastomeric Pads/Insulators, Thermally Conductive Adhesive Tapes, Phase Change Materials, Thermally Conductive Gap Fillers, Thermally Conductive "Cure in Place" Compounds, Thermal Compounds or Greases, and Thermally Conductive Adhesives (http://www.chomerics.com/tech/Therm_mgmt_Artcls/TIMarticle.PDF). The success of any thermal




interface will depend on the design, the quality of the interface material, and its proper installation. There are key differences between thermal greases, phase change materials (PCMs), thermal pads and films, adhesives, and alloy composite materials (http://www.gasketfab.com/tipdf/thermalinterface.pdf). In principle, a phase change material is a solid at room temperature; as the device heats up, the PCM changes from a solid to a flowable phase in which the material can fill in all the surface roughness, similar to a grease. Phase change materials tend to be much harder than thermal pads, so the pressure required to meet thermal targets often presents a mechanical challenge for design engineers. Manufacturers have developed thermally conductive compounds and thermal adhesives for attaching heat sinks to high-powered electronic components, such as Arctic Silver 5 (AS-5), Matrix, Arctic Alumina, Céramique, ArctiClean, Arctic Silver Thermal Adhesive, and Arctic Alumina Thermal Adhesive.
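The role of the interface can be made concrete with a first-order series-resistance model of the junction-to-ambient heat path. The sketch below is illustrative only; the resistance values are hypothetical assumptions, not measurements from this review:

```python
# First-order series thermal-resistance model of a packaged device.
# All resistance values below are illustrative, not measured data.

def junction_temp(power_w, t_ambient_c, resistances_c_per_w):
    """Junction temperature for a series thermal path: Tj = Ta + P * sum(R)."""
    return t_ambient_c + power_w * sum(resistances_c_per_w)

# Hypothetical 50 W device in a 25 C ambient:
r_junction_case = 0.2   # C/W, die to package case
r_dry_joint     = 2.0   # C/W, bare surface-to-surface contact (trapped air)
r_grease_joint  = 0.15  # C/W, same joint filled with thermal grease
r_sink_ambient  = 0.5   # C/W, heat sink to ambient

tj_dry = junction_temp(50, 25, [r_junction_case, r_dry_joint, r_sink_ambient])
tj_tim = junction_temp(50, 25, [r_junction_case, r_grease_joint, r_sink_ambient])
print(round(tj_dry, 1), round(tj_tim, 1))  # the grease-filled joint runs far cooler
```

Because the resistances add in series, even a modest interface resistance dominates the stack once the other terms are small, which is why TIM selection matters so much.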

Researchers have found that carbon has more advantages than silicon: by using carbon as the manufacturing material, we can achieve smaller, faster, and stronger chips. An electronic chip can be manufactured using carbon as the wafer. A diamond chip works at temperatures up to 1000 degrees Celsius, while silicon chips stop working above 150 degrees Celsius. Diamond can also resist voltages up to around 200 volts, compared with around 20 volts for a silicon chip, so power electronics such as inverters can be made much smaller. One drawback is that electricity cannot travel smoothly through diamond. Compared to a Barium Oxide (BaO) substrate package, a diamond substrate package shows a better mean-time-to-failure by a factor of two [1]. A novel integral dielectric heat sink material, consisting of a diamond-deposited carbon/carbon composite with metallic layer(s) on the diamond surface, has been developed; its reported characteristics include microstructure, thermal conductivity, diamond coating quality, and thickness of the metallised layer [2]. Advanced electronic devices are being developed

based on synthetic diamond as the semiconductor material. The Chemical Vapor Deposition (CVD) process allows single crystal CVD diamond to be manufactured to a high purity and consistency. CVD diamond is resistant to heat, radiation, and acid attack, and boasts the best thermal conductivity of any material near room temperature. It is a good electrical insulator but can be doped like silicon to create semiconductor devices, and it is available in single crystal and polycrystalline forms. The heat generated within an electronic device is most often concentrated in a small area, where temperatures rise much higher. By spreading the localized heat generated by the device, DiaTherm™ (a diamond heat spreader) can improve the cooling capability of laser die in assembled devices by 30% to 100% (Website: www.sp3diamondtech.com). In this mini review, these updates in the thermal properties of various thermal interface materials are discussed, along with the new technologies employed for improving the performance of TIMs and a brief account of their applications.

II. THERMAL INTERFACE MATERIALS A. Selection of Thermal Interface materials

An integral part of the thermal design process is the selection of the optimal TIM for a specific product application. A basic understanding of their strengths, weaknesses, and applicability is key to successful selection of the best interface material. The selection of the proper TIM depends on many factors, such as power density, heat dissipation, process requirements, bond line thickness (BLT), reworkability, and user preferences.

B. Advanced thermal interface materials

A wide range of standard and specially designed thermal management materials, including silicone and non-silicone heat sink compounds, pads, gap fillers, and epoxies, are available




worldwide. Nano-Crystalline Diamond (NCD) in an amorphous carbon matrix, deposited as a nanocomposite film, as well as Carbide Nano-Films (CNF), combines the properties of diamond with very low friction, high toughness, and biocompatibility. A new class of Nano-Thermal Interface Material (Nano-TIM) with adhesion functions has been developed. Nanoparticles such as nano-silver particles, Carbon Nanotubes (CNT), and Nano-Silicon Carbide (NSC) particles can be embedded into Nano-Fibers (NF) to enhance the thermal conductivity and to reduce the thermal resistivity (Contact: johan.liu@chalmers.se). A very thin TIM composed of carbon nanotubes, silicon thermal grease, and chloroform has been developed which has very low thermal impedance [3]. TSE328-G silicone adhesive is a low-viscosity, thermally conductive material, which is uncured at room temperature and cures to an elastomer after the application of heat (http://asia.matweb.com/search/SpecificMaterialText.asp?bassnum=PGESIL071).
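The filler-loading effect described above can be estimated with the classical Maxwell effective-medium model for spherical inclusions. This is a textbook approximation, not a model used in the cited work, and the material values below are rough literature figures:

```python
def maxwell_keff(k_matrix, k_particle, phi):
    """Maxwell effective-medium conductivity (W/m-K) for a dilute
    suspension of spherical particles at volume fraction phi."""
    num = k_particle + 2 * k_matrix + 2 * phi * (k_particle - k_matrix)
    den = k_particle + 2 * k_matrix - phi * (k_particle - k_matrix)
    return k_matrix * num / den

# Silicone matrix (~0.2 W/m-K) loaded with 10 vol% silver (~429 W/m-K):
print(round(maxwell_keff(0.2, 429.0, 0.10), 3))
```

Note that this model saturates quickly at dilute loadings; percolating filler networks such as CNT arrays can far exceed the Maxwell estimate, which is part of why nanostructured TIMs are attractive.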

Expanded flexible graphite sheet materials with high thermal conductivity have the ability to conform well to surfaces [4]. These sheets are porous. Graphite material containing synthetic oil reduces the contact resistance, because of a decrease in the material's elastic modulus and an increase in its plastic deformation at low pressures (<700 kPa), and shows higher thermal stability than graphite material containing mineral oil. To enhance heat dissipation for high-brightness LEDs (light emitting diodes), thermal interface materials with aluminum and graphite plates have been fabricated and integrated in power LEDs; a large variation in junction temperature and light output among the different heat-sink modules was observed [5]. The need to dissipate the sizable amounts of heat generated even in low-power electronic devices creates a problem. To address this, the 3M company provides thermally conductive adhesive transfer tapes, interface pads, and epoxies (Website: www.3m.com). Adhesive Research offers several sophisticated bonding technologies designed for bus bar, solar cell junction box, and encapsulation applications in crystalline and thin film photovoltaic (PV) modules (Website: www.adhesivesresearch.com). Experimental thermal resistance results of some

advanced thermal greases used for power electronics have been illustrated by Narumanchi, 2007 [30]. In high-performance flip-chip ball grid array (FC-BGA) packages for ASIC and microprocessor devices, silver-filled Thermal Paste Adhesives (TPA) are used [6]. The CarbAl™ heat transfer material has properties unlike conventional passive thermal management materials [7]. Only exotic and expensive synthetic diamond films, diamond composites, and Highly-Oriented Pyrolytic Graphites (HOPGs) have higher thermal conductivity and a closer Coefficient of Thermal Expansion (CTE) match than the CarbAl™ material. Thermal conductivity as a function of coefficient of thermal expansion has also been demonstrated by Narumanchi, 2007 [30]. Carbon Nanotubes (CNT) are sheets of graphite rolled up into a tube. Their dimensions are variable (down to 0.4 nm in diameter), and nanotubes can also exist within nanotubes, leading to the distinction between multi-walled and single-walled carbon nanotubes. Free-standing CNT arrays can be very good thermal interface materials under moderate load compared with indium sheet and phase-change thermal interface materials [8]. Further, combinations of CNT arrays and existing thermal interface materials can improve those materials' thermal contact conductance. Carbon nanotube and polymer composites can form foams; these lightweight foams can be produced with improved electrical, mechanical, and thermal properties. A double-layer structure of aligned carbon nanotube arrays was successfully developed via sandwich growth using the thermal Chemical Vapor Deposition (CVD) method, for enhancing the performance of nanotube-based devices [9].

Thermal tapes used as a TIM provide a strong attachment between a heat sink and a heat-generating component along with proper heat conduction [10]. They are designed to meet the microelectronic packaging industry's strict requirements. The drawback of such tapes is that the thermal resistance of a thick tape can sometimes be only slightly better than that of a dry joint




or air gap. Thermal performance should be carefully evaluated along with shear-creep resistance data of the tapes.
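The dry-joint comparison can be quantified with the plain conduction relation R = t/(kA). The numbers below are illustrative assumptions (a 20 mm x 20 mm contact with a 50 micron joint), not data from any cited tape:

```python
def layer_resistance(thickness_m, k_w_per_m_k, area_m2):
    """Conduction resistance of a flat layer: R = t / (k * A), in K/W."""
    return thickness_m / (k_w_per_m_k * area_m2)

area = 0.02 * 0.02     # 20 mm x 20 mm contact area, m^2 (assumed)
gap = 50e-6            # 50 micron joint thickness, m (assumed)

r_air = layer_resistance(gap, 0.026, area)   # trapped air, ~0.026 W/m-K
r_tape = layer_resistance(gap, 0.6, area)    # filled tape, ~0.6 W/m-K (assumed)
print(round(r_air, 2), round(r_tape, 3))     # air is far worse per unit thickness
```

Because R grows linearly with thickness, doubling the tape thickness doubles its conduction resistance, which is why a thick tape plus contact resistance can end up only marginally better than a dry joint.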

III. PROPERTIES OF TIMS

A nanostructured thermal tape has been created that conducts heat like a metal while allowing the neighboring materials to expand and contract with temperature changes (metals are too stiff to allow this). This ability to reduce chip temperatures while remaining compliant is a key breakthrough for electronic packaging. The nanotape performs so well because of a combination of binder materials surrounding carbon nanotubes, which allows very high thermal conduction while remaining flexible enough to adjust as conditions require. The nanotape can replace solder pads with a thin, lightweight material that improves thermal energy management. A process has been developed to integrate the high thermal conductivity of carbon nanotubes into a polymer matrix, creating a material with high thermal transport properties for laser and IC chip packaging.

Graphene behaves as a strong heat conductor, which could mean lower temperatures and a concrete possibility for chip manufacturers to reach higher processing speeds with relative ease. Graphene is a one-atom-thick planar sheet of sp2-bonded carbon atoms densely packed in a honeycomb crystal lattice [11]. Graphene is highly conductive, conducting both heat and electricity better than any other material, including copper, and it is also stronger than diamond. It is almost completely transparent, yet so dense that not even helium, the smallest gas atom, can pass through it. Graphene electronic devices are predicted to be substantially faster, thinner, and more efficient; the theoretical mobility of electrons in pure graphene is 200 times that of silicon. This makes it an extremely interesting material for future high-speed electronics and sensors. Graphene Nano Ribbons (GNRs) with zig-zag band structure can be semiconducting or metallic depending on width. Multiple layers of graphene show strong heat-conducting properties that can be harnessed in removing dissipated heat from electronic devices.

High coefficients of thermal expansion (CTEs) cause high thermal stresses when materials are attached to the semiconductors and ceramics used in electronic and optoelectronic applications. CVD diamond, Kovar, copper/tungsten, copper/molybdenum, and the new composite and monolithic thermal management materials show low CTE. The CTE differences may not be significant when device dimensions are small. Reliable hard-solder die attach techniques are used to bond large high-CTE compound semiconductor dice directly to copper packaging without compromising either reliability or thermal performance [12]. The low density of TIM materials such as CVD diamond is useful in most packaging, and is important in weight-critical applications such as aircraft and spacecraft electronic systems, notebook computers, and other mobile devices. Density matters even for stationary applications, because stresses arising from shock loads during shipping (50g is a common requirement) depend directly on component mass.

Thermal conductivity describes a material's ability to conduct heat. Thermal resistance is a measure of how a material of a specific thickness resists the flow of heat. Thermal conductivity and thermal resistance describe heat transfer within a material once heat has entered it. The thermal impedance of a material is defined as the sum of its thermal resistance and any contact resistance between it and the contacting surfaces. Such data can be used to characterize the ability of a material to conform to surfaces so as to minimize contact resistance. One study shows that the thermal conductivity of a TIM changes with temperature: as temperature increases, the contact between the two surfaces where the TIM is applied can become better or worse, and consequently the thermal resistance of a package may increase or decrease. The thermal conductivity variation in the LED chip




area changes the thermal resistance coefficient of the LED.

IV. TECHNOLOGIES USED FOR IMPROVING THE PERFORMANCE OF TIMS

The CVD processes used by Element Six allow the growth of polycrystalline and single crystal CVD diamond films with consistent characteristics and few defects. Through a controlled manufacturing process, the synthesized material can be tailored to a particular application at attractive cost. Diamond-like carbon (DLC) film is deposited by radio frequency plasma-enhanced chemical vapor deposition (rf-PECVD) on silicon (100) wafers using methane (CH4) and hydrogen (H2) as the precursors. Two new microwave plasma reactors, excited with a hybrid "TM013 plus TEM001" mode, have been designed and developed to operate in the 200-400 Torr pressure regime. High-quality single crystal diamond (SCD) and polycrystalline diamond (PCD) are synthesized with deposition rates that exceed 100 microns/hr and reactor power efficiencies ranging from 6-25 mm3 of diamond per kW-h over a 25 mm diameter synthesis area as pressure, power density, and gas chemistry are varied [13].

Isolated Nanocrystalline Diamond (NCD) fibers have been produced at high growth rates by the high-power microwave plasma-enhanced chemical vapor deposition (MPECVD) technique [14]. The developed nanocrystalline diamond material shows perfect sample crystallinity. Applications of this novel material include cold-cathode devices, heat sinks in microelectronics, and structural materials in micro- and nano-electromechanical systems. The nanodiamond particles act as seeds for the further growth of NCD on the nanofibers. Reasonable volume-averaged velocity and temperature distributions were obtained by treating a microchannel as a fluid-saturated porous medium; this methodology is not limited by assumptions on the laminar or fully developed nature of the flow and is able to predict the total thermal resistance of any microchannel heat sink satisfactorily [15].

Adhesive Research has unveiled a new bonding method, Pressure Sensitive Adhesive (PSA) technology, for photovoltaic (PV) manufacturers, which delivers added functionality for enhancing the capabilities and performance of a variety of PV modules, including ease-of-handling, continuous roll formats, and no messy clean-up (Website: www.adhesivesresearch.com). Three different cooling methods, using a heat sink, a thermoelectric cooler (TEC), and a fan, have been applied to solve the heat dissipation problem [16]; this technique reduces the thermal resistance and improves the cooling capacity. To build three-dimensional (3-D) Multi-Chip Modules (MCMs) for high-power, high-density, light-weight electronic systems, large-area free-standing CVD diamond is deposited on Si and Cu substrates as the heat spreading material [17]. The relative properties of diamond and certain package materials have been presented by Xie et al., 2005 [17].

For the purpose of graphite foam thermosyphon design in electronics cooling, effects such as graphite foam geometry, sub-cooling, working fluid, and liquid level have been investigated [18]. The use of graphite foam as the evaporator in a thermosyphon enables the transfer of large amounts of energy with a relatively low temperature difference and without the need for external pumping [19]. The graphitized carbon foam used is an open-cell porous material consisting of a network of interconnected graphite ligaments whose thermal conductivities are up to five times higher than copper. While bulk graphite foam has a thermal conductivity similar to aluminum, it has one-fifth the density, making it an excellent thermal management material.

A unique pin-fin carbon fiber epoxy composite heat sink is used for surface enhancement at the evaporator of a thermosyphon [20]. A thermosyphon is a passive means of utilizing two-phase flow in a system, similar to a heat pipe, the only difference being the absence of capillary action. The carbon-fiber heat sink exhibited thermal performance below that of the carbon foam due to the smaller surface area resulting from the pin-fin design. However, its thermal performance is improved when compared to other traditional pin-fin evaporators.

Adhesion functions are incorporated into the developed nano-TIMs using lamination or direct mixing methods; for example, hot-melt adhesive is directly laminated into the nano-TIM materials to provide the adhesion function. Thermally conductive resins provide fast heat dissipation for a variety of electronic and industrial applications (Website: www.epoxies.com). For the study of the thermal performance of a solar air collector system, neural network (NN) software is used to simplify the performance analysis. This new formulation can be employed with any programming language or spreadsheet program [21].

Grease materials have been successfully specified into servers, desktops, and notebook CPUs (central processing units) in the computer industry. They are also widely used in displays, automotive control units, and communications equipment. Typically, greases are specified when an application demands high performance and thin bond lines. Thermal pads are effectively employed in a wide range of low-thermal-demand applications such as disk drives, chipsets, communication equipment, and general printed circuit board (PCB) protection. Phase change materials (PCMs) are designed to offer the thermal performance of greases, with the advantages of pre-cured thermal pads, in a single material. Phase change materials are widely used for memory modules, graphics chips, and notebook computers. Polymer solder hybrids and metal matrix TIMs are most often found in high-end processor chip packaging; their cost limits their use to highly specialized chips. Polymer solder hybrids are available in the form of greases, PCMs and pads. CVD polycrystalline diamond films are used as heat spreaders for heat sink applications [22].
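Why thin bond lines and high conductivity matter can be seen from a simple one-dimensional bond-line resistance estimate. This is an illustrative sketch only: the die size, layer thicknesses and conductivities below are assumed values chosen to be typical, not figures from this survey, and interface contact resistances are ignored.

```python
# Illustrative 1-D estimate of TIM bond-line conductive resistance:
#   R = t / (k * A)
# All numbers below are assumed for illustration only.

def bondline_resistance(thickness_m: float, k_w_mk: float, area_m2: float) -> float:
    """Conductive resistance (K/W) of a TIM layer of the given thickness,
    thermal conductivity and contact area, ignoring contact resistances."""
    return thickness_m / (k_w_mk * area_m2)

area = 0.01 ** 2  # assumed 10 mm x 10 mm die, in m^2

# Assumed typical values: a ~50 um grease bond line vs a ~250 um thermal pad.
grease = bondline_resistance(50e-6, 4.0, area)
pad = bondline_resistance(250e-6, 3.0, area)

# The thinner, more conductive layer gives a much smaller resistance, which is
# why greases are specified when performance and thin bond lines are demanded.
print(f"grease: {grease:.3f} K/W, pad: {pad:.3f} K/W")
```

Even with similar bulk conductivities, the five-fold difference in bond-line thickness dominates the resulting resistance.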

V. APPLICATIONS OF TIMS

The miniaturization and increased functionality of electronic devices are causing manufacturers to seek new thermal management solutions. When compared to conventional materials, chemical vapor deposition (CVD) diamond is emerging as an effective heat spreader. A developing application of CVD diamond as a thermal management tool for both power devices and processors is the work being done with Diamond-On-Silicon (DoS). Both laser and light-emitting diode (LED) device manufacturers are using CVD diamond thermal submounts. Even in low-cost LED packages, small segments of diamond are used to enhance color output stability (http://www.electroiq.com/index/display/packagingarticle-display/276420/articles/advancedpackaging/volume-15/issue-10/features/cvddiamond-solves-thermal-challenges.html). The ultra-high thermal conductivity (up to 2000 W/m-K) of CVD diamond enables increases in microprocessor frequency and in the output power of microelectronic and optoelectronic devices such as radar and other radio-frequency (RF) devices, power semiconductors, laser diodes and LEDs. High-power military and space applications, challenged for size and weight, have found diamond to be a useful material. CVD diamond has also found its way into the designs of RF power packages, amplifiers, radar devices and infrared cameras, and is used in the development of high-power, high-temperature semiconductor devices.

TIMs are also used in some specific applications. Thermally conductive adhesives [23] are used as the interface between a silicon chip and a heat spreader. These conductive adhesives can also replace solder, screws, bolts, clamps or other mechanical attachment devices for cost savings or other reasons. Some thermally conductive adhesives with electrical conductivity can be used both to conduct heat away and to provide an electrical ground to the board. Because of the loading of conductive filler particles, these adhesives are harder and exhibit less elongation than their unfilled counterparts. Thermal interface materials using carbon nanotubes (CNTs) can maintain an acceptable operating temperature for power electronics; CNTs have been explored in a realistic IGBT package configuration. The optical performance of high-brightness light-emitting diode (HB-LED) packages was greatly improved with the use of an aligned CNT-TIM [24]. Cool silver thermal grease is used for PC cooling in place of arctic silver (Website: www.altechnology.com).

In a personal computer, laptop or portable electronics, the better the thermal interface material, the smaller the heat sink and overall chip-cooling system can be. Forests of tiny cylinders called carbon nanotubes are grown onto the surfaces of computer chips to enhance the flow of heat at the critical point where the chips connect to cooling devices called heat sinks. This nanotube layer makes possible a low-cost manufacturing approach to keep future chips from overheating and to reduce the size of cooling systems. Nano tape consists of a vertically aligned carbon nanotube forest at its central core, with carefully chosen alloys on both the top and bottom that wet the carbon nanotubes and make contact with the heat sink and the chip. The nanotape looks like a conventional solder pad and works with the same equipment, but it has the mechanical characteristics of an aerogel and the thermal conductivity of a metal. The nanotape can more reliably transfer heat to thermoelectric generators, enabling greatly improved fuel economy [25]. The metallic adhesion layers on each side of the central carbon nanotube forest are shown by Johnson [25].

B. Economic status of thermal interface materials

Thermal conductivity is the most important requirement in thermal management applications. Values in excess of 8 W/cm·K (about twice the value of copper) are adequate and routine in most applications. Higher thermal conductivities are possible but, to achieve them, films must be grown at lower rates; production costs therefore increase to the point where film prices can be prohibitive. Films must also be reasonably thick in order to spread the heat properly. As the typical thickness required is 300 to 500 µm, deposition techniques must have high enough rates to keep deposition times within practical limits. Process techniques that eliminate cracking are under development. About 20% to 35% of the total plant cost goes to constructing the air-cooled heat exchangers in a geothermal power plant. Two layers of hot screens above the electric heater at different heights enhanced the average air velocity at the inlet of the model from 1.05 to 1.64 m s-1 and more. This imposed design of the cooling system increased the plant performance and decreased the installation cost [29].

High Thermally Conductive Graphite Foams are used for Compact Lightweight Radiators [26]. The graphite foam material keeps high-intensity LEDs as cool as those used for front-panel indicators, thereby reclaiming their long lifetimes [27]. The graphite foams are used in commercial LED lighting systems such as in the large arrays for street lamps and parking garages [28].

The films require lapping or polishing after deposition (usually on both sides). Polishing significantly impacts production costs, due to the hardness of some TIMs such as diamond, and new polishing techniques are constantly being developed in an effort to reduce these costs. The thermal properties of diamond have permitted a cost reduction of the overall device package even though the price of the diamond substrate was higher than that of the previous substrate. AT&T announced that thermal management comprises the largest active CVD diamond market today. The greatest impact on the price of bulk diamond is still to come.

A. Future Prospects

Researchers are developing thermal nanotape to help cool future central processing units (CPUs) and graphics processing units (GPUs). Graphene could be used to produce ultracapacitors with a greater energy storage density than is currently available. Solar cells and fuel cells are further potential areas of application of carbon-based nanomaterials in the energy field.


The prospect for lower costs in future advanced systems is excellent, with diamond production costs predicted to drop to less than $10/carat before polishing. Production costs at this level will open up new markets which are price sensitive.

VI. CONCLUSION

The survey of existing literature on this subject reveals that TIM materials with high thermal conductivities have a variety of potential benefits that can contribute to a reduced total cost of ownership, including: elimination of the need for liquid cooling; reduced system cooling power consumption; reduced building power consumption; and increased operational lifetime. To minimize cost it is important not to over-specify the requirements for thermal conductivity, planar dimensions, and thickness. More intensive research in this direction is needed to make thermal interface materials more effective in enhancing the power-handling capability of electronic devices.

ACKNOWLEDGEMENT

One of the authors (PJ) is grateful to the University Grants Commission (UGC), New Delhi, for providing a Research Scholarship for the present research work at the University of Allahabad, Allahabad, India. PJ is also thankful to the Head, JK Institute of Applied Physics and Technology, University of Allahabad, for his support in this context.

REFERENCES

[1] R. Petkie and P. Santini, "Packaging aspects of CVD diamond in high performance electronics requiring enhanced thermal management", in Proceedings of the 4th International Symposium on Advanced Packaging Materials, March 15-18, Braselton, GA, USA, pp. 223-228, 1998. 10.1109/ISAPM.1998.664463
[2] J.M. Ting and M.L. Lake, "A novel composite material for electronic packaging", Microelectron. Int., 12: 30-31, 1995.
[3] J.J. Park and M. Taya, "Design of thermal interface material with high thermal conductivity and measurement apparatus", J. Electron. Packag., 128: 46-52, 2006. 10.1115/1.2159008
[4] M. Smalc, J. Norley, R.A. Reynolds, R. Pachuta and D.W. Krassowski, "Advanced thermal interface materials using natural graphite", in Proceedings of the International Electronic Packaging Technical Conference and Exhibition, July 6-11, Maui, HI, pp. 253-261, 2003. 10.1115/IPACK2003-35113
[5] C.W. Chung, J.Y. Lee, Y.S. Huang and C.L. Lin, "Enhancement of heat dissipation for high brightness light emitting diodes", J. Applied Sci., 6(11): 2514-2516, 2006.
[6] P. Kohli, M. Sobczak, J. Bowin and M. Matthews, "Advanced thermal interface materials for enhanced flip chip BGA", in Proceedings of the 51st Electronic Components and Technology Conference, May 29-June 1, USA, pp. 564-570, 2001.
[7] SRC, "Thermal nano-tape for semiconductor packaging development", Semiconductor Research Corporation, Stanford, January 24, 2011.
[8] J. Xu and T. Fisher, "Enhancement of thermal interface materials with carbon nanotube arrays", Int. J. Heat Mass Transfer, 49: 1658-1666, 2006.
[9] N.M. Mohamed and L.M. Kou, "Sandwich growth of aligned carbon nanotubes array using thermal chemical vapor deposition method", J. Applied Sci., 11(7): 1341-1345, 2011.
[10] Y.J. Lee, "Thermally conductive adhesive tapes: A critical evaluation and comparison", Adv. Packaging, 16: 28-31, 2007.
[11] S.S. Verma, "Graphene electronics: Science technology entrepreneur", 2010. http://www.techno-preneur.net/information-desk/sciencetechmagazine/2010/dec10/Graphene_Electronics.pdf
[12] J.W. Zimmer and G. Chandler, "Challenges in matching die to package CTEs for high thermal flux devices", sp3 Diamond Technologies, Santa Clara, CA, 2008.
[13] J. Asmussen, K.W. Hemawan, J. Lu, Y. Gu and T.A. Grotjohn, "Microwave plasma reactor design for diamond synthesis at high pressure and high power densities", in Proceedings of the International Conference on Plasma Science, June 20-24, Norfolk, VA, 2010.
[14] M. Berger, "Producing isolated nanocrystalline diamond fibers at high growth rates", Nanowerk LLC, 2008. http://www.nanowerk.com/spotlight/spotid=5220.php
[15] F.Y. Lim, S. Abdullah and I. Ahmad, "Numerical study of fluid flow and heat transfer in microchannel heat sinks using anisotropic porous media approximation", J. Applied Sci., 10(18): 2047-2057, 2010.
[16] D. Zhong, H. Qin, C. Wang and Z. Xiao, "Thermal performance of heatsink and thermoelectric cooler packaging designs in LED", in Proceedings of the 11th International Conference on Electronic Packaging Technology and High Density Packaging, Aug. 16-19, Xi'an, pp. 1377-1381, 2010.
[17] K. Xie, C. Jiang, H. Xu and L. Zhu, "CVD diamond film sink for high power MCMs", in Proceedings of the 6th International Conference on Electronic Packaging Technology, Aug. 30-Sept. 2, IEEE, pp. 113-116, 2005.
[18] K. Lim and H. Roh, "Thermal characteristics of graphite foam thermosyphon for electronics cooling", J. Mech. Sci. Tech., 19: 1932-1938, 2005.
[19] J.S. Coursey, J. Kim and P.J. Boudreaux, "Performance of graphite foam evaporator for use in thermal management", J. Elect. Packaging, 127: 127-134, 2005.
[20] V. Gandikota, G.F. Jones and A.S. Fleischer, "Thermal performance of a carbon fiber composite material heat sink in an FC-72 thermosyphon", Exp. Thermal Fluid Sci., 34: 554-561, 2010.
[21] A. Sencan and G. Ozdemir, "Comparison of predicted and experimental thermal performances of a solar air collector", J. Applied Sci., 7(23): 3721-3728, 2007.
[22] H. Schneider, M.L. Locatelli, J. Achard, E. Scheid, P. Tounsi and H. Ding, "Study of CVD diamond films for thermal management in power electronics", in Proceedings of the 12th European Conference on Power Electronics and Applications, Aalborg, Denmark, Sept. 2-5, pp. 63-72, 2007.
[23] Dow Corning Corporation, "Thermally conductive adhesives", AMPM028-09, Form No. 11-1751-01, 2009. http://www.dowcorning.com/content/publishedlit/11-1751.pdf
[24] K. Zhang, Y. Chai, M.M.F. Yuen, D.G. Xiao and P.C.H. Chan, "Carbon nanotube thermal interface material for high-brightness light-emitting-diode cooling", Nanotechnology, 19: 215706, 2008. 10.1088/0957-4484/19/21/215706
[25] R.C. Johnson, "Nanotape could make solder pads obsolete", EE Times, 2011.
[26] J. Klett, "High thermal conductivity graphite foams for compact lightweight radiators", Oak Ridge National Laboratory, May 9, 2002. http://www.nrel.gov/docs/gen/fy02/NN0123v.pdf
[27] R.C. Johnson, "Graphite foam cools hi-intensity LEDs", EE Times, 2010.
[28] P. Wray, "ORNL's heat transferring graphite foam to be used in LED streetlight applications", September 1, 2010.
[29] M.M. Rahman, S. Kumaresan and C.C. Maing, "Temperature profile data in the zone of flow establishment above a model air-cooled heat exchanger with 0.56 m2 face area operating under natural convection", J. Applied Sci., 10(21): 2673-2677, 2010.
[30] S. Narumanchi, "Advanced thermal interface materials for power electronics", National Renewable Energy Laboratory, November 8, 2007.


Performance Evaluation and Selection of Optimal Parameters in Turning of Ti-6Al-4V Alloy Under Different Cooling Conditions

M Venkata Ramana (a), K Srinivasulu (b), G Krishna Mohana Rao (c,*)
a: Sr. Asst. Prof. in Mech. Engg., VNRVJIET, Bachupally, Hyderabad, AP, India.
b: Asst. Prof. in Mech. Engg., CMR Institute of Technology, Medchal, Hyderabad, AP, India.
c,*: Associate Prof. in Mech. Engg., JNTUH College of Engg., JNT University Hyderabad, Kukatpally, Hyderabad-85, AP, India.

Abstract - Titanium alloys have very high tensile strength and toughness, light weight, extraordinary corrosion resistance, and the ability to withstand extreme temperatures. However, the high cost of both raw materials and processing limits their use to military applications, aircraft, spacecraft, medical devices, connecting rods on expensive sports cars, and some premium sports equipment and consumer electronics. Surface finish plays a vital role in the service life of components; to ensure high reliability of sensitive aeronautical components, the surface integrity of titanium alloys must be satisfied. It is therefore required to optimize process parameters such as cutting speed, feed and depth of cut while machining titanium components, both for better surface finish and for high tool life in order to reduce tool cost. To reduce the high temperatures in the machining zone, cutting fluids are employed. Cutting fluid improves the surface condition of the work piece, the tool life and the process as a whole; it also helps carry away the heat and debris produced during machining. This work deals with performance evaluation and optimization of process parameters in turning of Ti-6Al-4V alloy under different coolant conditions, using Taguchi's design-of-experiments methodology, with the surface roughness produced by an uncoated carbide tool as the response. The results are compared among dry machining, flooding with servo cut oil and water, and flooding with synthetic oil.
From the experimental investigations, the cutting performance on Ti-6Al-4V alloy with synthetic oil is found to be better than with dry machining or with servo cut oil and water in reducing surface roughness. The ANOVA results show that, while machining Ti-6Al-4V alloy, synthetic oil is more effective under high cutting speed, high depth of cut and low feed rate compared to the dry and servo cut oil and water conditions. The ANOVA also reveals that feed rate is the dominant parameter in optimizing surface roughness under the dry, servo cut oil and water, and synthetic oil conditions.

I. INTRODUCTION

A. Work material

Titanium alloys are used for demanding applications such as static and rotating gas turbine engine components. Some of the most critical and highly stressed civilian and military airframe parts are made of these alloys. The use of titanium has expanded in recent years to include applications in nuclear power plants, food processing plants, oil refinery heat exchangers, marine components and medical prostheses. The high cost of titanium alloy components may limit their use to certain applications; the relatively high cost is often the result of the intrinsic raw material cost, fabricating costs and the metal removal costs incurred in obtaining the desired final shape. For most applications titanium is alloyed with small amounts of aluminum and vanadium, typically 6% and 4% respectively, by weight. This mixture has a solid solubility which varies dramatically with temperature, allowing it to undergo precipitation strengthening.

Greater difficulties are expected when machining titanium alloys because of the following factors:
(I) the mechanical properties, especially the hardness and the tensile stress at high temperatures (400 °C);
(II) the differences of structure with a variable quantity of the alpha phase;
(III) the morphology of the transformed beta phase;
(IV) high temperature strength;
(V) very low thermal conductivity;
(VI) relatively low modulus of elasticity; and
(VII) high chemical reactivity.
For the above reasons, traditional metal cutting of titanium alloys requires large quantities of cutting fluids.

Keywords: Titanium alloy, Turning, Coolants, Optimization

B. Cutting fluids:


Cutting fluids are used in metal machining for a variety of reasons such as improving tool life, reducing work piece thermal deformation, improving surface finish and flushing away chips from the cutting zone. Practically all cutting fluids presently in use fall into one of four categories:
- Straight oils
- Soluble oils (servo cut oils)
- Semi-synthetic fluids
- Synthetic fluids

C. Various possible conditions of lubrication in metal cutting:

The high temperature and the high stresses developed at the cutting edge of the tool are the principal problems when machining titanium alloys. To minimize the problem, a cutting fluid must be applied as a basic rule. The cutting fluid not only acts as a coolant but also functions as a lubricant, reducing the tool temperatures and lessening the cutting forces and chip welding that are commonly experienced with titanium alloys, thus improving tool life. The correct choice of cutting fluid has a significant effect on tool life. An abundant, uninterrupted flow of coolant will also provide a good flushing action to remove chips, excellent chip breakability and minimal thermal shock of milling tools, and will prevent chips from igniting. Additionally, a high-pressure coolant supply can result in small, discontinuous and easily disposable chips, unlike the long continuous chips produced when machining with a conventional coolant supply.

II. LITERATURE SURVEY

Manna A. et al. [1] presented an experimental investigation of the influence of cutting conditions on surface finish during turning of Al/SiC-MMC. In that study, the Taguchi method is used to optimize cutting parameters for effective turning of Al/SiC-MMC using a fixed rhombic tooling system. Nektarios M. Heretis et al. [2] examined the surface roughness of Ti-6Al-4V alloy after machining in conventional machines with various tool geometries, cutting speeds and lubrication/cooling conditions. The surface roughness of biocompatible Ti-6Al-4V is important when this titanium alloy is used for cell growth on human implants; the results reported provide a guideline on adhesion, orientation and growth of fibroblasts for the Ti-6Al-4V titanium alloy. Ram Cherukuri et al. [3] discussed research and development in machining of titanium with WC/Co and PCD tools. The results agree well with the general observation that a stable, strongly adherent layer forms at the interface between the tool and the chip and minimizes the dissolution-diffusion wear mechanism. S.K. Bhaumik et al. [4] made an attempt to machine Ti-6Al-4V alloy with wurtzite boron nitride (wBN) based cutting tools. The mechanisms controlling the wear of the cutting tool were found to be similar to those observed in polycrystalline diamond (PCD) and polycrystalline cubic boron nitride (PCBN) tools; the results indicate that wBN-cBN composite tools can be used economically to machine titanium alloys. A.A. Ganeeva et al. [5] focused on the manufacturing of a layered material from titanium alloy and studied the characteristic features of the fractured specimens. It is evident that the surface finish of titanium alloy materials is a research topic of vital importance. A thorough study of the literature suggests the use of dry machining and flooded machining at three different cutting conditions for a better surface finish of titanium Ti-6Al-4V. In the present work, uncoated tungsten carbide has been selected as the cutting tool material in place of coated tungsten carbide, because studies show that uncoated tungsten carbide gives a better surface finish than coated tungsten carbide.

III. EXPERIMENTAL DETAILS

A. Selection Of Control Factors And Levels:

A total of three process parameters with three levels each are chosen as the control factors, such that the levels are sufficiently far apart to cover a wide range. The process parameters and their ranges are finalized based on the literature and machine operators' experience. The three control factors selected are spindle speed (A), feed (B), and depth of cut (C). Uncoated carbide tools are used in the experimentation. The control factors and their alternative levels are listed in Table 1.

Table I: Control Factors And Levels

LEVEL NUMBER | SPEED (A) (rpm) | FEED RATE (B) (mm/rev) | DEPTH OF CUT (C) (mm)
1 | 400 | 0.206 | 0.6
2 | 500 | 0.240 | 1.0
3 | 630 | 0.329 | 1.6

B. Selection of Orthogonal Array:


Selection of a particular orthogonal array from the standard O.A.s depends on the number of factors, the levels of each factor and the total degrees of freedom:
i) Number of control factors = 3
ii) Number of levels for each control factor = 3
iii) Total degrees of freedom of the factors = 6
iv) Minimum number of experiments to be conducted = 7

Based on these values, the nearest standard O.A. providing at least the required minimum number of experiments is L9(3^4), with 9 experiments. It can accommodate a maximum of four control factors, each at three levels. Here the requirement is to accommodate three control factors at three levels, which can easily be done in this O.A.; one column of the O.A. is left empty, which is permitted by the principles of robust design methodology. Table 2 gives the factor levels fixed for the present study.
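The L9(3^4) array can be generated with the standard modular construction for three-level orthogonal arrays. This is a sketch: the column formulas (C = (A + B) mod 3, and (A + 2B) mod 3 for the unused fourth column) are the textbook construction, not stated in the paper, but the resulting run order matches Table 2.

```python
# Standard construction of the L9(3^4) orthogonal array:
# columns A and B enumerate all 9 level pairs; the remaining two
# columns are (A + B) mod 3 and (A + 2B) mod 3.
# Factor levels follow Table 1 of the paper.

speeds = [400, 500, 630]        # factor A, spindle speed (rpm)
feeds = [0.206, 0.240, 0.329]   # factor B, feed (mm/rev)
depths = [0.6, 1.0, 1.6]        # factor C, depth of cut (mm)

l9 = [(a, b, (a + b) % 3, (a + 2 * b) % 3)
      for a in range(3) for b in range(3)]

# Map the first three columns to physical levels; the fourth stays empty.
plan = [(speeds[a], feeds[b], depths[c]) for a, b, c, _ in l9]
for run, (speed, feed, depth) in enumerate(plan, start=1):
    print(run, speed, feed, depth)
```

Each pair of columns contains every level combination exactly once, which is what makes the factor effects separable in the later S/N analysis.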

Table II: Experimental Design As Per L9 Orthogonal Array

EXPERIMENT NUMBER | A (rpm) | B (mm/rev) | C (mm) | EMPTY COLUMN
1 | 400 | 0.206 | 0.6 | -
2 | 400 | 0.240 | 1.0 | -
3 | 400 | 0.329 | 1.6 | -
4 | 500 | 0.206 | 1.0 | -
5 | 500 | 0.240 | 1.6 | -
6 | 500 | 0.329 | 0.6 | -
7 | 630 | 0.206 | 1.6 | -
8 | 630 | 0.240 | 0.6 | -
9 | 630 | 0.329 | 1.0 | -

C. Experimental Setup

The turning operations are carried out on a GEDEE WEILER LZ350 lathe. The machining tests are conducted under the different conditions of cutting speed, feed rate and depth of cut. The dimensions of the Ti-6Al-4V test pieces are 50 mm diameter and 350 mm length. Figure 1 depicts the lathe used in the present study; the carbide insert is shown in Figure 2. A round bar of Ti-6Al-4V, shown in Figure 3, is turned on the lathe. After turning, the surface roughness of the work piece thus produced is measured using the surface roughness tester shown in Figure 4.

Fig 1: Principal components and movements of a typical lathe LZ350

Table III: Specification Of GEDEE WEILER LZ350

SPEED (rpm) | FEED (mm/rev) | DEPTH OF CUT (mm)
45 - 2000 | 0.017 - 1.096 | 0.1 - 12

Cutting tool material: Sandvik uncoated tungsten carbide inserts are used in the experiments.

Fig 2: Carbide turning insert, Grade H13A

Fig 3: Work-piece Titanium Ti-6Al-4V


Fig. 4: Surface roughness tester

IV. RESULTS AND DISCUSSION

A. Dry Machining:

Turning of the work piece has been carried out under the dry condition. According to the Taguchi method, a robust design with an L9(3^4) orthogonal array is employed for the experimentation. Nine experiments are performed according to the L9(3^4) orthogonal array, each repeated four times, for a total of 36 machining trials. A series of single-point turning tests is conducted, and the machined surface roughness is measured using a TR200 Surface Roughness Tester at four different positions, as shown in Table 4. The surface roughness of the work piece turned under the dry condition and the corresponding S/N ratios are tabulated; for each experiment, roughness values are recorded at four different places and the average is considered.

Table IV: Data Summary Of Dry Machining (Surface Roughness) And S/N Ratio

EXP No | Ra 1 (µm) | Ra 2 | Ra 3 | Ra 4 | Average | S/N RATIO (dB)
1 | 1.21 | 1.26 | 1.28 | 1.30 | 1.2625 | -2.03
2 | 1.28 | 1.27 | 1.30 | 1.23 | 1.270 | -2.07
3 | 3.04 | 2.85 | 3.00 | 3.10 | 2.9975 | -9.50
4 | 1.52 | 1.74 | 1.52 | 1.90 | 1.67 | -4.494
5 | 1.48 | 1.40 | 1.44 | 1.44 | 1.44 | -3.16
6 | 2.20 | 2.04 | 2.12 | 2.11 | 2.1175 | -6.519
7 | 1.20 | 1.12 | 0.86 | 1.18 | 1.09 | -0.815
8 | 1.86 | 1.60 | 1.52 | 1.42 | 1.60 | -4.127
9 | 4.14 | 4.20 | 4.24 | 4.32 | 4.24 | -12.517
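The S/N ratios in Table IV follow the smaller-the-better form of the Taguchi method, S/N = -10 log10((1/n) Σ y_i²). A minimal sketch reproducing two of the tabulated values:

```python
import math

# Smaller-the-better signal-to-noise ratio used throughout this paper:
#   S/N = -10 * log10( (1/n) * sum(y_i^2) )

def sn_smaller_the_better(values):
    """S/N ratio (dB) for a smaller-the-better quality characteristic."""
    return -10.0 * math.log10(sum(v * v for v in values) / len(values))

exp1 = [1.21, 1.26, 1.28, 1.30]   # Ra readings, experiment 1 (Table IV)
exp9 = [4.14, 4.20, 4.24, 4.32]   # Ra readings, experiment 9

print(round(sn_smaller_the_better(exp1), 2))   # -2.03 dB, as in Table IV
print(round(sn_smaller_the_better(exp9), 2))   # -12.52 dB
```

Because the characteristic is smaller-the-better, lower roughness gives a less negative (larger) S/N ratio, which is why the optimization maximizes S/N.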

Table V: Anova Results After Pooling For Dry Condition

FACTOR | POOL | S.S | D.O.F | M.S.S | F-RATIO | PURE S.S | P (%)
SPEED | yes | 2.4378 | 2 | 1.2189 | - | - | -
FEED | | 24.22 | 2 | 12.11 | 94.683 | 23.96 | 70.258
DEPTH OF CUT | | 3.7353 | 2 | 1.8676 | 14.60 | 3.4795 | 10.20
ERROR | | 3.7113 | 29 | 0.1279 | | |
POOLED ERROR | | 6.1491 | 31 | 0.1983 | | 6.6538 | 19.51
TOTAL | | 34.1086 | 35 | | | 34.0953 | 100.00
MEAN | | 138.806 | 1 | | |
TOTAL (including mean) | | 172.914 | 36 | | |


Table 5 presents the results of the ANOVA on the performance characteristic. In minimizing the surface roughness, feed has the major contribution (70.258%) in optimizing the performance characteristic, followed by depth of cut and cutting speed. Further, it is observed that the ANOVA has resulted in an error contribution of 19.51%.
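The percent contributions in Table V can be reproduced with the standard Taguchi pure-sum-of-squares form, pure SS = SS - dof × MS_error, divided by the total SS about the mean. This sketch uses the S.S and error mean square values taken from Table V:

```python
# Percent contribution of each factor (Taguchi ANOVA):
#   pure SS = SS - dof * MS_error
#   P%      = pure SS / total SS * 100
# MS_error and total SS about the mean are taken from Table V (dry condition).

MS_ERROR = 0.1279     # error mean square, Table V
TOTAL_SS = 34.1086    # total sum of squares about the mean, Table V

def percent_contribution(ss, dof):
    return (ss - dof * MS_ERROR) / TOTAL_SS * 100.0

feed = percent_contribution(24.22, 2)     # ~70.26 %, as in Table V
depth = percent_contribution(3.7353, 2)   # ~10.20 %
print(round(feed, 2), round(depth, 2))
```

The remainder (error plus pooled speed effect) accounts for the 19.51% error contribution quoted above.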

In order to find the optimum set of conditions, the individual level averages of the S/N ratios are calculated; these values are tabulated in Table 6. The objective is to maximize the S/N ratio. Thus, the optimum condition chosen is A1-B1-C1, and its levels are shown in Table 7.

Table VI: Summary Of S/N Ratios For Dry Condition

Factor | Level 1 | Level 2 | Level 3
Speed (A) | -4.533 | -4.72 | -5.819
Feed (B) | -2.44 | -3.119 | -9.512
Depth Of Cut (C) | -4.225 | -6.36 | -4.49

Having determined the optimum condition from the orthogonal array experiment, the next step is to predict the anticipated process average under the chosen optimum condition. This is calculated by summing the effects of the factor levels at the optimum condition, using the additive model on the S/N level averages:

η_predicted = ((-4.533) + (-2.44) + (-4.225)) - 2 × (-5.025) = -1.148 dB

Conducting a verification experiment is a crucial final step of the robust design methodology: the predicted result must conform to a verification test run at the optimum set of conditions. In this final step, the conformation experiment is conducted with the optimum control factors of spindle speed 400 rpm, feed 0.206 mm/rev and depth of cut 0.6 mm. Surface roughness values are taken for four trials and the S/N ratio is calculated for this condition; these values are shown in Tables 8 and 9. The S/N ratios of the predicted value and the verification test are compared to validate the optimum condition. The S/N ratio of the verification test is within the acceptable limits of the predicted value, so the objective is fulfilled and the suggested optimum conditions can be adopted.
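The predicted S/N ratio follows directly from the additive model. A sketch using the level averages of Table VI and the overall mean S/N of -5.025 dB:

```python
# Additive-model prediction of the S/N ratio at the optimum condition A1-B1-C1:
#   eta_pred = (mA1 + mB1 + mC1) - 2 * overall_mean
# Level averages are taken from Table VI (dry condition).

mA1, mB1, mC1 = -4.533, -2.44, -4.225   # level-1 averages of A, B, C
overall_mean = -5.025                   # overall mean S/N ratio (dB)

eta_pred = (mA1 + mB1 + mC1) - 2 * overall_mean
print(round(eta_pred, 3))   # -1.148 dB
```

The subtraction of twice the overall mean avoids triple-counting the grand mean that is embedded in each of the three level averages.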

Table VIII: Conformation Test Results For Dry Condition. SURFACE ROUGHNESS(Ra)(µm)

Table VII: Optimum Set Of Control Factors For Dry Condition. Control Factor

Optimum value

Spindle Speed

Feed Rate

(rpm)

(mm/rev)

400

0.206

S/N RATIO(dB)

1

2

3

4

Avg.

1.18

1.16

1.13

1.10

1.1425

-1.160

Table IX: Comparison Of S/N Ratios For Dry Condition.

Depth of Cut (mm)

0.6

ηpredicted(dB)

-1.148

ηconformation test(dB)

-1.160
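The additive-model prediction and the conformation-test S/N ratio above can be checked numerically. The following Python sketch is not part of the paper; the values are copied from Tables 6 and 8 and the overall-mean S/N ratio (-5.025 dB) from the text:

```python
import math

# Level-average S/N ratios for the optimum levels A1, B1, C1 (Table 6)
# and the overall mean S/N ratio for the dry condition (-5.025 dB)
a1, b1, c1, mean = -4.533, -2.44, -4.225, -5.025

# Additive model: eta_pred = A1 + B1 + C1 - 2 * overall mean
eta_pred = (a1 + b1 + c1) - 2 * mean
print(round(eta_pred, 3))  # -1.148

# Smaller-the-better S/N ratio for the four conformation-run Ra readings (Table 8)
ra = [1.18, 1.16, 1.13, 1.10]
eta_conf = -10 * math.log10(sum(y * y for y in ra) / len(ra))
print(round(eta_conf, 2))  # -1.16, i.e. the -1.160 dB of Table 9
```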

B. Flooded Machining (Servo cut Oil + Water)

The experimentation is done similarly to the dry condition, but with flooding of the cutting zone with a combination of servo cut oil and water. The results obtained are given in Table 10 and the ANOVA data is presented in Table 11.

Table X: Data Summary Of Flooded Machining (Servo cut Oil + Water) And S/N Ratio:


| Exp. No | Ra 1 (µm) | Ra 2 (µm) | Ra 3 (µm) | Ra 4 (µm) | Average (µm) | S/N Ratio (dB) |
|---|---|---|---|---|---|---|
| 1 | 1.80 | 1.90 | 1.84 | 1.92 | 1.865 | -5.416 |
| 2 | 1.42 | 1.32 | 1.50 | 1.61 | 1.462 | -3.324 |
| 3 | 2.06 | 2.00 | 2.04 | 1.84 | 1.985 | -5.963 |
| 4 | 2.00 | 1.98 | 1.94 | 2.06 | 1.995 | -6.0009 |
| 5 | 1.58 | 1.56 | 1.36 | 1.34 | 1.460 | -3.3118 |
| 6 | 2.68 | 2.46 | 2.48 | 2.51 | 2.522 | -6.0205 |
| 7 | 1.60 | 1.50 | 1.38 | 1.44 | 1.48 | -3.418 |
| 8 | 1.52 | 1.56 | 1.61 | 1.32 | 1.507 | -3.559 |
| 9 | 2.66 | 2.60 | 2.54 | 2.68 | 2.62 | -8.3679 |
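Each S/N value in Table 10 is the Taguchi smaller-the-better ratio of the four roughness readings in its row. A minimal Python check (not from the paper), verified against experiment 1:

```python
import math

def sn_smaller_the_better(values):
    # eta = -10 * log10( (1/n) * sum(y_i^2) ), in dB
    return -10 * math.log10(sum(y * y for y in values) / len(values))

# Experiment 1 of the flooded (servo cut oil + water) runs, Table 10
print(round(sn_smaller_the_better([1.80, 1.90, 1.84, 1.92]), 3))  # -5.416
```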

Table XI: ANOVA Results After Pooling For Servo cut Oil And Water Lubrication Condition.

| FACTOR | POOL | S.S | D.O.F | M.S.S | F-RATIO | S.S' | %P |
|---|---|---|---|---|---|---|---|
| SPEED | YES | 0.3392 | 2 | 0.1696 | - | - | - |
| FEED | | 5.0684 | 2 | 2.5342 | 145.64 | 5.0336 | 71.86 % |
| DEPTH OF CUT | | 1.0909 | 2 | 0.5454 | 31.347 | 1.0561 | 15.077 % |
| ERROR | | 0.5059 | 29 | 0.0174 | | | |
| POOLED ERROR | | 0.8451 | 31 | 0.187 | | 0.9147 | 13.046 % |
| TOTAL | | 7.0045 | 35 | 3.2666 | | 7.0045 | 100 % |
| MEAN | | 126.83 | 1 | | | | |
| TOTAL (incl. mean) | | 133.83 | 36 | | | | |
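The pure sums of squares (S.S') and percentage contributions in Table 11 follow the usual Taguchi pooling arithmetic, S.S'_A = S.S_A - D.O.F_A × V_error and %P = 100 × S.S'_A / S.S_total. A quick sketch with the tabulated values (an illustration, not the authors' code):

```python
# Values read from Table 11 (servo cut oil + water condition)
ss_total = 7.0045
v_error = 0.0174                      # error mean square (29 D.O.F)
factors = {"feed": (5.0684, 2), "depth of cut": (1.0909, 2)}

for name, (ss, dof) in factors.items():
    ss_pure = ss - dof * v_error      # S.S' = S.S - D.O.F * V_error
    pct = 100 * ss_pure / ss_total    # percentage contribution
    print(name, round(ss_pure, 4), round(pct, 2))
# feed -> 5.0336 and 71.86 %; depth of cut -> 1.0561 and ~15.08 %
```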


Table XII: Summary Of S/N Ratios For Servo cut Oil And Water Lubrication Condition.

| FACTOR | LEVEL 1 | LEVEL 2 | LEVEL 3 |
|---|---|---|---|
| SPEED (A) | -4.901 | -5.111 | -5.1149 |
| FEED (B) | -4.9449 | -3.398 | -6.7838 |
| DEPTH OF CUT (C) | -4.9985 | -5.8976 | -4.2309 |

In order to find the optimum set of conditions, the individual level averages of the S/N ratios are calculated. These values are tabulated in Table 12. The objective is to maximize the S/N ratio, so the optimum condition chosen is A1-B2-C3; the corresponding levels are shown in Table 13.

To minimize surface roughness, feed has the major contribution (71.86 %) in optimizing the performance characteristic, followed by depth of cut and cutting speed. The ANOVA also shows an error contribution of 13.046 %. Having determined the optimum condition from the orthogonal array experiment, the next step is to predict the anticipated process average under the chosen optimum condition. This is calculated by summing the effects of the factor levels in the optimum condition using the additive model:

η predicted = ((-4.901) + (-3.398) + (-4.2309)) - (2 × (-5.0423)) = -2.4453

In this final step, the conformation experiment is conducted with the optimum setting of spindle speed 400 rpm, cutting feed 0.240 mm/rev and depth of cut 1.6 mm. Surface roughness values are taken for four trials and the S/N ratio is calculated for this condition. The S/N ratios of the predicted value and the verification test are compared to validate the optimum condition. These values are shown in Tables 14 and 15. The S/N ratio of the verification test is within the acceptable limits of the predicted value, so the objective is fulfilled and the suggested optimum conditions can be adopted.

Table XIII: Optimum Set Of Control Factors For Servocut Oil And Water Lubrication Condition.

| Control Factor | Spindle Speed (rpm) | Feed Rate (mm/rev) | Depth of Cut (mm) |
|---|---|---|---|
| Optimum value | 400 | 0.240 | 1.6 |

Table XIV: Conformation Test Results For Servocut Oil And Water Lubrication Condition.

| Ra 1 (µm) | Ra 2 (µm) | Ra 3 (µm) | Ra 4 (µm) | Avg. (µm) | S/N Ratio (dB) |
|---|---|---|---|---|---|
| 1.38 | 1.34 | 1.24 | 1.39 | 1.3375 | -2.534 |

Table XV: Comparison Of S/N Ratios For Servocut Oil And Water Lubrication Condition.

| η predicted (dB) | η conformation test (dB) |
|---|---|
| -2.4453 | -2.534 |
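The level averages in Table 12 can be reproduced from the nine S/N ratios of Table 10. The sketch below assumes the standard L9 orthogonal-array column assignment (the paper does not reprint its array, so the level patterns are an assumption, though they match the tabulated averages):

```python
# Assumed standard L9 level assignment for experiments 1..9
levels = {
    "speed": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "feed":  [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "depth": [1, 2, 3, 2, 3, 1, 3, 1, 2],
}
# S/N ratios (dB) of the nine servo cut oil + water runs, Table 10
sn = [-5.416, -3.324, -5.963, -6.0009, -3.3118,
      -6.0205, -3.418, -3.559, -8.3679]

def level_avg(factor, level):
    # Average the S/N ratios of the runs where `factor` sits at `level`
    vals = [s for s, l in zip(sn, levels[factor]) if l == level]
    return sum(vals) / len(vals)

print(round(level_avg("speed", 1), 3))  # -4.901, as in Table 12
print(round(level_avg("feed", 2), 3))   # -3.398
print(round(level_avg("depth", 3), 3))  # -4.231
```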

C. Flooded Machining (Synthetic Oil)

The experimentation is done similarly to the dry condition, but with flooding of the cutting zone with synthetic oil. The results obtained are given in Table 16 and the ANOVA data is presented in Table 17.


Table XVI: Data Summary Of Flooded Machining (Synthetic Oil) And S/N Ratio

| Exp. No | Ra 1 (µm) | Ra 2 (µm) | Ra 3 (µm) | Ra 4 (µm) | Average (µm) | S/N Ratio (dB) |
|---|---|---|---|---|---|---|
| 1 | 1.66 | 1.36 | 1.48 | 1.58 | 1.52 | -3.66 |
| 2 | 1.44 | 1.56 | 1.70 | 1.66 | 1.59 | -4.04 |
| 3 | 2.56 | 2.36 | 2.40 | 2.48 | 2.45 | -7.787 |
| 4 | 1.32 | 1.38 | 1.40 | 1.29 | 1.3475 | -2.595 |
| 5 | 1.84 | 1.60 | 1.40 | 1.60 | 1.61 | -3.335 |
| 6 | 2.26 | 2.42 | 2.38 | 2.10 | 2.29 | -7.27 |
| 7 | 1.21 | 1.18 | 1.12 | 1.16 | 1.1675 | -1.348 |
| 8 | 1.32 | 1.31 | 1.28 | 1.26 | 1.2925 | -2.230 |
| 9 | 2.6 | 2.44 | 2.52 | 2.51 | 2.5175 | -8.0215 |

Table XVII: ANOVA Results After Pooling For Flooded (Synthetic Oil) Lubrication Condition.

| FACTOR | POOL | S.S | D.O.F | M.S.S | F-RATIO | S.S' | %P |
|---|---|---|---|---|---|---|---|
| SPEED | | 0.2276 | 2 | 0.1138 | 5.524 | 0.1864 | 2.066 % |
| FEED | | 8.1066 | 2 | 4.0533 | 196.76 | 8.0614 | 89.42 % |
| DEPTH OF CUT | YES | 0.0862 | 2 | 0.04311 | - | - | - |
| ERROR | | 0.5986 | 29 | 0.0206 | - | - | - |
| POOLED ERROR | | 0.6848 | 31 | 0.0637 | - | 0.7672 | 8.506 % |
| TOTAL | | 9.019 | 35 | 4.2308 | | 9.019 | 100 % |
| MEAN | | 110.7394 | 1 | | | | |
| TOTAL (incl. mean) | | 119.7584 | 36 | | | | |
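The pooled-error row of Table 17 is simply the weakest factor (depth of cut) folded into the error term. A sketch of that bookkeeping (illustrative only):

```python
# Table 17 (synthetic oil): depth of cut is pooled into the error term
ss_depth, dof_depth = 0.0862, 2
ss_error, dof_error = 0.5986, 29

ss_pooled = ss_depth + ss_error      # 0.6848, as tabulated
dof_pooled = dof_depth + dof_error   # 31
print(round(ss_pooled, 4), dof_pooled)  # 0.6848 31
```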


Table XVIII: Summary Of S/N Ratios For Flooded (Synthetic Oil) Lubrication Condition.

| FACTOR | LEVEL 1 | LEVEL 2 | LEVEL 3 |
|---|---|---|---|
| SPEED (A) | -5.162 | -4.4 | -3.8665 |
| FEED (B) | -2.534 | -3.2016 | -7.6927 |
| DEPTH OF CUT (C) | -4.3866 | -4.8855 | -4.1566 |

In order to find the optimum set of conditions, the individual level averages of the S/N ratios are calculated. These values are tabulated in Table 18. The objective is to maximize the S/N ratio, so the optimum condition chosen is A3-B1-C3; the corresponding levels are shown in Table 19.

Table XIX: Optimum Set Of Control Factors For Flooded (Synthetic Oil) Lubrication Condition.

| Control Factor | Spindle Speed (rpm) | Feed Rate (mm/rev) | Depth of Cut (mm) |
|---|---|---|---|
| Optimum value | 630 | 0.206 | 1.6 |

To minimize surface roughness, feed rate has the major contribution (89.42 %) in optimizing the performance characteristic, followed by cutting speed and depth of cut. The ANOVA also shows an error contribution of 8.506 %. Having determined the optimum condition from the orthogonal array experiment, the next step is to predict the anticipated process average under the chosen optimum condition. This is calculated by summing the effects of the factor levels in the optimum condition using the additive model:

η predicted = ((-3.79) + (-2.43) + (-3.919)) - (2 × (-4.476)) = -1.187

Conducting a verification experiment is a crucial final step of the robust design methodology: the predicted result must conform to the verification test at the optimum set of conditions. In this final step, the conformation experiment is conducted with the optimum setting of spindle speed 630 rpm, cutting feed 0.206 mm/rev and depth of cut 1.6 mm. Surface roughness values are taken for four trials and the S/N ratio is calculated for this condition. The S/N ratios of the predicted value and the verification test are compared to validate the optimum condition. These values are shown in Tables 20 and 21. The S/N ratio of the verification test is within the limits of the predicted value, so the objective is fulfilled and the suggested optimum conditions can be adopted.

Table XX: Conformation Test Results For Flooded (Synthetic Oil) Lubrication Condition.


| Ra 1 (µm) | Ra 2 (µm) | Ra 3 (µm) | Ra 4 (µm) | Avg. (µm) | S/N Ratio (dB) |
|---|---|---|---|---|---|
| 1.15 | 1.20 | 1.12 | 1.13 | 1.15 | -1.217 |

Table XXI: Comparison Of S/N Ratios For Flooded (Synthetic Oil) Lubrication Condition.

| η predicted (dB) | η conformation test (dB) |
|---|---|
| -1.187 | -1.217 |
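Across the three lubrication conditions, the conformation-test S/N ratios track the additive-model predictions closely. A short summary sketch (values collected from Tables 9, 15 and 21):

```python
# (predicted, conformation-test) S/N ratios in dB per lubrication condition
results = {
    "dry":                   (-1.148, -1.160),
    "servo cut oil + water": (-2.4453, -2.534),
    "synthetic oil":         (-1.187, -1.217),
}
for cond, (pred, conf) in results.items():
    print(f"{cond}: |pred - conf| = {abs(pred - conf):.3f} dB")
# every deviation is below 0.1 dB, supporting the additive model
```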

In the present work, the performance characteristic, namely surface roughness, is to be minimized, and hence the "smaller the better" quality characteristic has been selected for the response.

D. Effect Of Cutting Parameters On Surface Roughness

Figure 5 shows the variation between cutting speed and surface roughness. When the cutting speed is low, the surface roughness is also low for dry, servo cut oil + water and synthetic oil conditions, but under higher cutting speeds the surface roughness is high for dry machining compared to servo cut + water and synthetic oil conditions. It is also observed that the surface roughness is less for synthetic oil compared to other conditions.

Figure 5: Response Graph between Cutting Speed and Surface Roughness

Figure 6 shows the variation between feed rate and surface roughness. When the feed rate is low, the surface roughness is also low for synthetic oil conditions compared to the dry and servo cut oil + water conditions, but at higher feed rates the surface roughness is high for servo cut oil + water conditions compared to dry and synthetic oil conditions. It is also observed that the surface roughness is lowest under synthetic oil conditions at low feed rates.

19


INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.5 MAY 2011

Figure 6: Response Graph between Feed Rate and Surface Roughness

Figure 7 shows the variation between depth of cut and surface roughness. When the depth of cut is low, the surface roughness is also low for synthetic oil conditions compared to the dry and servo cut oil + water conditions, but at higher depths of cut the surface roughness is high for synthetic oil conditions compared to dry and servo cut oil + water conditions. It is observed that the surface roughness is lowest under synthetic oil conditions at low depths of cut.

Figure 7: Response Graph between Depth of Cut and Surface Roughness

E. Optimization Of Cutting Parameters

The optimum cutting parameters found in turning of Ti6Al4V alloy for low surface roughness, and a comparison of the optimum parameters under dry, servo cut oil + water and synthetic oil conditions, are shown in Table 22.

Table XXII: Optimum Parameters

| Lubrication condition | SPEED (A) (rpm) | FEED RATE (B) (mm/rev) | DEPTH OF CUT (C) (mm) |
|---|---|---|---|
| Dry | 400 | 0.206 | 0.6 |
| Servo cut oil + water | 400 | 0.240 | 1.6 |
| Synthetic oil | 630 | 0.206 | 1.6 |

From Table 22, it is observed that synthetic oil is more effective at high cutting speed, high depth of cut and low feed rate compared to the dry and servo cut oil + water conditions, whereas the servo cut oil + water condition is effective at higher depth of cut compared to the dry and synthetic oil conditions. Dry machining is suitable only at lower cutting speed, feed rate and depth of cut.


F. Influence Of Cooling Conditions

The primary purposes of applying coolants in machining are lubricating the cutting process (mainly at low cutting speeds), cooling the workpiece (mainly at high cutting speeds), flushing chips away from the cutting zone, improving tool life, reducing thermal deformation of the workpiece and improving surface finish. In this work, different cooling conditions, namely dry, servo cut oil + water and synthetic oil, have been chosen for machining of Ti-6Al-4V alloy. In dry machining no coolant is applied, so there is neither lubrication nor cooling at the machining zone. The servo cut oil + water has superior cooling and lubricating properties, which impart excellent surface finish and minimize tool wear; it is recommended for a variety of cutting operations on ferrous and non-ferrous metals, and the kinematic viscosity of the servo cut oil varies between 20 and 24 cSt at 40 °C. The synthetic oils are water-soluble synthetic coolants; the solutions prepared from these coolants are clear and free from any oil or fatty matter, have remarkable anti-rust characteristics, and are recommended for machining of ferrous metals and high-nickel and titanium alloys only, not for non-ferrous metals. The kinematic viscosity of the synthetic oil varies between 9 and 15 cSt at 40 °C.

As seen from the optimum process parameters for surface finish in Table 22, synthetic oil is more effective at high cutting speed, high depth of cut and low feed rate compared to the dry and servo cut oil + water conditions; the main reason is the lower viscosity and higher thermal conductivity of this oil compared to servo cut oil, so it exhibits stronger cooling action. The servo cut oil + water condition is effective at higher depth of cut and moderate cutting speeds compared to the dry and synthetic oil conditions; the main reason is the higher viscosity and lower thermal conductivity of this oil compared to synthetic oil, so it exhibits more lubricating than cooling action. Dry machining is suitable only at lower cutting speed, feed rate and depth of cut, because there is no cooling or lubricating action.

Feed rate has the major contribution in optimizing the performance characteristic, followed by depth of cut and cutting speed for the dry and servo cut oil + water conditions, whereas for synthetic oil feed rate is followed by cutting speed and depth of cut.

IV. CONCLUSIONS

Based on the results of the present experimental investigations on machining Ti6Al4V alloy, the following conclusions are drawn:

The cutting performance on Ti6Al4V alloy with synthetic oil is favorable and better than under the dry and servo cut oil + water conditions.

Synthetic oil as a cutting fluid in machining of Ti6Al4V alloy shows an advantage in minimizing surface roughness. The ANOVA shows that, in machining of Ti6Al4V alloy, synthetic oil is more effective at high cutting speed, high depth of cut and low feed rate compared to the dry and servo cut oil + water conditions; the servo cut oil + water condition is effective at higher depth of cut compared to the dry and synthetic oil conditions; and dry machining is suitable only at lower cutting speed, feed rate and depth of cut. The ANOVA also reveals that feed rate is the dominant parameter in optimizing surface roughness under the dry, servo cut oil + water and synthetic oil conditions.

REFERENCES

[1] Manna, A., Bhattacharyya, B., (May 2004), "Investigation for optimal parametric combination for achieving better surface finish during turning of Al/SiC-MMC", The International Journal of Advanced Manufacturing Technology, Vol. 23, pp. 658-665.
[2] Heretis, N. M., Spanoudakis, P., Tsourveloudis, N., (November 2009), "Surface roughness characteristics of Ti6Al4V alloy in conventional lathe and mill machining", International Journal of Surface Science and Engineering, Vol. 3, pp. 435-447.
[3] Ram Cherukuri, Pal Molian, (2003), "Lathe Turning of Titanium Using Pulsed Laser Deposited, Ultra-Hard Boride Coatings of Carbide Inserts", Machining Science and Technology: An International Journal, Vol. 7, pp. 119-135.
[4] Bhaumik, S. K., Divakar, C., Singh, A. K., (1995), "Machining Ti-6Al-4V alloy with a wBN-cBN composite tool", Materials & Design, Vol. 16, pp. 221-226.
[5] Ganeeva, A. A., Kruglov, A. A., Lutfullin, R. Ya., (2010), "Layered Material Manufactured From Titanium Alloy Ti-6Al-4V", Advanced Material Science, Vol. 25, pp. 136-141.


A Trust Model for Secure and QoS Routing in MANETS

Shilpa S G #1, Mrs. N.R. Sunitha #2, B.B. Amberker #3

#1, #2 Department of Computer Science and Engg, Siddaganga Institute of Technology, Tumkur, Karnataka, India
#3 Department of Computer Science and Engg, National Institute of Technology, Warangal, Andhra Pradesh, India

Abstract— Due to the dynamic topology, limited and shared bandwidth, and limited battery power of the mobile ad hoc network (MANET), providing Quality of Service (QoS) routing is a challenging task in MANETs. The presence of malicious nodes in the network causes an internal threat that disobeys the standard and significantly degrades the performance of well-behaved nodes. However, little work has been done on quantifying the impact of internal attacks on the performance of ad hoc routing protocols using a dynamic key mechanism. In this paper, we focus on the impact of the Byzantine attack implemented by malicious nodes on the AODV routing protocol, as an extension of our previous work. We propose a trust model in which the trustworthiness of each node is evaluated based on its trust value and remaining energy. The association level of each node is estimated from the calculated trust value. Route selection uses the trustworthiness and the performance requirement of each route, calculated from both link capacity and traffic requirement, to achieve QoS.

Keywords: MANET, Byzantine attack, Trustworthiness, Authentication, QoS.

I. INTRODUCTION

MANET is vulnerable to various types of attacks because of its open infrastructure, dynamic network topology, lack of central administration and the limited battery-based energy of mobile nodes. These attacks can be classified into external and internal attacks. Several schemes have previously been proposed that aim solely at the detection and prevention of external attacks [1]. But most of these schemes become worthless when malicious nodes have already entered the network or some nodes in the network are compromised by an attacker. Such attacks are more dangerous because they are initiated from inside the network, which renders the network's first line of defense ineffective. Since internal attacks [1] are performed by participating malicious nodes that behave well before they are compromised, they are very difficult to detect. Routing protocols are generally necessary for maintaining effective communication between distinct nodes. A routing protocol not only discovers the network topology but also builds routes for forwarding data packets and dynamically maintains routes between any pair of communicating nodes. Routing protocols are designed to adapt to frequent changes in the network due to node mobility. Several ad hoc routing protocols have been proposed in the literature; they can be classified into proactive, reactive and hybrid protocols.

Routing protocol design has become a challenging task due to several issues. The basic problem with most routing protocols is that they trust all nodes of the network, on the assumption that nodes will behave and cooperate properly; however, some nodes may not behave properly. Most ad hoc network routing protocols become inefficient and show degraded performance when dealing with a large number of misbehaving nodes. Such misbehaving nodes support the flow of route discovery traffic but interrupt the data flow, causing the routing protocol to restart the route discovery process or to select an


alternative route if one is available. The newly selected routes may still include some misbehaving nodes, and hence the new route will also fail. This process continues until the source concludes that data cannot be transferred any further. Thus, the routing algorithm must react quickly to topological changes according to the degree of trust of a node, or of a complete path, between a source and destination pair. Nodes in MANETs communicate over wireless links, so efficient calculation of trust is a major issue, because an ad hoc network depends on the cooperative and trusting nature of its nodes. As the nodes are mobile, the set of nodes involved in route selection is always changing, and thus the degree of trust also keeps changing.

Another challenging issue is energy-efficient routing, which is especially important because all nodes are battery powered. Failure of one node may affect the entire network: if a node runs out of energy, the probability of network partitioning increases. Since every mobile node has a limited power supply, energy depletion has become one of the main threats to the lifetime of the ad hoc network. Routing in a MANET should therefore be reliable in the sense that it uses the remaining battery power efficiently to increase the lifetime of the network.

In this paper, we propose a trust model for quality of service (QoS) routing in MANETs, called Trust and Energy-based Quality of Service (TE-QOS) routing, which includes secure route discovery, secure route setup, and trustworthiness-based QoS routing metrics to mitigate malicious nodes that selectively drop or modify packets they agreed to forward. The routing control messages are secured by using both public and shared keys, which can be generated on demand and maintained dynamically. The message exchange mechanism also provides a way to detect attacks against routing protocols, particularly the most difficult internal attacks. The routing metrics are obtained by combining the requirements on the trustworthiness values of the nodes in the network and the QoS of the links along a route.

The paper is organized as follows. In Section 2, we outline some relevant previous work. In Section 3, we discuss a dynamic key management mechanism. In Section 4, we discuss our trust model for MANETs in detail and develop optimal routing. In Section 5, we conclude the paper.

II. RELATED WORK

A) The following papers show related work on different types of attacks in MANETs and possible solutions.

1) A Distributed Security Scheme for Ad Hoc Networks [2] discusses DoS attacks such as flooding using the AODV protocol, and concludes with an immediate enhancement that makes the limit parameters adaptive in nature, by basing calculations on parameters such as memory, processing capability, battery power, and the average number of requests per second in the network.

2) A study of different types of attacks on multicast in mobile ad hoc networks [3] considers only the rushing attack, black hole attack, neighbour attack and jellyfish attack.

3) Mitigating denial-of-service attacks in MANET by incentive-based packet filtering: a game-theoretic approach [4].

4) A survey of routing attacks in mobile ad hoc networks [5] considers only routing attacks, such as link spoofing and colluding misrelay attacks.

5) A Secure Routing Protocol against Byzantine Attacks for MANETs in Adversarial Environments [6] proposes an integrated protocol called secure routing against collusion (SRAC), in which a node makes a routing decision based on its trust of its neighbouring nodes and the performance provided by them.

6) Enhanced Intrusion Detection System for Discovering Malicious Nodes in Mobile Ad Hoc Networks [7]: the main feature of the proposed system is its ability to discover malicious nodes which can partition the network by falsely reporting other nodes as misbehaving, and then to proceed to protect the network.

7) WAP: Wormhole Attack Prevention Algorithm in Mobile Ad Hoc Networks [8]: wormholes can be detected and isolated within the route discovery phase without using any specialized hardware.

8) A Reliable and Secure Framework for Detection and Isolation of Malicious Nodes in MANET [9]: this security framework involves detection of malicious nodes by the destination node, isolation of malicious nodes by discarding the path, and protection of data packets by using dispersion techniques.

9) A Cooperative Black hole Node Detection Mechanism for ADHOC Networks [10].

10) Rushing Attack Prevention (RAP) [11] for rushing attacks.

B) This section gives an overview of some proposed protocols related to energy balance and trust evaluation in reactive protocols.

In [12], Gupta Nishant and Das Samir proposed a technique to make protocols energy aware by using a new routing cost metric that is a function of the remaining battery level of each node on a route and the number of neighbours of the node. This protocol gives significant benefits at high traffic but low mobility scenarios.

In [13], Rekha Patil et al. proposed an approach in which intermediate nodes calculate a cost based on battery capacity; an intermediate node judges its ability to forward RREQ packets or drop them. This protocol improves packet delivery ratio and throughput, and reduces node energy consumption.

In [14], M. Tamailarasi et al. discussed a mechanism that integrates a load balancing approach with a transmission power control approach to maximize the life span of a MANET. This proposal reduces the average transmission energy required per packet compared to standard AODV.

In [15], Bhalaji et al. proposed an approach based on the relationships between nodes to make them cooperate in an ad hoc environment. The trust value of each node in the network is calculated by trust units, and a relationship estimator determines the relationship status of the nodes using the calculated trust values.

In [16], Kamal Deep Meka et al. proposed a trust-based framework to improve the security and robustness of ad hoc network routing protocols. For constructing their trust framework they selected the widely used Ad hoc On-demand Distance Vector (AODV) protocol, with the goal of attaining an increased level of security and reliability while making minimal changes to AODV. Their schemes are based on incentives and penalties depending on the behaviour of network nodes, incur minimal additional overhead, and preserve the lightweight nature of AODV.

In [17], Huafeng Wu and Chaojian Shi proposed a trust management model to obtain trust ratings in peer-to-peer systems, with an aggregation mechanism used to indirectly combine and obtain other nodes' trust ratings. The results show that the trust management model can quickly detect misbehaving nodes and limit their impact in a peer-to-peer file sharing system.

The above papers treat the parameters battery power and node trust individually. Our proposal combines these two parameters to discover a reliable route between the source and the destination.

III. DYNAMIC KEY MANAGEMENT SCHEME

A. Dynamic Key Mechanism

There are two basic key management approaches, i.e., public and secret key-based schemes [6]. The public key-based scheme uses a pair of public/private keys and an asymmetric algorithm such as RSA to establish session keys and authenticate nodes. In the latter scheme, a secret key is a symmetric key shared by two nodes, which is used to verify data integrity. The initial construction starts by issuing public key certificates based on users' own knowledge of other users' public keys. Initially, there is a PKI or CA to distribute this knowledge among the users. Clearly, we need to assume that there is some kind of initial trust among the nodes.

We first define a network, as shown in Fig. 1, and then describe a framework of dynamic key management. Let G = (V, E) be a network whose vertices in V are nodes and whose edges in E are direct wireless links between nodes. For each node x, we define the set N1(x), which contains the vertices in the network G that are hop-1, or direct, neighbours of x, i.e.,

N1(x) = {y : (x, y) ∈ E and y ≠ x} --- (1)
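Definitions (1) and (2) are easy to make concrete. A small Python sketch follows; the node names and the chain topology are illustrative assumptions, loosely following the route of Fig. 1:

```python
def n1(edges, x):
    # N1(x) = {y : (x, y) in E and y != x}; links are undirected
    out = set()
    for a, b in edges:
        if a == x and b != x:
            out.add(b)
        elif b == x and a != x:
            out.add(a)
    return out

def n2(edges, x):
    # N2(x): neighbours of N1(x), excluding N1(x) and x itself
    hop1 = n1(edges, x)
    reach = set()
    for y in hop1:
        reach |= n1(edges, y)
    return reach - hop1 - {x}

# Chain s - z - y - x, an assumed topology as in Fig. 1
edges = [("s", "z"), ("z", "y"), ("y", "x")]
print(n1(edges, "s"))  # {'z'}
print(n2(edges, "s"))  # {'y'}
```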


the route discovery message is protected by a keyed hash MAC algorithm such as MD5.Finally, the hash value and signature can now be attached to the route discovery message and sent out to its neighbours [6]. The complete route request (RREQ) packet sent by the node can be summarized as m + h(m + num) + E(num, Ks,pri) -(3 ) where m= M + {IDf }+ SN, M is the original message; IDf denotes the ID of f , which is the node that forwards the message m; SN is the sequence number of the message; and h(m + k) denotes the keyed hash algorithm with a key k on message m, where + denotes, the concatenation of strings, E(m, K) denotes the public key encryption algorithm.

Fig.1 Route Discovery

Similarly, we define the hop-2 neighbours of a node as follows. For each node x, N2(x) contains the vertices in the network G that are hop-2 neighbours of x, which include neither vertices in N1(x) nor x itself, i.e.,

Suppose that z N1(s) is one of s’s hop-1 neighbours. Whenever there is a need for s to initiate a route discovery process, it picks a key k1 at random, which will serve as the shared secret key between s and z. Then, s encrypts the key k1 by using its neighbour’s public key Kz,pub. After that, it encrypts the above encrypted key by using its own private key Ks,pri. The result serves as a signature for the route discovery message, which is protected by a keyed hash MAC algorithm .The complete RREQ sent by s can be summarized as

N2(x) = {z: (y; z) Є E and y Є N1(x), z ≠ x} --- (2)

Similarly, we can define the hop-n neighbours of x [Nn(x)] in terms of Nn−1(x) if the flooding path from the source to destination has n links. Initially, a node x has a public key Kx,pub that is distributed to N1(x) by using PKI or CA. Similarly, a node y has public key Ky,pub distributed to N1(y). Thus, for example, if y N1(x) and x N1(y), i.e., x and y are hop-1 neighbours, then x can authenticate y by issuing a certificate that is signed by x with x’s private key. Those who hold x’s public key can now read the certificate and trust the binding of y and its public key. Based on the available certificate and key information, two hop-1neighboring nodes can easily establish a secret key between them.

mq + h(mq +kl) +E(E(kl, Kz,pb),Ks,pri), for r Є N1(s) ---(4) where mq stands for the message used in RREQ. Then, z sends back s a route reply (RREP) packet in a similar format mp+h(mp+k1)+E (E(k1 ,Ks,pub),K z,pri) , for z (5)

N1(s)---

where mp stands for the message used in RREP. By decrypting the message and comparing the key, s can authenticate z and distribute a shared key to z. Similarly, s establishes a shared key with each of its hop-1 neighbours.

B. Key Distribution and Node Authentication Whenever there is a need for a node to initiate a route discovery process, it creates pairwise shared keys with intermediate nodes, hop by hop, until it reaches the destination. First, it picks random number num. Then, it signs num with its private key by using a public key algorithm like RSA. After that,

Suppose that y N1(z). z can also similarly find out its hop-1 neighbours and also establishes a shared key with each of them. For s to send messages to its hop-2 neighbours, i.e., N2(s), for example, y, s requests z to forward the message to y. In z’s

25


INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.5 MAY 2011

1) Detection of a Single Malicious Node: To be more specific, we assume that z (in Fig. 1) is a compromised node during the route discovery phase, although it is initially authenticated. Clearly, z cannot tamper with the message from s to y, because the message is protected with a key shared between s and y. Of course, z may simply drop the message when it needs to forward the message to y. However, y expects to receive at least two copies of the same message.

handshaking with y, z can pick s's public key instead of a random key and send it to y. This way, s's public key can be delivered to its hop-2 neighbours. Similarly, s can obtain the public keys of its hop-2 neighbours. By checking the acknowledgement message sent back from y via z, s can find out all of its hop-2 neighbours N2(s). Therefore, s can send a message to r ∈ N2(s) via z ∈ N1(s) in the following format:

By comparing these copies from other neighbours, y is still able to detect that z is faulty or compromised. Similarly, y can also detect other internal attacks, such as message fabrication caused by z. Therefore, the attacks initiated by a single inside node can be detected.

m2 + h(m2 + k1) --- (6)

where m2 = m + h(m + k2) + E(E(k2, Kr,pub), Ks,pri), for r ∈ N2(s) --- (7)

2) Detection of Two Colluding Nodes: A more challenging case is the Byzantine attack. In our design of key management schemes, a source has directly established a shared key with each of its hop-n neighbours.

where k2 is the shared key between s and its hop-2 neighbour r. C. Route Discovery and Attack Detection Once the security associations between a source and destination have been established, and trustworthy routes have been identified from source to destination, the source can simply use the shared key to protect the

Suppose that both z and y are compromised and colluding. In addition, s shares a hop-1 key with z (i.e., k1,sz), a hop-2 key with y (i.e., k2,sy), and a hop-3 key with x (i.e., k3,sx). During route discovery, x may receive three copies of a message m from s via the different intermediate nodes y and z, in the following formats:

data traffic sent to the destination: m + h (m + ksd), where ksd is the key shared between the source node (s) and destination node (d).

C1 = m + h(m + k3,sx)

To detect internal attacks, including Byzantine attacks, we assume the following.

C2 = m + h(m + k2,sy) + h(m + h(m + k2,sy) + k1,yx)
C3 = m + h(m + k1,sz) + h(m + h(m + k1,sz) + k1,zy) +

1) Each node has a pair of public/private keys and a unique ID. A compromised node participates in routing until detected.

h(m + h(m + k1,sz) + k1,yx) --- (8)

When x receives the messages C1, C2, and C3, which come from s, y, and z, respectively, it compares them and finds discrepancies among the messages. C1 comes directly from s and thus can be trusted; y cannot change the message without being detected, and thus C2 must match C1. Therefore, C3 has been modified, and x finds that there may be compromised or faulty nodes among the nodes that forwarded the message, e.g., z and/or y: z has modified the message, but y does not report this during its forwarding. If y reports the discrepancy between the two copies, then z must be a compromised node. Otherwise, both y and z are

2) The source and destination nodes are secured by external security agents. There is a shared key between the source and destination nodes.
3) Each of the intermediate nodes between the source and destination has established a shared key with the source node by using the key management scheme described in Section III-B.
4) There are enough uncompromised nodes in the network so that a message can arrive at the destination via different routes.
In this section, the algorithm is extended to detect collusion in Byzantine attacks, in which two or more nodes collude to drop, fabricate, modify, or misroute packets, and these nodes are consecutively located on a path [6].

compromised and colluding nodes, although y does not change the message.
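The comparison x performs on the copies C1-C3 can be sketched as follows. The key values, message contents, and the detect helper are illustrative assumptions; only the direct copy (keyed with k3,sx) is authenticated by its MAC, and any forwarded copy that disagrees with it exposes a faulty forwarder.

```python
import hashlib
import hmac

def h(m: bytes, k: bytes) -> bytes:
    # keyed hash h(m + k), realized here as HMAC-SHA256
    return hmac.new(k, m, hashlib.sha256).digest()

def detect(copies: dict, k3_sx: bytes):
    # copies maps each forwarding route to the payload it delivered.
    # The direct copy C1 arrives keyed with k3,sx, so x authenticates it first.
    m_direct, mac_direct = copies["direct"]
    if not hmac.compare_digest(mac_direct, h(m_direct, k3_sx)):
        return "direct copy forged"
    # any forwarded copy that disagrees with the trusted C1 is suspect
    return [src for src, (m, _) in copies.items()
            if src != "direct" and m != m_direct]

if __name__ == "__main__":
    k3_sx = b"shared-key-s-x"
    m = b"route discovery message"
    copies = {
        "direct": (m, h(m, k3_sx)),       # C1: trusted, keyed with k3,sx
        "via_y":  (m, b"..."),            # C2: y forwarded m unchanged
        "via_z":  (b"modified", b"..."),  # C3: z tampered with m
    }
    print(detect(copies, k3_sx))  # ['via_z']
```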




In summary, the internal attacks initiated by a single compromised node and the Byzantine attacks can be detected without using expensive aggregated signatures as in [18], which are used to protect a route from end to end.

For that, a very simple equation is used to calculate the trust value:

TV = tanh(R1 + R2 + A) --- (9)

where TV = Trust Value,

IV. TRUST MODELING AND OPTIMAL ROUTING
Trust modelling is a technical approach to representing trust for digital processing. The trust values are estimated by considering various attributes related to the behaviour of the node over a certain time. Each node in the network stores a trust value that represents how trustworthy each of its neighbour nodes is. This trust value is adjusted based on the experiences that the node has with its neighbour nodes.

If the denominator is not zero and R1 is less than the chosen threshold (0 < R1 < 1), this can indicate a selective packet-drop attack.

Based on the above parameters, the trust level of each node can be one of the following types:

A = acknowledgement bit (0 or 1). If an acknowledgment is received for the data transmission from the destination, the nodes on that path are assigned the value 1; otherwise, the value 0 is assigned.

STRANGER
• Node x has never sent/received any messages to/from node y.
• Trust levels between them are very low.
• The probability of malicious behaviour is very high.
• Newly arrived nodes are grouped into this category.
KNOWN

Based on the trust value (TV) calculated for each node, the trust levels can be estimated as shown in Table I. TABLE I: TRUST ESTIMATION OF A NODE

• Node x has sent/received some messages to/from node y.
• Trust levels between them are neither too low nor too high.
• The probability of malicious behaviour is to be observed.
FRIEND
• Node x has sent/received plenty of messages to/from node y.
• Trust levels between them are very high.
• The probability of malicious behaviour is very low.

Threshold Trust Value   Trust Level
0.7 - 1.0               F
0.4 - 0.6               K
0.0 - 0.3               S
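Eq. (9) and the Table I thresholds can be sketched directly; the R1, R2, and A inputs shown are arbitrary example values.

```python
import math

def trust_value(r1: float, r2: float, a: int) -> float:
    # Eq. (9): TV = tanh(R1 + R2 + A)
    return math.tanh(r1 + r2 + a)

def trust_level(tv: float) -> str:
    # thresholds follow Table I
    if tv >= 0.7:
        return "F"  # friend
    if tv >= 0.4:
        return "K"  # known
    return "S"      # stranger

if __name__ == "__main__":
    print(trust_level(trust_value(0.9, 0.8, 1)))  # 'F': tanh(2.7) ~ 0.991
    print(trust_level(trust_value(0.1, 0.1, 0)))  # 'S': tanh(0.2) ~ 0.197
```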

Also, the association between nodes is asymmetric; i.e., node x may not trust node y in the same way that node y trusts node x, or vice versa. Each node in an ad hoc network will have identified its neighbourhood friends over a certain period of time by evaluating their trust levels. Some of the neighbourhood friends may suddenly turn malicious and non-cooperative due to node capturing. To detect this, each node, before starting

A. Association Evaluator Technique: The association status depends upon the trust value. The trust values are calculated based on the following parameters of the nodes.




the data transfer may invoke the trust evaluator for a specific interval of time and can re-establish the trust levels.

transmission or reception of a data packet, the remaining energy is found using Eq. (13). If the remaining energy falls below 50%, that node will not act as a router to forward packets.

If the threshold trust value is not satisfied, the friend is degraded to a known, and its packets are not forwarded. This is the penalty the node pays for not being cooperative. If, however, the node turns out to be a repenting offender that is no longer malicious and has behaved normally for a certain amount of time, re-socialization or re-integration into the network is possible once the threshold trust level for a friend is satisfied. In this case, the concerned node will have to work its way up to raise its trust level to the threshold set for a friend.

C. Reliability Relation
Table II gives the relationship between the trust level and the remaining energy of each node, which determines its trustworthiness.

TABLE II: TRUSTWORTHINESS OF EACH NODE

Trust Level   Remaining Energy %   Reliability (R)   Trustworthiness (T)
F             80-100               Very very high    1.0
K             80-100               Very high         0.8
F             50-79                High              0.6
K             50-79                Medium            0.4
F             00-49                Low               0.3
K             00-49                Low               0.2
S             50-100               Low               0.2
S             00-49                Low               0.1
M             00-40                Very low          0.0

B. Power Consumption
Every node in the MANET calculates its power consumption and finds its remaining energy periodically. Each node may operate in any of the following modes:

1) Transmission mode: The power consumed for transmitting a packet is given by Eq. (10):

Consumed energy = Pt * T --- (10)

where Pt is the transmitting power and T is the transmission time.

2) Reception mode: The power consumed for receiving a packet is given by Eq. (11):

Consumed energy = Pr * T --- (11)

where Pr is the reception power and T is the reception time.

By using the trustworthiness value of each node we calculate the path trust.

The value T can be calculated as

T = Data size / Data rate --- (12)

Calculating Path Trustworthiness:

Hence, the remaining energy of each node can be calculated using Eq. (10) or Eq. (11):

Consider a path p ∈ Ps→x, where Ps→x is the set of paths that start from a source node s to a destination node x, i.e., Ps→x = {all paths from s to x}.

Rem energy = Current energy − Consumed energy --- (13)
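The energy bookkeeping of Eqs. (10)-(13), together with the 50% forwarding rule, can be sketched as follows; NodeEnergy is a hypothetical helper class, and the power, data-size, and data-rate figures are arbitrary examples.

```python
class NodeEnergy:
    def __init__(self, capacity: float = 100.0):
        # every node starts at full battery capacity, say 100%
        self.current = capacity

    def spend(self, power: float, data_size: float, data_rate: float) -> None:
        t = data_size / data_rate  # Eq. (12): T = Data size / Data rate
        self.current -= power * t  # Eqs. (10)/(11) and (13)

    def can_forward(self) -> bool:
        # below 50% remaining energy, the node refuses router duty
        return self.current >= 50.0

if __name__ == "__main__":
    n = NodeEnergy()
    n.spend(power=2.0, data_size=512, data_rate=64)   # consumes 2 * 8 = 16
    print(n.current, n.can_forward())                  # 84.0 True
    n.spend(power=5.0, data_size=512, data_rate=64)   # consumes 5 * 8 = 40
    print(n.current, n.can_forward())                  # 44.0 False
```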

Each node on the path p calculates its trustworthiness using Table II by checking its trust level and remaining energy, and takes the following decision:

Initially, every node has full battery capacity, say 100%, which is assigned to the current energy. On each




If the reliability is very low, the node discards the route request; otherwise, if the reliability is acceptable, the cumulative reliability is found by adding the predecessor's trustworthiness to its own trustworthiness.

Qx(p) = Σn∈p [Fn / (Bn − Fn) + τn · Fn] --- (16)

which is the average number of packets in the network based on the hypothesis that each queue behaves as an M/M/1 queue of packets. Note that for a link on path p, a smaller value of Qx(p) is preferred, either because of a smaller delay or a relatively larger link capacity.
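Assuming the cost of Eq. (16) takes the classic M/M/1 flow-model form Σ [Fn/(Bn − Fn) + τn·Fn] suggested by the surrounding description (the typeset equation does not survive in this extract), a per-path cost can be sketched as:

```python
def path_cost(links):
    # links: (B_n, F_n, tau_n) triples for each link on the path.
    # F/(B - F) is the average M/M/1 queue occupancy of the link;
    # tau * F adds the processing/propagation term.
    total = 0.0
    for b, f, tau in links:
        assert b > f >= 0, "flow must stay below link capacity"
        total += f / (b - f) + tau * f
    return total

if __name__ == "__main__":
    # a lightly loaded link beats a near-saturated one of the same capacity
    print(path_cost([(10.0, 2.0, 0.0)]) < path_cost([(10.0, 9.0, 0.0)]))  # True
```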

Cumulative Reliability,

Rewriting (16) into a recursive format, we have

If a node has already received a route request with the same source address and the same broadcast ID, and the cumulative reliability of that earlier request is less than the cumulative reliability of the current route request, the previous route request path is rejected and the current route request path is recorded.

(17)

Combining both requirements on trustworthiness and performance, a path that is less reliable and does not meet the desired performance must be penalized in our objective; thus, a combined cost function can be designed as

Path Trustworthiness, (15)

(18)

where β > 0 is a constant used to scale the value of the cost function.

D. Mathematical Formulation of Optimal Routing

Now the routing problem can be written as

The routing metrics can be quantified as follows. First, we need to consider the trustworthiness of each candidate route. We assume that each node has locally built up a trustworthiness repository for the nodes it knows based on its CR and current behaviour observed in the topology discovery phase. A destination node has also calculated a path trustworthiness value for each possible route from the source node, as shown in above section. The repository is updated every time the topology is rediscovered.

minimize D(p), for p ∈ Ps→x --- (19)

subject to the constraints

Bn − Fn ≥ 0; τn ≥ 0; Rmin ≤ Rx(n, j) ≤ 1; for n ∈ p --- (20)

where Rmin is the minimum trustworthiness required for a node to be allowed to join a route. In (18)-(20), we explicitly integrate the security and performance requirements into a routing problem. Note that if the intermediate nodes between s and x have the same levels of trustworthiness, then all the routes between s and x can equivalently be measured by the traditional hop counts. Routes with the same hop counts will have the same levels of trustworthiness. In this case, the optimal route is determined only by the performance requirements, as shown in (18), provided that the requirement on the minimum trustworthiness is met. By solving the optimization problem, we can develop a routing algorithm.
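The optimization (19)-(20) amounts to filtering candidate paths by feasibility and then minimizing the cost. A minimal sketch, assuming a caller-supplied cost function for D(p) and per-node trust values (the path representation and all names here are illustrative):

```python
def optimal_route(paths, d_of, r_of, r_min):
    # keep only paths satisfying the constraints of (20), then pick the
    # path with the least cost D(p) as in (19)
    feasible = [
        p for p in paths
        if all(link["B"] > link["F"] for link in p["links"])  # Bn - Fn > 0 (stable queues)
        and all(r_min <= r_of(n) <= 1.0 for n in p["nodes"])  # Rmin <= R <= 1
    ]
    return min(feasible, key=d_of, default=None)

if __name__ == "__main__":
    trust = {"a": 0.9, "b": 0.8, "c": 0.2}
    paths = [
        {"nodes": ["a", "c"], "links": [{"B": 10, "F": 2}], "cost": 1.0},
        {"nodes": ["a", "b"], "links": [{"B": 10, "F": 5}], "cost": 2.0},
    ]
    best = optimal_route(paths, d_of=lambda p: p["cost"],
                         r_of=lambda n: trust[n], r_min=0.4)
    print(best["nodes"])  # ['a', 'b']: the cheaper path fails the trust bound
```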

Second, the performance requirement must be considered in making routing decisions. Assume that the transmission capacity of the wireless link that originates from a node n is Bn. The traffic requirement for the link is Fn, which is measured in the same units as Bn. For delay-sensitive traffic, we also use τn to represent the processing and propagation delay when being delivered by n to its next hop. A frequently used objective function [6] is,




performance and trustworthiness requirements, as shown in (20), will be eliminated from the route.

Here, the source node selects multiple routes as candidates. Each intermediate node along the candidate routes computes a cost from the source and passes it to the next node on the way to the destination. The cost contains two parts, i.e., PTp and Qx, in terms of (15) and (17), respectively, which can be calculated based on the nodes along the route. Therefore, each route can be assigned an index called the Trustworthiness-QoS index (TQI), i.e., a number representing the combined trustworthiness and performance cost along the route, updated by each intermediate node along the route. The destination chooses a final route among the candidates in terms of the accumulated TQI value.
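The TQI accumulation and the destination's choice can be sketched as follows; the link costs and the wait_for count are illustrative, and the TQI is modelled simply as an accumulated number.

```python
def forward_rreq(rreq, link_cost):
    # each intermediate node adds its link's trustworthiness/QoS cost
    # to the accumulated TQI carried in the RREQ
    rreq = dict(rreq)
    rreq["tqi"] += link_cost
    rreq["path"] = rreq["path"] + [link_cost]
    return rreq

def choose_route(rreqs, wait_for=3):
    # the destination waits for a fixed number of RREQs, then replies
    # along the candidate with the least accumulated TQI
    if len(rreqs) < wait_for:
        return None  # keep waiting
    return min(rreqs, key=lambda r: r["tqi"])

if __name__ == "__main__":
    start = {"tqi": 0.0, "path": []}
    candidates = [
        forward_rreq(forward_rreq(start, 0.5), 0.75),   # route 1: TQI 1.25
        forward_rreq(start, 0.75),                      # route 2: TQI 0.75
        forward_rreq(forward_rreq(start, 0.25), 0.25),  # route 3: TQI 0.5
    ]
    best = choose_route(candidates, wait_for=3)
    print(best["tqi"])  # 0.5
```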

5) When a link breakage in an active route is detected, a route error (RERR) packet is used to notify the other nodes that the loss of that link has occurred. Some maintenance procedures are needed, as in AODV.
V. CONCLUSION
Security of mobile ad hoc networks has recently gained momentum in the research community. Due to the open medium of ad hoc networks and their inherent lack of infrastructure, security exposures can be an obstacle to basic network operation. It is impossible to find a general approach that can work efficiently against all kinds of attacks, since every attack has its own distinct characteristics. There are only a few works on detecting and defending against internal attacks on MANET routing protocols.

E. Routing Algorithm
The heuristic algorithm can be summarized as follows.
1) During route discovery, a source node sends RREQ packets to its neighbouring nodes. In these packets, along with the regular information, the node also sends its security-related information, such as key information.

This paper develops an optimal routing algorithm by combining both trustworthiness and performance. To derive trustworthiness, we used a dynamic trust-based approach in which associations between nodes are used to make ad hoc networks resistant in a Byzantine environment. A fairness mechanism is included through the notion of friends, knowns, and strangers. As long as a number of alternative routes exist, this protocol works well by choosing the optimal paths. This novel protocol can work in critical environments such as military scenarios. The proposed approach is the first secure routing scheme that quantitatively considers not only security and network performance but also network lifetime, by taking energy into account as well.

2) Once an RREQ packet is received by an intermediate node, it calculates the TQI by using (18). The node places the link trustworthiness and QoS information in the RREQ packet and forwards it to its next hop. This process is repeated until the packet reaches the final destination.
3) At the destination, the node waits for a fixed number of RREQs before it makes a decision. Alternatively, a particular time can be set for which the destination or intermediate node needs to wait before making a routing decision. Once the various RREQs are received, the destination node compares the various TQI values and selects the route with the least cost. It then unicasts the RREP back to the source node. When the source node receives the RREP, it starts data communication by using the route.

REFERENCES
[1] P. Papadimitratos and Z. J. Haas, "Securing the Internet Routing Infrastructure," IEEE Communications Magazine, vol. 40, no. 10, October 2002, pp. 60-68. DOI 10.1109/MCOM.2002.1039858.
[2] Dhaval Gada, Rajat Gogri, Punit Rathod, Zalak Dedhia, Nirali Mody, Sugata Sanyal, and Ajith Abraham, "A Distributed Security Scheme for Ad Hoc Networks," ACM Publications, vol. 11, issue 1, 2004, pp. 5-5.

4) Once the route is established, the intermediate nodes monitor the link status of the next hops in the active routes. Those that do not meet the




[16] Meka, Virendra, and Upadhyaya, "Trust based routing decisions in mobile ad-hoc networks" In Proceedings of the Workshop on Secure Knowledge Management, 2006.

[3] Hoang Lan Nguyen and UyenTrang Nguyen, “A study of different types of attacks on multicast in mobile ad hoc networks”, IEEE proceedings in International Conference on Networking (ICN 2006), 2006.

[17] Huafeng Wu and Chaojian Shi, "A Trust Management Model for P2P File Sharing System," International Conference on Multimedia and Ubiquitous Engineering, IEEE Xplore 978-0-7695-3134-2/08, 2008.

[4] Xiaoxin Wu, David K.Y, “Mitigating denial-of-service attacks in MANET by incentive-based packet filtering: A game-theoretic approach”, 3rd International conference on secure Communications, September 2007, pp. 310-319.

[18] B. Awerbuch, R. Curtmola, D. Holmer, and C. Nita-Rotaru, ODSBR: An On-Demand Secure Byzantine Routing Protocol, Oct. 15, 2003, JHU CS Tech. Rep., Ver. 1.

[5] "A survey of routing attacks in mobile ad hoc networks," IEEE Wireless Communications, vol. 14, issue 5, December 2007, ISSN 1536-1284, pp. 85-91.
[6] Ming Yu, Mengchu Zhou, and Wei Su, "A Secure Routing Protocol Against Byzantine Attacks for MANETs in Adversarial Environments," IEEE Transactions on Vehicular Technology, vol. 58, issue 1, Jan. 2009, pp. 449-460.
[7] N. Nasser and Yunfeng Chen, "Enhanced Intrusion Detection System for Discovering Malicious Nodes in Mobile Ad Hoc Networks," IEEE International Conference on Communications (ICC '07), 24-28 June 2007, pp. 1154-1159.
[8] Sun Choi, Doo-young Kim, Do-hyeon Lee, and Jae-il Jung, "WAP: Wormhole Attack Prevention Algorithm in Mobile Ad Hoc Networks," International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing, ISBN 978-0-7695-3158-8, 2008, pp. 343-348.
[9] S. Dhanalakshmi and M. Rajaram, "A Reliable and Secure Framework for Detection and Isolation of Malicious Nodes in MANET," IJCSNS International Journal of Computer Science and Network Security, vol. 8, no. 10, October 2008.
[10] Moumita Deb, "A Cooperative Black Hole Node Detection Mechanism for ADHOC Networks," Proceedings of the World Congress on Engineering and Computer Science, 2008.
[11] Y.-C. Hu, A. Perrig, and D. B. Johnson, "Rushing attacks and defense in wireless ad hoc network routing protocols," in Proc. ACM WiSe, Sep. 2003, pp. 30-40.
[12] Gupta Nishant and Das Samir, "Energy-aware on-demand routing for mobile Ad Hoc networks," Lecture Notes in Computer Science, ISSN 0302-9743, Springer, International Workshop on Distributed Computing, 2002.
[13] Rekha Patil and A. Damodaram, "Cost Based Power Aware Cross Layer Routing Protocol For Manet," IJCSNS International Journal of Computer Science and Network Security, vol. 8, no. 12, December 2008.
[14] M. Tamilarasi and T. G. Palani Velu, "Integrated Energy-Aware Mechanism for MANETs using On-demand Routing," International Journal of Computer, Information, and Systems Science, and Engineering 2;3, www.waset.org, Summer 2008.
[15] N. Bhalaji and A. Shanmugam, "Reliable Routing against selective packet drop attack in DSR based MANET," Journal of Software, vol. 4, no. 6, August 2009, pp. 536-543.




Investigating the Particle Swarm Optimization Clustering Method on Nucleic Acid Sequences
Barileé B. Baridam
Department of Computer Science, University of Pretoria, South Africa

Abstract— Particle swarm optimization (PSO) has been employed on several optimization problems, including the clustering problem. PSO has also been employed in the clustering of data of different structure and dimensionality. In this paper, it is employed in the clustering of nucleic acid sequences. The application of clustering, as a statistical tool, in the analysis of data of varied complexity has been treated by several researchers. Besides PSO, distance-based algorithms have been widely proposed for the clustering problem. This paper investigates the efficiency of PSO clustering on nucleic acid sequences through the introduction of distance measures, among which are the Euclidean distance measure, the Manhattan distance, the edit distance, and the codon-based scoring method (COBASM). Sub-objective weights were introduced to observe the behaviour of PSO under various conditions. From the results obtained, PSO-based clustering produces compact and well-separated clusters; however, the results varied with the distance measure.

Keywords: Nucleic acids, clustering, similarity measure, PSO

Clustering, in computational biology, goes beyond a mere statistical tool for information retrieval. It actually reveals the genetic information of participating sequences. Such information helps in the determination of gene families and the establishment of implicit links between them. Clustering of biological sequence data presents a great challenge to the computing community as well as to biologists. This challenge arises from the fact that sequence data cannot be easily clustered by the application of conventional distance or similarity measures. Also, string edit distance algorithms employed in string comparisons and string similarity searches are mostly not suitable for biological sequence data clustering [2]. This is basically because the structural nature of biological sequences makes the string edit distance inappropriate. For example, the edit distance between the strings bbbbbbbddddddd and dddddddbbbbbbb suggests there is no similarity between the strings. However, looking at the strings biologically, there is an element of structural similarity which the edit distance neglects. Since the issue of structural similarity is major in biological sequence analysis, the edit distance and other distance-based algorithms are incapable of clustering biological sequences. The introduction of particle swarm optimization (PSO) becomes necessary at this point, since it has been proven to be robust in the handling of optimization problems [3]. This means, then, that distance measures will have to be used with the PSO-based clustering method to observe their performance under various conditions. Since PSO has already been successfully applied to data clustering and image segmentation [4], [3], this paper investigates the efficiency of the PSO-based clustering method in clustering nucleic acid sequences with respect to the distance measures. The measures used are the Euclidean distance, the Manhattan distance, and the


I. INTRODUCTION
Clustering, as an important aspect of knowledge discovery, has as its main aim to group related elements based on some predefined measure of closeness or proximity. Clustering involves the discovery of relationships in data without the application of any prior knowledge of the relationships. The final result of clustering depends on the perception of the user through the application of some subjective decisions. These decisions are (1) the definition and measurement of the relationships between the data elements that would warrant clustering, (2) the actual number of clusters expected in the clustering task, and (3) the representation of the generated clusters. Most conventional clustering algorithms employ distance or similarity measures to determine objects' proximity and to generate clusters [1].




features and exhibited statistical properties. CLUSEQ builds a probabilistic suffix tree during sequence initialization. Although this method seems better than most sequence clustering methods, CLUSEQ does not consider that some sequences can exhibit closer similarity than others depending on whether the sequences are amino acids or nucleic acids [13].

edit distance. The codon-based scoring method (COBASM) [5] is also used with PSO in the clustering of nucleic acid sequences. COBASM considers the application of codons to maintain the structural similarity of sequences. The remainder of the paper is organized as follows: Section II presents related work, Section III discusses particle swarm optimization, Section IV describes the distance measures employed in the PSO clustering task, Section V is devoted to the experimental results obtained with the PSO-based sequence clustering, and Section VI presents the conclusion and directions for further research.

Most clustering methods employ distance measures to determine the proximity of data elements. Some of these distance/similarity measures are mentioned in Section IV. The edit distance, originally designed for similarity search, is also employed in clustering tasks. It has been proven that the edit distance lacks the ability to handle sequences based on their structural similarities [2]. Muthukrishnan and Sahinalp [14] extended the edit distance with block operations in an attempt to optimize its performance. Furthermore, to further improve the efficiency of the edit distance, Cormode and Muthukrishnan [15] introduced a greedy algorithm to reduce moves of substrings to moves of characters and convert moves of characters to only inserts and deletes.
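The structural-similarity objection raised in the introduction can be checked with a standard dynamic-programming edit distance: the two block-swapped strings are maximally distant even though they share the same two structural blocks.

```python
def edit_distance(s: str, t: str) -> int:
    # classic Levenshtein distance with a rolling one-row table
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (cs != ct)))    # substitution
        prev = cur
    return prev[-1]

if __name__ == "__main__":
    a, b = "bbbbbbbddddddd", "dddddddbbbbbbb"
    # maximal distance despite the shared b- and d-blocks in swapped order
    print(edit_distance(a, b))  # 14
```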

II. RELATED WORK
Several methods have been proposed for data clustering tasks [6]. These methods have been divided into two broad categories: hierarchical and partitional. One of the most highly researched partitional algorithms is the K-means algorithm, a partitional iterative approach [7] to data clustering. The K-means algorithm is popular, and most criticized for demanding the number of clusters for a clustering task a priori. However, the K-means algorithm is simple and easy to implement, with linear time complexity.
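A minimal sketch of one K-means assignment/update pass, on illustrative 1-D data, shows why K must be supplied a priori: the number of centroids fixes the number of clusters.

```python
def kmeans_step(patterns, centroids):
    # assign each pattern to its nearest centroid, then recompute centroids
    clusters = [[] for _ in centroids]
    for p in patterns:
        nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    new_centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return new_centroids, clusters

if __name__ == "__main__":
    centroids, clusters = kmeans_step([0.0, 1.0, 9.0, 10.0], [0.5, 8.0])
    print(centroids)  # [0.5, 9.5]
```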

In the same vein, Lopresti and Tomkins [16] proposed block edit models for approximate string matching, which could be extended to sequence clustering, by examining string edit distance in which two strings are compared by extracting collections of substrings and placing the two strings into correspondence with each other.

Fuzzy C-means (FCM) is a clustering method that introduces the fuzzy version of K-means [8], [9]. Although FCM still demands the provision of the value of K a priori, it outperforms K-means in that it is less affected by the presence of uncertainty in the data [10].

III. PARTICLE SWARM OPTIMIZATION

The K-harmonic means algorithm computes the harmonic means of each cluster centre to every pattern and then updates the cluster centroids accordingly [11]. The K-harmonic means is less affected by the initial conditions. Experimental results show that it outperforms the FCM and K-means [12].

Particle swarm optimization (PSO) is derived from the social behaviour of, and the implicit rules adhered to by, birds in a flock that enable them to move synchronously without colliding [17]. The belief that social sharing of information by members of a population may provide an evolutionary advantage was the basic idea behind the development of PSO [18]. Naturally, our problems are sometimes solved by our interactions with one another. These interactions produce socio-cognitive experience which ultimately affects our behaviours and attitudes, otherwise referred to as the social and cognitive components. The cognitive component represents the particle's own experience

Yang and Wang [2] proposed CLUSEQ for the clustering of sequences based on sequence structural

A codon is simply a tri-nucleotide (triplets of bases - A, C, G, and U or T, typifying Adenine, Cytosine, Guanine, Uracil and Thymine, respectively) sequence that is used to identify or specify an amino acid.




PSO has been used by Van der Merwe and Engelbrecht [4] to cluster sets of multidimensional data using a fitness function consisting of quantization error only. In general, the results show that the PSO-based clustering algorithm performs better than the K-means algorithm. PSO is more likely to find near-optimal solutions than K-means. This is because, whereas PSO is less sensitive to the effect of the initial conditions owing to its population-based nature, K-means, as a greedy algorithm, depends on the initial conditions. PSO-based clustering has also been used by Omran [3] in the clustering of image pixels. In his work, several versions of PSO were examined. The gbest PSO was found to outperform most of the other versions on most data sets. Tillett et al. [20] employed PSO in the clustering of sensors in a sensor network. When the PSO technique was tested against random search and simulated annealing, it was found to be more robust. PSO has also been applied in document clustering [21]. Cui et al. demonstrated that the hybrid PSO algorithm employed in the task of document clustering was able to generate more compact clusters in comparison to the K-means algorithm. Gene clustering was done by Xiao et al. [22] by applying the Self-Organizing Map (SOM) and PSO. SOM and PSO were applied independently in gene clustering. The result obtained when both methods were used together was better than when the individual methods were used.

as to where the best solution is, while the social component represents the belief of the entire swarm as to where the best solution is. PSO simulates this idea of a social optimization where social organisms tend to move towards the direction of optimal benefit. The two early variants of the PSO algorithm are referred to as the gbest (global best) PSO and the lbest (local best) PSO. The particles (or a swarm of individuals) in the gbest PSO move toward their best previous positions and toward the best particle in the entire swarm. In the lbest PSO each particle moves towards its best previous positions and towards the best particle in its restricted neighbourhood [19]. The gbest PSO has been employed in unsupervised image classification and is considered efficient in cluster analysis in comparison to lbest PSO [3]. The personal best position, yi, of particle i is the best position the particle has ever visited. The best position is the position that resulted in the best fitness value. Considering f to represent a fitness function, then, the personal best position of particle i at time step t is computed as:

y_i(t+1) = y_i(t), if f(x_i(t+1)) ≥ f(y_i(t))
y_i(t+1) = x_i(t+1), if f(x_i(t+1)) < f(y_i(t)) --- (1)

The current position of particle i is denoted by xi. The velocity of particle i for the lbest PSO is calculated as in equation (2). For the gbest PSO, ŷij = ŷj, for all i = 1,…, nx (the total size of the swarm), where ŷij is the neighbourhood best position of the particle and ŷj is the position of the global best particle.

B. PSO-based Clustering Algorithm
In this paper, PSO-based clustering is employed in the clustering of nucleic acid sequence data, with minor modifications for the data type. Several nucleotides combine to form nucleic acid sequences, which are referred to in this paper as patterns. Each sequence represents a particle (a candidate solution) in the swarm. Patterns identify particles, and a single particle represents the cluster centroids of the individual clusters. To measure the fitness of each particle, Equation (4) was used.

vij(t+1) = vij(t) + c1 r1j(t)[yij(t) − xij(t)] + c2 r2j(t)[ŷij(t) − xij(t)] --- (2)

xij(t+1) = xij(t) + vij(t+1) --- (3)

where vij(t), yij(t) and xij(t) are the velocity, the personal best position and the current position, respectively, of particle i in an Nd-dimensional swarm, P, for j = 1, ···, Nd at time step t, c1 and c2 are positive acceleration constants used to scale the contribution of the cognitive and social components, respectively, and r1j(t), r2j(t) ~ U(0, 1) are random values in the range [0,1]. Equation (3) is used to update the particle's new position at every iteration.
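A single update step, following the velocity and position relations described here (without the inertia term that some later PSO variants add; c1 and c2 values are illustrative), can be sketched as:

```python
import random

def pso_step(x, v, y_personal, y_global, c1=1.5, c2=1.5):
    # one gbest PSO update for a single particle, per dimension j
    new_v, new_x = [], []
    for j in range(len(x)):
        r1, r2 = random.random(), random.random()
        vj = (v[j]
              + c1 * r1 * (y_personal[j] - x[j])   # cognitive component
              + c2 * r2 * (y_global[j] - x[j]))    # social component
        new_v.append(vj)
        new_x.append(x[j] + vj)                    # position update, Eq. (3)
    return new_x, new_v

if __name__ == "__main__":
    x, v = [0.0, 0.0], [0.1, -0.2]
    # when the particle already sits at both best positions, only its
    # previous velocity moves it
    nx, nv = pso_step(x, v, y_personal=x, y_global=x)
    print(nv == v, nx)
```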

where Intmax and Intmin are respectively the intra and inter-cluster distances, w1 and w2 are user-defined constants used respectively to specify the weight that

A. PSO Clustering Method



INTERNATIONAL JOURNAL OF INNOVATIVE TECHNOLOGY & CREATIVE ENGINEERING (ISSN:2045-8711) VOL.1 NO.5 MAY 2011

influences how much the intra- and inter-cluster distances will contribute to the final fitness, and nmax is the maximum value in the data set (between 0 and 5 in this paper, i.e. 4). The intra- and inter-cluster distances are measured by calculating the maximum and minimum average distance within and between the clusters, respectively [3], and are given as

    Intmax = max_{k=1,...,K} { (1/mk) * sum_{si in Sk} d(si, ck) }        (5)

and

    Intmin = min { d(ck, ck') : k != k' }        (6)

where Sk is the kth cluster, si is the ith sequence in cluster Sk, ck is the centroid of cluster Sk, mk is the number of sequences in Sk, and K is the number of clusters formed for the clustering problem. The notation d(x, y) is used in equations (5), (6) and (7) to denote the distance between the points x and y. The quantization error function is employed to determine the quality of the clustering and is defined as

    Qe = (1/K) * sum_{k=1,...,K} [ (1/mk) * sum_{si in Sk} d(si, ck) ]        (7)

VII. DISTANCE MEASURES

This section examines the distance/similarity measures employed in this paper in the clustering of nucleic acid sequences. Most clustering tasks are performed on the basis of some similarity or dissimilarity measure; distance or similarity measures are mathematical representations of closeness. The selection of a distance measure for clustering is an important task, because the measure influences the shape of the clusters: some patterns may be close to one another according to one distance measure and farther away according to another. This was observed with the distance measures listed below.

A. Euclidean Distance
The most widely used distance measures are the Euclidean distance and the squared Euclidean distance. The Minkowski metric, from which the Euclidean distance is derived, is defined as

    d(x, y) = ( sum_{k=1,...,Nd} |x_k - y_k|^β )^(1/β)

The Euclidean distance is a special case of the Minkowski metric with β = 2 [23], and it tends to form hyper-spherical clusters [23]. The squared Euclidean distance metric uses the same equation but without the square root, which makes clustering with the squared Euclidean distance faster than with the regular Euclidean distance.

In summary, the PSO clustering algorithm is given in Fig. 1.

    Initialize each particle to contain K cluster centroids ck
    for t = 1 to Imax do
        for each sequence si do
            (i)   calculate the distance d(si, ck) to the centroid ck of each cluster Sk
            (ii)  allocate sequence si to the cluster Sk for which
                  d(si, ck) = min_{k=1,...,K} { d(si, ck) }
            (iii) calculate the fitness using equation (4)
        endfor
        Update the pbest positions and the gbest solution
    endfor

Fig. 1. The PSO clustering algorithm
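The loop in Fig. 1 can be sketched in Python. This is an illustrative sketch, not the authors' implementation: the fitness follows equation (4), but the inertia weight w = 0.72 and the acceleration constants c1 = c2 = 1.49 are common defaults from the PSO literature rather than values taken from this paper, and all names are illustrative.

```python
import math
import random

def euclidean(a, b):
    return math.dist(a, b)

def fitness(centroids, data, w1=0.5, w2=0.5, nmax=4.0):
    # equation (4): w1 * Intmax + w2 * (nmax - Intmin), to be minimized
    clusters = [[] for _ in centroids]
    for p in data:
        k = min(range(len(centroids)), key=lambda k: euclidean(p, centroids[k]))
        clusters[k].append(p)
    intra = max((sum(euclidean(p, c) for p in cl) / len(cl)) if cl else 0.0
                for c, cl in zip(centroids, clusters))
    inter = min(euclidean(centroids[i], centroids[j])
                for i in range(len(centroids))
                for j in range(i + 1, len(centroids)))
    return w1 * intra + w2 * (nmax - inter)

def pso_cluster(data, k, dim, n_particles=8, iters=40, c1=1.49, c2=1.49, w=0.72):
    rng = random.Random(1)
    # each particle is a list of k centroids in the encoded range [1, 4]
    X = [[[rng.uniform(1, 4) for _ in range(dim)] for _ in range(k)]
         for _ in range(n_particles)]
    V = [[[0.0] * dim for _ in range(k)] for _ in range(n_particles)]
    P = [[row[:] for row in x] for x in X]              # personal bests
    pf = [fitness(x, data) for x in X]
    g = min(range(n_particles), key=lambda i: pf[i])
    G, gf = [row[:] for row in P[g]], pf[g]             # global best
    for _ in range(iters):
        for i in range(n_particles):
            for a in range(k):
                for j in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    V[i][a][j] = (w * V[i][a][j]        # eq. (2), plus inertia w
                                  + c1 * r1 * (P[i][a][j] - X[i][a][j])
                                  + c2 * r2 * (G[a][j] - X[i][a][j]))
                    X[i][a][j] += V[i][a][j]            # eq. (3)
            f = fitness(X[i], data)
            if f < pf[i]:                               # eq. (1)
                pf[i], P[i] = f, [row[:] for row in X[i]]
                if f < gf:
                    gf, G = f, [row[:] for row in P[i]]
    return G, gf

data = [[1.0, 1.0], [1.2, 1.1], [3.8, 4.0], [4.0, 3.9]]
G, f = pso_cluster(data, k=2, dim=2)
print(len(G), len(G[0]))
```

The swarm here is tiny; in practice the number of particles and iterations would match the values reported in the experimental section.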

B. Edit Distance
The edit distance (also called the Levenshtein distance) is another distance measure, developed by Levenshtein [24] and employed in sequence similarity search. The edit distance is a generalization of the Hamming distance. It is used in DNA sequence analysis, plagiarism detection, speech recognition, and spell checking [25]. The edit distance is the minimum number of edit operations (insertions, deletions and substitutions) needed to transform one




sequence into another. For two sequences S1[1..i] and S2[1..j], the edit distance (ED) between S1 and S2, denoted d(i, j), is defined as

    d(i, 0) = i
    d(0, j) = j
    d(i, j) = min { d(i-1, j) + 1, d(i, j-1) + 1, d(i-1, j-1) + cost }

where cost = 0 if S1[i] = S2[j] and cost = 1 otherwise. The value d(i, j) is, therefore, the minimum number of edit operations needed to transform the first i characters of S1 into the first j characters of S2. Using the algorithm in Figure 2, the edit distance is calculated with a bottom-up dynamic programming approach, as is common for string algorithms [26]. If the lengths of S1 and S2 are denoted by n and m, respectively, the edit distance between the two sequences is the value d(n, m), obtained by computing d(i, j) for all combinations of i and j, for 0 ≤ i ≤ n and 0 ≤ j ≤ m. The edit distance is simple and easy to implement. However, it has the following disadvantages:
• It has O(mn) time and space complexity, which makes it rather slow when the data set is large.
• It parallelizes poorly as a result of its large data dependencies.

    int ED(char s[1..m], char t[1..n])
        declare int d[0..m, 0..n]    // table with m+1 rows and n+1 columns
        for i = 0,...,m do d[i, 0] = i endfor
        for j = 0,...,n do d[0, j] = j endfor
        for i = 1,...,m do
            for j = 1,...,n do
                if s[i] = t[j] then cost = 0 else cost = 1 endif
                // deletion, insertion and substitution
                d[i, j] = minimum(d[i-1, j] + 1, d[i, j-1] + 1, d[i-1, j-1] + cost)
            endfor
        endfor
        return d[m, n]

Fig. 2. Edit Distance Algorithm

C. The Codon-based Scoring Method
The codon-based scoring method (COBASM) [27], [5] takes an entire source sequence and compares each character with the target sequence in the same way the edit distance does. However, instead of scoring mismatches, COBASM scores matches: where the characters compared match, COBASM scores 1 per character, and 0 otherwise. If there is a consecutive block of three similar characters, an additional 1 is added to the score. This procedure continues until all the characters have been compared. In order to capture all the codons in the target sequence, COBASM then continues the search from the second position in the target sequence. The idea is to capture the principle governing the construction of the codon table used in the formation of the twenty amino acids found in proteins. Nucleic acid (DNA/RNA) sequences are considered similar only if the percentage similarity is at least 70% [13]. Therefore, the value obtained from COBASM must reach 70% of the entire length of the source sequence before a sequence can be considered a member of the cluster. The algorithm is given in Figure 3.

A sequence is a contiguous collection of nucleotide symbols. The symbols are A, C, G and T in DNA, with T replaced by U in RNA. In sequence clustering the data are represented in symbolic form and need to be converted to numeric form before PSO can be applied. To achieve this, the nucleotides are assigned the values A = 1, C = 2, G = 3, U = T = 4. The resulting sequence data can be interpreted as a series of events separated by intervals: a symbol (now represented in numeric form) is regarded as an event, and a comma (,) as an interval. An event interval is therefore represented by a lower and an upper bound; for example, (1, 3, 2) with intervals in between, in a 3-dimensional plane, means AGC. A sequence of length 60 will have 60 events and 59 intervals, i.e. 60 dimensions. COBASM is simple to implement, and the results show that it is robust in the task of sequence clustering compared to the edit distance. In the experiments performed in this paper, when the Euclidean distance is replaced with COBASM in PSO-based sequence clustering, the results obtained show a significant improvement over the other methods. It is proven by Baridam [5] that COBASM satisfies




the condition for metrics. This justifies the usage of COBASM alongside other distance metrics in this paper.
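The match-and-codon-block idea behind COBASM can be sketched in Python. This is a simplified sketch for equal-length sequences only; it omits the unequal-length pattern-element search of Fig. 3, the function names are illustrative, and the 70% membership threshold follows [13].

```python
def cobasm_score(s1, s2):
    # 1 point per matching character, plus a bonus point for every
    # aligned block of three consecutive matching characters
    assert len(s1) == len(s2), "sketch handles equal-length sequences only"
    score = sum(1 for a, b in zip(s1, s2) if a == b)
    for i in range(0, len(s1) - 2, 3):          # codon-sized blocks
        if s1[i:i + 3] == s2[i:i + 3]:
            score += 1
    return score

def is_cluster_member(source, target):
    # sequences are considered similar only if the score reaches
    # 70% of the source length [13]
    return cobasm_score(source, target) >= 0.7 * len(source)

print(cobasm_score("ACGACG", "ACGACU"))
```

In the example, five characters match and the first codon block matches as a whole, so the bonus raises the score above the plain character count.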

    Initialize S1 and S2
    for |S1|: i = 1 to n do
        for |S2|: j = 1 to m do
            // if the sequences are unaligned or of unequal length
            if n < m then
                // do a pattern-element search:
                // compare s1[i] with s2[j], s2[j+1], ..., s2[m-n] and
                // s1[i+1] with s2[j+1], s2[j+2], ..., s2[m-n+1]
                if s1[i] = s2[j] then score = 1 else score = 0 endif
            endif
            if n = m then    // sequences of equal length
                // examine each character of S1 and S2
                if s1[i] = s2[j] then score = 1 else score = 0 endif
            endif
            // split S1 and S2 (including gaps, if aligned) into blocks of
            // three nucleotides each and compare adjacent blocks
            for i, j ≥ 0 do    // do a total block match
                if s1[i+1, i+2, i+3] = s2[j+1, j+2, j+3] then score = score + 1 endif
            endfor
        endfor
    endfor
    return score

Fig. 3. A pseudo-code for the codon-based scoring method

D. Manhattan Distance
The Manhattan distance metric is defined as

    d(S1, S2) = sum_{i=1,...,Nd} |S1i - S2i|

where Nd is the number of variables, and S1i and S2i are the values of the ith variable at the points S1 and S2, respectively. The Manhattan distance is measured as the sum of the displacements along the vertical and horizontal axes; that is, it computes the distance between points along a grid-like path. The Manhattan distance metric performs poorly on data sets of high dimensionality [28].

VIII. EXPERIMENTAL RESULTS

This section compares the results of applying different distance/similarity measures with the PSO clustering algorithm to the clustering of six sequence data sets. The distance measures are the Euclidean distance, the edit distance (ED), the Manhattan distance and COBASM. The data sets used were emblFasta Rickettsia typhi str. Wilmington complete-genome RNA sequences of 1111500 nucleotides (Accession Number AE017197), Homo sapiens melanotic melanoma DNA sequences, mRNA Bos taurus sequences from the Genetic Sequence Databank (Accession Number BE484664) obtained from the work of Sonstegard et al. [29], and DNA dental sequences from the Department of Microbiology, University of Pretoria, South Africa.
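The earlier claim that patterns may be close under one measure and farther apart under another can be checked numerically. Below is a small Python sketch using the paper's A=1, C=2, G=3, U=T=4 encoding; the example sequences are made up for illustration.

```python
def encode(seq):
    # numeric encoding used in this paper: A=1, C=2, G=3, U=T=4
    code = {'A': 1, 'C': 2, 'G': 3, 'T': 4, 'U': 4}
    return [code[ch] for ch in seq]

def euclidean(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def edit_distance(s, t):
    # classic O(mn) dynamic program of Fig. 2
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[m][n]

a, b, c = "ACGT", "AGCT", "TCGA"
# b and c are equally far from a under the edit distance (2 edits each),
# but much farther apart under the numeric measures
print(edit_distance(a, b), edit_distance(a, c))
print(euclidean(encode(a), encode(b)), euclidean(encode(a), encode(c)))
```

The two candidate sequences tie under the symbolic measure yet differ by a factor of three under the Euclidean measure, which is exactly why the choice of measure shapes the clusters.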

The main purpose was to compare the quality of the clusters generated by each distance measure on the basis of:
• the quantization error, Qe;
• the intra-cluster distance, Intmax; and
• the inter-cluster distance, Intmin.
The intra-cluster and inter-cluster distances define the degree of compactness and separability of the generated clusters, respectively. For all the results obtained, averages over 30 simulations of 100 iterations each are reported, with standard deviations to indicate the range of values to which the distance measures converge.
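For a toy clustering, the three criteria can be computed directly; the example clusters and centroids below are made up for illustration.

```python
import math

def cluster_quality(clusters, centroids):
    # average intra-cluster distance of each cluster S_k
    avg = [sum(math.dist(p, c) for p in cl) / len(cl)
           for cl, c in zip(clusters, centroids)]
    int_max = max(avg)                                   # compactness
    int_min = min(math.dist(centroids[i], centroids[j])  # separability
                  for i in range(len(centroids))
                  for j in range(i + 1, len(centroids)))
    qe = sum(avg) / len(clusters)                        # quantization error
    return qe, int_max, int_min

clusters = [[(1, 1), (1, 2)], [(4, 4), (4, 5)]]
centroids = [(1, 1.5), (4, 4.5)]
qe, int_max, int_min = cluster_quality(clusters, centroids)
print(qe, int_max, int_min)
```

A good clustering drives Qe and Intmax down while keeping Intmin large, which is what the weighted fitness of equation (4) trades off.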

TABLE I PERFORMANCE COMPARISON







The following data sets, of varying complexities, were employed.
• Dataset 1: 500 Rickettsia typhi str. RNA sequences consisting of 30000 nucleotides.
• Dataset 2: 200 Rickettsia typhi str. RNA sequences consisting of 12000 nucleotides.
• Dataset 3: 100 Rickettsia typhi str. RNA sequences consisting of 6000 nucleotides.
• Dataset 4: 31 DNA dental sequences of varying lengths consisting of approximately 12550 nucleotides.
• Dataset 5: 20 Homo sapiens melanotic melanoma DNA sequences of varying lengths and a total of 15658 nucleotides, the longest sequence being 1471 and the shortest 134 nucleotides long.
• Dataset 6: 141 mRNA Bos taurus sequences of 29718 nucleotides, the longest sequence being 508 and the shortest 198 nucleotides long. Accession date: June 15, 2008.

Table 1 summarizes the results obtained for each of the four distance measures. The influence of the sub-objective weights on the contributions of the intra- and inter-cluster distances to the final fitness was investigated. To determine the quality of the clusters generated using Equation (4), the weights were set as follows: w1 = 0.5, 0.6, 0.3, 0.8, 0.1 and w2 = 0.5, 0.4, 0.7, 0.2, 0.9, respectively. The values were chosen to ensure that the sum of the weights w1 and w2 equals 1.0. The final results obtained from this parametric clustering depend strongly on the number of iterations, hence the results in Table 1. The results show a remarkable improvement in the quality, compactness and separability of the clusters generated with COBASM on virtually all the data sets, as indicated by the values in Table 1. The performance of PSO with the other distance measures was also significant, which shows the robustness of PSO-based sequence clustering. However, the Manhattan distance performed very poorly in all cases, confirming that the Manhattan distance measure handles high-dimensional data poorly [28]. For Dataset 1, the quality of the clusters generated improved from 60.7711 with w1 = w2 = 0.5 to 24.7662 with the weights set to 0.3 and 0.7, respectively, with COBASM. The quality further improved with the weights set to 0.6 and 0.4, respectively, for all the distance measures. A significant result was obtained when the weights were set to 0.8 and 0.2, respectively. The results again became poor with the weights set to 0.1 and 0.9, respectively. From these results, it is clear that an increase in the value of w1 produced better-quality clusters. The same trends were observed on all the other data sets. The results further demonstrate that numeric-based distance measures do not produce the best clustering results on nucleic acid sequences.

IX. CONCLUSION AND FURTHER RESEARCH

This paper investigated the performance of a PSO-based clustering method applied to the clustering of nucleic acid sequences, with a focus on the distance measure used. The performance of three distance measures, namely the edit distance, the Manhattan distance and COBASM, was examined alongside the Euclidean distance on high-dimensional clustering problems. Several sub-objective weights were used to observe the robustness of the method. PSO was found to perform best when COBASM was used in the clustering problem. Performance was evaluated on the basis of the quality, compactness and separability of the clusters formed. The results demonstrate that numeric-based distance measures are not capable of producing quality clusters on nucleic acid sequences.

This work can be extended by applying PSO with the codon-based scoring method to the clustering of amino acid (protein) sequences. In the experiments conducted in this paper, multi-dimensional problems were avoided by truncating the sequences to the nearest dimension that could be handled by the PSO clustering functions. An extension to multi-dimensional problems would be a novel contribution.

REFERENCES

[1] M. Kirsten, S. Wrabel, and T. Horvath, "Distance-based approaches to relational learning and clustering," in Relational Data Mining. New York: Springer-Verlag, 2000, pp. 213–230.




[2] J. Yang and W. Wang, "CLUSEQ: efficient and effective sequence clustering," in Proceedings of the 19th International Conference on Data Engineering, Mar. 2003, pp. 101–112.
[3] M. G. H. Omran, "Particle swarm optimization methods for pattern recognition and image processing," PhD thesis, University of Pretoria, Faculty of Engineering, Built Environment and Information Technology, Department of Computer Science, Nov. 2004.
[4] D. W. van der Merwe and A. P. Engelbrecht, "Data clustering using particle swarm optimization," in IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003, pp. 215–220.
[5] B. B. Baridam, "Block-based similarity measure and optimization techniques for nucleic acid sequence clustering," 2011.
[6] B. S. Everitt, Cluster Analysis, 3rd ed. New York: Halsted Press, 1993.
[7] E. Forgy, "Cluster analysis of multivariate data: Efficiency versus interpretability of classification," Biometrics, vol. 21, pp. 768–769, 1965.
[8] J. Bezdek, "A convergence theorem for the fuzzy ISODATA clustering algorithms," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, pp. 1–8, 1980.
[9] J. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, 1981.
[10] A. Liew, S. Leung, and W. Lau, "Fuzzy image clustering incorporating spatial continuity," in IEEE Proceedings Vision, Image and Signal Processing, vol. 147, no. 2, 2000.
[11] Y. Leung, J. Zhang, and Z. Xu, "Clustering by scale-space filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1396–1410, 2000.
[12] G. Hamerly and C. Elkan, "Alternatives to the k-means algorithm that find better clusterings," in Proceedings of the ACM Conference on Information and Knowledge Management (CIKM-2002), 2002, pp. 600–607.
[13] J. Claverie and C. Notredame, Bioinformatics for Dummies, 2nd ed. Indiana: Wiley, 2007.
[14] S. Muthukrishnan and S. C. Sahinalp, "Approximate nearest neighbors and sequence comparison with block operations," in Proceedings of the Thirty-second Annual ACM Symposium on Theory of Computing, 2000, pp. 416–424.
[15] G. Cormode and S. Muthukrishnan, "The string matching problem with moves," ACM Transactions on Algorithms, vol. 3, no. 1, Feb. 2007.
[16] D. Lopresti and A. Tomkins, "Block edit models for approximate string matching," Theoretical Computer Science, vol. 181, pp. 159–179, 1997.
[17] R. C. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the Sixth Symposium on Micro Machine and Human Science. Piscataway, NJ: IEEE Service Center, 1995, pp. 39–43.
[18] K. E. Parsopoulos and M. N. Vrahatis, "Recent approaches to global optimization problems through particle swarm optimization," Natural Computing, vol. 1, pp. 235–306, 2002.
[19] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization: An overview," Swarm Intelligence, vol. 1, pp. 33–57, 2007, doi:10.1007/s11721-007-0002-0.
[20] J. C. Tillett, R. M. Rao, F. Sahin, and T. M. Rao, "Particle swarm optimization for clustering of wireless sensors," in Proceedings of the Society of Photo-Optical Instrumentation Engineers, vol. 5100, no. 73, 2003.
[21] X. Cui, P. Palathingal, and T. E. Potok, "Document clustering using particle swarm optimization," in IEEE Swarm Intelligence Symposium, Pasadena, California, 2005, pp. 185–191.
[22] X. Xiao, E. Dow, R. Eberhart, B. Z. Miled, and R. Oppelt, "Gene clustering using self-organizing maps and particle swarm optimization," in Proceedings of the Second IEEE International Workshop on High Performance Computational Biology, Nice, France, 2003.
[23] R. Xu and D. Wunsch II, "Survey of clustering algorithms," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 601–614, May 2005.
[24] V. I. Levenshtein, "Binary codes capable of correcting deletions, insertions, and reversals," Doklady Akademii Nauk SSSR, vol. 163, no. 4, pp. 845–848, Jan. 1965.
[25] M. Gilleland, "Levenshtein distance, in three flavours," Merriam Park Software, available online: www.merriampark.com/ld.htm, Tech. Rep.
[26] D. Gusfield, Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. Cambridge, UK: Cambridge University Press, 1997.
[27] B. B. Baridam and O. O. Owolabi, "Conceptual clustering of RNA sequences with codon usage model," Global Journal of Computer Science and Technology, vol. 10, no. 8, pp. 41–45, 2010.
[28] G. Hamerly, "Learning structure and concepts in data using data clustering," PhD thesis, University of California, San Diego, 2003.
[29] T. Sonstegard, A. V. Capuco, J. White, C. P. Van Tassell, E. E. Connor, J. Cho, R. Sultana, L. Shade, J. E. Wray, K. D. Wells, and J. Quackenbush, "Analysis of bovine mammary gland EST and functional annotation of the Bos taurus gene index," Mammalian Genome, vol. 13, no. 7, pp. 373–379, 2002.




An Optimized Energy Efficient Routing Algorithm for Wireless Sensor Networks

Nidhi Batra, Anuj Jain, Surender Dhiman
A.P BMIET, SNP; A.P IITB, SNP

Abstract— A wireless sensor network consists of thousands of individual nodes which collectively work according to the application of the network. Each node is made up of various parts, each of which is a large research area in its own right. One upcoming research area is power consumption, which in turn depends on the energy wastage of nodes. Various steps can be taken to overcome this problem of energy wastage; one of them is the proper design of a routing algorithm that takes into account the factors that increase energy wastage. In this paper an outline of WSNs is first given, including the MAC and routing protocols of WSNs; then an optimised routing protocol is proposed and analysed with the MATLAB simulation tool. Finally the paper is concluded and its future scope is discussed.

Keywords— MAC protocols, Routing protocols, S-MAC, LEACH, SPIN, Directed diffusion, QoS-based routing protocol.

I. INTRODUCTION

The concept of wireless sensor networks [1, 2] is based on a simple equation: Sensing + CPU + Radio = Thousands of potential applications. Wireless sensor networks [3] consist of multiple sensor nodes exchanging data over wireless connections. Each sensor node can collect, process and transfer local environmental information. This opens a wide range of applications, not only in the military field, but also in environment and habitat monitoring, healthcare systems, home automation, traffic control, and early disaster detection. The major factor that influences sensor network design is the lifetime of a node, which can be extended by an energy-efficient routing algorithm. Section II introduces the MAC and routing protocols of wireless sensor networks and gives an overview of various existing protocols. In Section III a new routing algorithm is proposed which takes into account the factors that lead to energy wastage and is also optimised with respect to the distance travelled; the proposed algorithm is then analysed with the MATLAB simulation tool. The last section includes the conclusion and future aspects.

Figure 1. WSN architecture with applications

II. MAC AND ROUTING PROTOCOLS IN WIRELESS SENSOR NETWORKS

Medium Access Control (MAC) protocols [4, 5] coordinate the times at which a number of nodes access a shared communication medium. The task of a MAC protocol is to regulate the access of a number of nodes to a shared medium in such a way that certain application-dependent performance requirements are satisfied; in other words, it specifies how nodes in a network access the shared communication channel. MAC protocols can be classified in several ways: by whether the protocol is fixed-assignment, demand-assignment or random-access based; by which cause of energy wastage the protocol addresses; or by whether the protocol is contention based or schedule based. Various MAC protocols are summarized in Table 1 [9-11].

S-MAC
Description: The S-MAC (Sensor-MAC) protocol provides mechanisms to circumvent idle listening, collisions, and overhearing. S-MAC adopts a periodic wakeup scheme: each node alternates between a fixed-length listen period and a fixed-length sleep period. It includes periodic listen and sleep, collision and overhearing avoidance, adaptive listening, and message passing.
Advantages: The energy waste caused by idle listening is reduced by the sleep schedules.
Disadvantages: Broadcast data packets do not use RTS/CTS, which increases the collision probability. Sleep and listen periods are predefined and constant, which decreases the efficiency of the algorithm under variable traffic load.

T-MAC
Description: The T-MAC protocol removes the drawback of constant sleep and listen periods: in T-MAC, the listen period ends when no activation event has occurred for a time threshold TA.
Advantages: Gives better results under variable load.
Disadvantages: Early sleeping problem.

DEE-MAC
Description: The DEE-MAC protocol reduces energy consumption by forcing idle listening nodes to sleep using synchronization performed at the cluster head. DEE-MAC operation consists of rounds; each round includes a cluster formation phase and a transmission phase.
Advantages: Reduces the cost of idle listening.
Disadvantages: The power of the cluster head is easily depleted.

LEACH
Description: In LEACH the nodes organize themselves into local clusters with one node acting as cluster head. The operation of LEACH is broken up into rounds, where each round begins with a setup phase followed by a steady-state phase.
Advantages: Proper rotation of the cluster head within a cluster.

A-MAC
Description: A-MAC is a TDMA-based MAC protocol that uses the supplied energy efficiently by applying a scheduled power-down mode when there is no data transmission activity. It uses an additional flag field called MORE PACKET.
Advantages: A node can transmit to multiple destinations.
Disadvantages: The overhead increases.

Table 1. Various MAC protocols for WSN
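The effect of a periodic listen/sleep schedule such as S-MAC's on idle-listening energy can be illustrated with a back-of-the-envelope calculation; the power figures below are illustrative, not taken from any datasheet.

```python
def idle_energy(listen_ms, sleep_ms, p_listen_mw=15.0, p_sleep_mw=0.02, seconds=1.0):
    # average idle power (mW) of a node alternating between fixed-length
    # listen and sleep periods, multiplied by the interval length (-> mJ)
    cycle = listen_ms + sleep_ms
    duty = listen_ms / cycle
    avg_mw = duty * p_listen_mw + (1 - duty) * p_sleep_mw
    return avg_mw * seconds

always_on = idle_energy(100, 0)       # no sleeping: radio listens constantly
smac_like = idle_energy(100, 900)     # 10% duty cycle, S-MAC style
print(always_on, smac_like)
```

With a 10% duty cycle the idle energy drops by roughly an order of magnitude, which is the saving the sleep schedules in Table 1 are after; the cost is the latency and early-sleeping issues listed above.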

Routing means providing a path for the data to flow in the network, i.e. sending information from a source to a sink via intermediate nodes. Routing in WSNs [6, 7, 8] is very challenging due to the inherent characteristics of these networks: because of the relatively large number of sensor nodes, it is not possible to build a global addressing scheme for their deployment; in contrast to typical communication networks, almost all applications of sensor networks require the flow of sensed data from multiple sources to a particular base station (BS); sensor nodes are tightly constrained in terms of energy, processing, and storage capacities; and in most application scenarios, nodes in WSNs are generally stationary after deployment, except perhaps for a few mobile nodes. Routing protocols can be classified as network-structure based or protocol-operation based.

Various routing protocols are:

SPIN

SPIN is efficient for short-range applications only. It disseminates all the information at each node to every node in the network, assuming that all nodes in the network are potential base stations. This enables a user to query any node and get the required information immediately. These protocols make use of the property that nodes in close proximity have similar data, and hence there is a need to distribute only the data that other nodes do not possess. The




SPIN family of protocols is designed based on two basic ideas: 1. Sensor nodes operate more efficiently and conserve energy by sending meta-data that describe the sensor data (for example, an image) instead of sending all the data, and sensor nodes must monitor the changes in their energy resources. 2. Conventional protocols like flooding- or gossiping-based routing waste energy and bandwidth by sending extra, unnecessary copies of the data from sensors covering overlapping areas.
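The negotiation idea (advertise meta-data first, send the data only on request) can be sketched as follows. The three-stage ADV/REQ/DATA handshake is the classic SPIN scheme; the class and method names are illustrative.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.have = set()        # meta-data of the data items this node holds

    def adv(self, neighbor, meta):
        # ADV: advertise meta-data; the neighbor sends REQ only for what it lacks
        if meta not in neighbor.have:
            return neighbor.req(self, meta)
        return False             # neighbor already has it: no DATA is sent

    def req(self, sender, meta):
        # REQ: ask for the advertised item; sender answers with DATA
        self.have.add(sender.send_data(meta))
        return True

    def send_data(self, meta):
        return meta              # stand-in for the actual sensor payload

a, b = Node("a"), Node("b")
a.have.add("temp@(3,4)")
sent = a.adv(b, "temp@(3,4)")      # first ADV: b lacks the item, DATA flows
repeat = a.adv(b, "temp@(3,4)")    # second ADV: b has it, nothing is sent
print(sent, repeat)
```

The second advertisement costs only the small meta-data message, which is how SPIN avoids the redundant full-data copies that flooding would send.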

DIRECTED DIFFUSION

Directed diffusion is a data-centric (DC) and application-aware paradigm in the sense that all data generated by sensor nodes is named by attribute-value pairs. The main idea of the DC paradigm is to combine the data coming from different sources en route (in-network aggregation) by eliminating redundancy and minimizing the number of transmissions, thus saving network energy and prolonging its lifetime. DC routing finds routes from multiple sources to a single destination that allow in-network consolidation of redundant data. In directed diffusion, sensors measure events and create gradients of information in their respective neighbourhoods.

PEGASIS

The protocol, called Power-Efficient Gathering in Sensor Information Systems (PEGASIS), is a near-optimal chain-based protocol. The basic idea of the protocol is that, in order to extend the network lifetime, nodes need only communicate with their closest neighbours, and they take turns communicating with the base station. When the round of all nodes communicating with the base station ends, a new round starts, and so on. This reduces the power required to transmit data per round, as the power draining is spread uniformly over all nodes. PEGASIS has two main objectives. First, increase the lifetime of each node by using collaborative techniques, and as a result increase the network lifetime. Second, allow only local coordination between nodes that are close together, so that the bandwidth consumed in communication is reduced.
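The PEGASIS chain is commonly built greedily, starting from the node farthest from the base station and repeatedly appending the nearest unvisited node, so that each node talks only to a close neighbour. A sketch (node coordinates are made up):

```python
import math

def build_chain(nodes, bs):
    # start from the node farthest from the base station,
    # then greedily append the nearest unvisited node
    start = max(nodes, key=lambda n: math.dist(n, bs))
    chain, left = [start], set(nodes) - {start}
    while left:
        nxt = min(left, key=lambda n: math.dist(chain[-1], n))
        chain.append(nxt)
        left.remove(nxt)
    return chain

nodes = [(0, 0), (1, 0), (2, 0), (5, 0)]
print(build_chain(nodes, bs=(10, 0)))
```

Each hop in the resulting chain is short, and only the current chain leader transmits to the distant base station, which is where the per-round energy saving comes from.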

QOS BASED

In QoS-based routing protocols [6], the network has to balance between energy consumption and data quality. In particular, the network has to satisfy certain QoS metrics, e.g., delay, energy, bandwidth, etc. when delivering data to the BS.

COHERENT BASED

In coherent routing, the data is forwarded to aggregators after minimum processing. To perform energy-efficient routing, coherent processing is normally selected. Since coherent processing generates long data streams, energy efficiency must be achieved by path optimality.

III. PROPOSED ROUTING ALGORITHM

The proposed energy-efficient shortest-path routing algorithm builds on the MAC and routing protocols of WSNs, and uses the concept of ACO (ant colony optimisation) to obtain the shortest path between sender and receiver. The routing algorithm is designed as follows:
Step 1: Get the number of nodes and the id of the sink node from the user.
Step 2: Deploy the nodes in the network according to the defined topology; the number of nodes deployed is 5 more than that given by the user (by default). Then initialise all node properties to their defaults: radio, id, sender or receiver, distance, sending




packet, and receiving packet. The distance between each pair of nodes is then computed.
Step 3: The nodes are checked for their radio being 'ON' in order to find a sender and a receiver. After finding the sender, the receiver is chosen on the basis of the shortest distance among the nodes whose radios are 'ON'. Once the sender and receiver are determined, the radios of all other nodes are switched 'OFF' for energy conservation.
Step 4: Set the property 'sp' of the sender, as it has to send a packet to the receiver chosen above. When the receiver starts receiving data, set its property 'rp'. If the reception time is less than the transmission time, the data is transmitted successfully; otherwise the data is retransmitted a prescribed number of times. If, during those retransmissions, the reception time becomes less than the transmission time, the data is transmitted successfully; otherwise the data cannot be read, in which case the nodes are again checked for radios that are 'ON' at that time and the whole procedure is repeated.
Step 5: Put the ids of the sender and receiver nodes in the path file. Then switch the radios of the nodes that acted as sender and receiver 'OFF'.
Step 6: Reset the default properties of all nodes in the network and repeat steps 3 to 5, with the difference that the last element of the path is now taken as the sender.
Step 7: The whole procedure repeats until the last element of the path is the sink provided by the user.
Step 8: Links are made between all elements of the path, defining the route from source to sink via intermediate nodes. The source node is displayed in green, the sink in red, and the intermediate nodes in magenta. The routing path obtained by this method is an energy-efficient shortest path.
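The steps above can be sketched as a greedy nearest-active-neighbour search. This is a simplified sketch: it draws radio states at random, as in Step 3, but omits the retransmission handling of Step 4, and all names are illustrative.

```python
import math
import random

def route(positions, source, sink, p_on=0.8, rng=random.Random(7)):
    # greedy energy-aware path: at each hop, forward to the nearest node
    # (by Euclidean distance) whose radio is currently 'ON'
    path = [source]
    while path[-1] != sink:
        here = path[-1]
        # radios are switched 'ON' at random to balance energy use (Step 3);
        # the sink is always reachable so the search terminates
        active = [n for n in positions
                  if n not in path and (n == sink or rng.random() < p_on)]
        nxt = min(active, key=lambda n: math.dist(positions[here], positions[n]))
        path.append(nxt)
    return path

positions = {"s": (0, 0), "a": (1, 0), "b": (2, 1), "t": (3, 1)}
path = route(positions, "s", "t")
print(path)
```

Because sleeping nodes are excluded from the candidate set, only the chosen hop pair needs its radio on, which mirrors the collision, overhearing and idle-listening savings claimed for the algorithm.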

IV. RESULTS AND ANALYSIS

The proposed routing algorithm was simulated in MATLAB. The GUI designed takes the number of nodes and the sink node, both user defined. It also contains the pushbuttons 'topology' and 'routing', described below, and it displays the path followed by the data from source to sink via intermediate nodes, with low energy consumption and the shortest path. The axes show the deployment of the nodes in the network and the routing path from source to sink. Red marks the sink node, green the source node, and magenta the intermediate nodes. The whole routing algorithm consists of 3 main parts:

1. The first step takes the user-defined number of nodes and sink node.

Figure 2. Number of nodes and sink node (user defined)

2. The nodes are deployed according to the topology defined by the user.

Figure 3. Topology (deployment of nodes in the network)

3. This is the final step of the routing algorithm. It consists of the following sub-functions: nodes, final sender, final receiver, transmission, reception and path. 'Nodes' tells how many nodes have their radio 'ON'. 'Final sender' decides which node acts as the final sender. The final receiver is chosen on the basis of the shortest distance among the active nodes; after this, the radios of all nodes other than the final sender and final receiver are kept 'OFF', so that there is no energy wastage in terms of collision, overhearing and idle listening. After the final sender and receiver are decided, data transmission takes place, followed by data reception. The next step is to check whether the data was transmitted successfully; if not, it is retransmitted a specific number of times.


The final step is to obtain the whole path traversed by the data from source to sink, i.e., the path consisting of the source, intermediate and sink nodes.

Figure 4 Routing path of data (source to sink)

The final graph shows that the proposed routing algorithm consumes less energy than the existing algorithms.

Figure 5 Energy comparison between proposed and existing routing algorithm

V. CONCLUSION AND FUTURE SCOPE

Routing in sensor networks is a new area of research, with a limited but rapidly growing set of results. This work designs an energy-efficient optimized routing algorithm that combines the features of an energy-efficient routing algorithm with those of a shortest-distance routing algorithm, thereby addressing the main factor limiting the proper working of a Wireless Sensor Network, i.e., the energy drain that shortens the lifetime of WSN nodes. Since energy consumption is reduced and distance is also optimized, the proposed routing algorithm is cost effective. In designing the algorithm, the following points were taken into consideration:

1. Nodes listen for data only when their radio is in the 'ON' state, so idle listening is reduced.
2. After the sender and receiver are selected, all other nodes switch their radios 'OFF' until the transmission has taken place.
3. The radio status of nodes is toggled 'ON' at random so that the energy consumption of all nodes remains balanced.

The proposed routing algorithm can be further improved by taking environmental factors into consideration. This can be done by deploying the WSN nodes in a physical environment, transmitting data between the nodes, and comparing the results with the simulation work above; this will show the effect of environmental factors on the routing algorithm in a WSN.
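The direction of the energy comparison can be illustrated with a toy duty-cycling model. All constants below are hypothetical illustration values, not measurements from the paper's simulation:

```python
def energy_used(n_nodes, rounds, p_active, e_listen=1.0, e_sleep=0.05):
    """Toy energy model: per round, each radio that is 'ON' pays the
    listening cost e_listen, while a sleeping radio pays only the
    small cost e_sleep. All unit costs are hypothetical."""
    on = rounds * p_active * n_nodes * e_listen
    off = rounds * (1 - p_active) * n_nodes * e_sleep
    return on + off

always_on = energy_used(20, 100, 1.0)    # every radio 'ON' each round
duty_cycled = energy_used(20, 100, 0.1)  # only ~10% of radios 'ON'
print(always_on, duty_cycled)
```

Keeping most radios 'OFF' between transmissions, as the proposed algorithm does, yields the lower curve of the comparison: the duty-cycled total is a small fraction of the always-on total in this toy model.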


