
ISSN (PRINT): 2328-3491 ISSN (ONLINE): 2328-3580 ISSN (CD-ROM): 2328-3629

Issue 5, Volume 1 & 2 December-2013 to February-2014

American International Journal of Research in Science, Technology, Engineering & Mathematics

International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research)

STEM International Scientific Online Media and Publishing House Head Office: 148, Summit Drive, Byron, Georgia-31008, United States. Offices Overseas: India, Australia, Germany, Netherlands, Canada. Website: www.iasir.net, E-mail (s): iasir.journals@iasir.net, iasir.journals@gmail.com, aijrstem@gmail.com



PREFACE

We are delighted to welcome you to the fifth issue of the American International Journal of Research in Science, Technology, Engineering & Mathematics (AIJRSTEM). In recent years, advances in science, technology, engineering, and mathematics have radically expanded the data available to researchers and professionals in a wide variety of domains. This unique combination of theory with data has the potential to have a broad impact on educational research and practice. AIJRSTEM publishes high-quality, peer-reviewed papers covering topics such as Computer and computational sciences, Physics, Chemistry, Mathematics, Applied mathematics, Biochemistry, Robotics, Statistics, Electrical & Electronics engineering, Mechanical & Industrial engineering, Civil Engineering, Aerospace engineering, Chemical engineering, Astrophysics, Nanotechnology, Acoustical engineering, Atmospheric sciences, Biological sciences, Education and Human Resources, Environmental research and education, Geosciences, Social, Behavioral and Economic sciences, Geospatial technology, Cyber security, Transportation, Energy and Power, Healthcare, Hospitality, Medical and dental sciences, Marine sciences, Renewable sources of energy, Green technologies, Theory and models, and other closely related fields in the discipline of Science, Technology, Engineering & Mathematics. The editorial board of AIJRSTEM is composed of members of the teachers' and researchers' community who have expertise in the fields of Science, Technology, Engineering & Mathematics, brought together in order to develop and implement widespread expansion of high-quality common standards and assessments. These fields are the pillars of growth in our modern society and have a wide impact on our daily lives, with infinite opportunities in a global marketplace. In order to best serve our community, this Journal is available online as well as in hard-copy form. Because of the rapid advances in underlying technologies and the interdisciplinary nature of the field, we believe it is important to provide quality research articles promptly and to the widest possible audience.

We are happy that this Journal has continued to grow and develop. We have made every effort to evaluate and process submissions for review, and to address queries from authors and the general public, promptly. The Journal has strived to reflect the most recent and finest research in the field of emerging technologies, especially those related to science, technology, engineering & mathematics. This Journal is completely refereed and indexed with major databases like: IndexCopernicus, Computer Science Directory, GetCITED, DOAJ, SSRN, TGDScholar, WorldWideScience, CiteSeerX, CRCnetBASE, Google Scholar, Microsoft Academic Search, INSPEC, ProQuest, ArnetMiner, Base, ChemXSeer, citebase, OpenJ-Gate, eLibrary, SafetyLit, VADLO, OpenGrey, EBSCO, UlrichWeb, ISSUU, SPIE Digital Library, arXiv, ERIC, EasyBib, Infotopia, WorldCat, docstoc, JURN, Mendeley, ResearchGate, cogprints, OCLC, iSEEK, Scribd, LOCKSS, CASSI, E-PrintNetwork, intute, and some other databases.

We are grateful to all of the individuals and agencies whose work and support made the Journal's success possible. We want to thank the executive board and core committee members of the AIJRSTEM for entrusting us with this important job. We are thankful to the members of the AIJRSTEM editorial board, who have contributed energy and time to the Journal with their steadfast support and constructive advice, as well as their reviews of submissions. We are deeply indebted to the numerous anonymous reviewers who have contributed expert evaluations of the submissions to help maintain the quality of the Journal. For this fifth issue, we received 119 research papers, of which only 43 are published in two volumes as per the reviewers' recommendations. We have the highest respect for all the authors who have submitted articles to the Journal, for their intellectual energy and creativity, and for their dedication to the field of science, technology, engineering & mathematics.

This issue of the AIJRSTEM has attracted a large number of authors and researchers from across the world, and it provides an effective platform for intellectuals of different streams to put forth suggestions and ideas that might prove beneficial for the accelerated development of emerging technologies in science, technology, engineering & mathematics, and that may open new areas for research and development. We hope you will enjoy this fifth issue of the American International Journal of Research in Science, Technology, Engineering & Mathematics, and we look forward to hearing your feedback and receiving your contributions.

(Administrative Chief)

(Managing Director)

(Editorial Head)

--------------------------------------------------------------------------------------------------------------------------The American International Journal of Research in Science, Technology, Engineering & Mathematics (AIJRSTEM), ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629 (December-2013 to February-2014, Issue 5, Volume 1 & 2). ---------------------------------------------------------------------------------------------------------------------------


BOARD MEMBERS


EDITOR IN CHIEF Prof. (Dr.) Waressara Weerawat, Director of Logistics Innovation Center, Department of Industrial Engineering, Faculty of Engineering, Mahidol University, Thailand. Prof. (Dr.) Yen-Chun Lin, Professor and Chair, Dept. of Computer Science and Information Engineering, Chang Jung Christian University, Kway Jen, Tainan, Taiwan. Divya Sethi, GM Conferencing & VSAT Solutions, Enterprise Services, Bharti Airtel, Gurgaon, India. CHIEF EDITOR (TECHNICAL) Prof. (Dr.) Atul K. Raturi, Head School of Engineering and Physics, Faculty of Science, Technology and Environment, The University of the South Pacific, Laucala campus, Suva, Fiji Islands. Prof. (Dr.) Hadi Suwastio, College of Applied Science, Department of Information Technology, The Sultanate of Oman and Director of IETI-Research Institute-Bandung, Indonesia. Dr. Nitin Jindal, Vice President, Max Coreth, North America Gas & Power Trading, New York, United States. CHIEF EDITOR (GENERAL) Prof. (Dr.) Thanakorn Naenna, Department of Industrial Engineering, Faculty of Engineering, Mahidol University, Thailand. Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Huiyun Liu, Department of Electronic & Electrical Engineering, University College London, Torrington Place, London. ADVISORY BOARD Prof. (Dr.) Kimberly A. Freeman, Professor & Director of Undergraduate Programs, Stetson School of Business and Economics, Mercer University, Macon, Georgia, United States. Prof. (Dr.) Klaus G. Troitzsch, Professor, Institute for IS Research, University of Koblenz-Landau, Germany. Prof. (Dr.) T. Anthony Choi, Professor, Department of Electrical & Computer Engineering, Mercer University, Macon, Georgia, United States. Prof. (Dr.) Fabrizio Gerli, Department of Management, Ca' Foscari University of Venice, Italy. Prof. (Dr.) Jen-Wei Hsieh, Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taiwan. Prof. (Dr.) Jose C. Martinez, Dept. Physical Chemistry, Faculty of Sciences, University of Granada, Spain. Prof. (Dr.) Panayiotis Vafeas, Department of Engineering Sciences, University of Patras, Greece. Prof. (Dr.) Soib Taib, School of Electrical & Electronics Engineering, University Science Malaysia, Malaysia. Prof. (Dr.) Vit Vozenilek, Department of Geoinformatics, Palacky University, Olomouc, Czech Republic. Prof. (Dr.) Sim Kwan Hua, School of Engineering, Computing and Science, Swinburne University of Technology, Sarawak, Malaysia. Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Rafael Ignacio Alvarez Sanchez, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Praneel Chand, Ph.D., M.IEEEC/O School of Engineering & Physics Faculty of Science & Technology The University of the South Pacific (USP) Laucala Campus, Private Mail Bag, Suva, Fiji. Prof. (Dr.) Francisco Miguel Martinez, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Antonio Zamora Gomez, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Leandro Tortosa, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) 
Samir Ananou, Department of Microbiology, Universidad de Granada, Granada, Spain. Dr. Miguel Angel Bautista, Department de Matematica Aplicada y Analisis, Facultad de Matematicas, Universidad de Barcelona, Spain.



Prof. (Dr.) Prof. Adam Baharum, School of Mathematical Sciences, University of Universiti Sains, Malaysia, Malaysia. Dr. Cathryn J. Peoples, Faculty of Computing and Engineering, School of Computing and Information Engineering, University of Ulster, Coleraine, Northern Ireland, United Kingdom. Prof. (Dr.) Pavel Lafata, Department of Telecommunication Engineering, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, 166 27, Czech Republic. Prof. (Dr.) P. Bhanu Prasad, Vision Specialist, Matrix vision GmbH, Germany, Consultant, TIFACCORE for Machine Vision, Advisor, Kelenn Technology, France Advisor, Shubham Automation & Services, Ahmedabad, and Professor of C.S.E, Rajalakshmi Engineering College, India. Prof. (Dr.) Anis Zarrad, Department of Computer Science and Information System, Prince Sultan University, Riyadh, Saudi Arabia. Prof. (Dr.) Mohammed Ali Hussain, Professor, Dept. of Electronics and Computer Engineering, KL University, Green Fields, Vaddeswaram, Andhra Pradesh, India. Dr. Cristiano De Magalhaes Barros, Governo do Estado de Minas Gerais, Brazil. Prof. (Dr.) Md. Rizwan Beg, Professor & Head, Dean, Faculty of Computer Applications, Deptt. of Computer Sc. & Engg. & Information Technology, Integral University Kursi Road, Dasauli, Lucknow, India. Prof. (Dr.) Vishnu Narayan Mishra, Assistant Professor of Mathematics, Sardar Vallabhbhai National Institute of Technology, Ichchhanath Mahadev Road, Surat, Surat-395007, Gujarat, India. Dr. Jia Hu, Member Research Staff, Philips Research North America, New York Area, NY. Prof. Shashikant Shantilal Patil SVKM , MPSTME Shirpur Campus, NMIMS University Vile Parle Mumbai, India. Prof. (Dr.) Bindhya Chal Yadav, Assistant Professor in Botany, Govt. Post Graduate College, Fatehabad, Agra, Uttar Pradesh, India. REVIEW BOARD Prof. (Dr.) Kimberly A. Freeman, Professor & Director of Undergraduate Programs, Stetson School of Business and Economics, Mercer University, Macon, Georgia, United States. Prof. (Dr.) Klaus G. Troitzsch, Professor, Institute for IS Research, University of Koblenz-Landau, Germany. Prof. (Dr.) T. Anthony Choi, Professor, Department of Electrical & Computer Engineering, Mercer University, Macon, Georgia, United States. Prof. (Dr.) Yen-Chun Lin, Professor and Chair, Dept. of Computer Science and Information Engineering, Chang Jung Christian University, Kway Jen, Tainan, Taiwan. Prof. (Dr.) Jen-Wei Hsieh, Department of Computer Science and Information Engineering, National Taiwan University of Science and Technology, Taiwan. Prof. (Dr.) Jose C. Martinez, Dept. Physical Chemistry, Faculty of Sciences, University of Granada, Spain. Prof. (Dr.) Joel Saltz, Emory University, Atlanta, Georgia, United States. Prof. (Dr.) Panayiotis Vafeas, Department of Engineering Sciences, University of Patras, Greece. Prof. (Dr.) Soib Taib, School of Electrical & Electronics Engineering, University Science Malaysia, Malaysia. Prof. (Dr.) Sim Kwan Hua, School of Engineering, Computing and Science, Swinburne University of Technology, Sarawak, Malaysia. Prof. (Dr.) Jose Francisco Vicent Frances, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Rafael Ignacio Alvarez Sanchez, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Francisco Miguel Martinez, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) 
Antonio Zamora Gomez, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Leandro Tortosa, Department of Science of the Computation and Artificial Intelligence, Universidad de Alicante, Alicante, Spain. Prof. (Dr.) Samir Ananou, Department of Microbiology, Universidad de Granada, Granada, Spain. Dr. Miguel Angel Bautista, Department de Matematica Aplicada y Analisis, Facultad de Matematicas, Universidad de Barcelona, Spain. Prof. (Dr.) Prof. Adam Baharum, School of Mathematical Sciences, University of Universiti Sains, Malaysia, Malaysia. Prof. (Dr.) Huiyun Liu, Department of Electronic & Electrical Engineering, University College London, Torrington Place, London.



Dr. Cristiano De Magalhaes Barros, Governo do Estado de Minas Gerais, Brazil. Prof. (Dr.) Pravin G. Ingole, Senior Researcher, Greenhouse Gas Research Center, Korea Institute of Energy Research (KIER), 152 Gajeong-ro, Yuseong-gu, Daejeon 305-343, KOREA. Prof. (Dr.) Dilum Bandara, Dept. Computer Science & Engineering, University of Moratuwa, Sri Lanka. Prof. (Dr.) Faudziah Ahmad, School of Computing, UUM College of Arts and Sciences, University Utara Malaysia, 06010 UUM Sintok, Kedah Darulaman. Prof. (Dr.) G. Manoj Someswar, Principal, Dept. of CSE at Anwar-ul-uloom College of Engineering & Technology, Yennepally, Vikarabad, RR District., A.P., India. Prof. (Dr.) Abdelghni Lakehal, Applied Mathematics, Rue 10 no 6 cite des fonctionnaires dokkarat 30010 Fes Marocco. Dr. Kamal Kulshreshtha, Associate Professor & Head, Deptt. of Computer Sc. & Applications, Modi Institute of Management & Technology, Kota-324 009, Rajasthan, India. Prof. (Dr.) Anukrati Sharma, Associate Professor, Faculty of Commerce and Management, University of Kota, Kota, Rajasthan, India. Prof. (Dr.) S. Natarajan, Department of Electronics and Communication Engineering, SSM College of Engineering, NH 47, Salem Main Road, Komarapalayam, Namakkal District, Tamilnadu 638183, India. Prof. (Dr.) J. Sadhik Basha, Department of Mechanical Engineering, King Khalid University, Abha, Kingdom of Saudi Arabia. Prof. (Dr.) G. SAVITHRI, Department of Sericulture, S.P. Mahila Visvavidyalayam, Tirupati517502, Andhra Pradesh, India. Prof. (Dr.) Shweta jain, Tolani College of Commerce, Andheri, Mumbai. 400001, India. Prof. (Dr.) Abdullah M. Abdul-Jabbar, Department of Mathematics, College of Science, University of Salahaddin-Erbil, Kurdistan Region, Iraq. Prof. (Dr.) ( Mrs.) P.Sujathamma, Department of Sericulture, S.P.Mahila Visvavidyalayam, Tirupati-517502, India. Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana) India. Prof. (Dr.) Manjulatha, Dept of Biochemistry,School of Life Sciences,University of Hyderabad,Gachibowli, Hyderabad, India. Prof. (Dr.) Upasani Dhananjay Eknath Advisor & Chief Coordinator, ALUMNI Association, Sinhgad Institute of Technology & Science, Narhe, Pune -411 041, India. Prof. (Dr.) Sudhindra Bhat, Professor & Finance Area Chair, School of Business, Alliance University Bangalore-562106, India. Prof. Prasenjit Chatterjee , Dept. of Mechanical Engineering, MCKV Institute of Engineering West Bengal, India. Prof. Rajesh Murukesan, Deptt. of Automobile Engineering, Rajalakshmi Engineering college, Chennai, India. Prof. (Dr.) Parmil Kumar, Department of Statistics, University of Jammu, Jammu, India Prof. (Dr.) M.N. Shesha Prakash, Vice Principal, Professor & Head of Civil Engineering, Vidya Vikas Institute of Engineering and Technology, Alanahally, Mysore-570 028 Prof. (Dr.) Piyush Singhal, Mechanical Engineering Deptt., GLA University, India. Prof. M. Mahbubur Rahman, School of Engineering & Information Technology, Murdoch University, Perth Western Australia 6150, Australia. Prof. Nawaraj Chaulagain, Department of Religion, Illinois Wesleyan University, Bloomington, IL. Prof. Hassan Jafari, Faculty of Maritime Economics & Management, Khoramshahr University of Marine Science and Technology, khoramshahr, Khuzestan province, Iran Prof. (Dr.) Kantipudi MVV Prasad , Dept of EC, School of Engg., R.K.University, Kast urbhadham, Tramba, Rajkot-360020, India. Prof. (Mrs.) 
P.Sujathamma, Department of Sericulture, S.P.Mahila Visvavidyalayam, ( Women's University), Tirupati-517502, India. Prof. (Dr.) M A Rizvi, Dept. of Computer Engineering and Applications, National Institute of Technical Teachers' Training and Research, Bhopal M.P. India. Prof. (Dr.) Mohsen Shafiei Nikabadi, Faculty of Economics and Management, Industrial Management Department, Semnan University, Semnan, Iran. Prof. P.R.SivaSankar, Head, Dept. of Commerce, Vikrama Simhapuri University Post Graduate Centre, KAVALI - 524201, A.P. India. Prof. (Dr.) Bhawna Dubey, Institute of Environmental Science( AIES), Amity University, Noida, India. Prof. Manoj Chouhan, Deptt. of Information Technology, SVITS Indore, India.



Prof. Yupal S Shukla, V M Patel College of Management Studies, Ganpat University, KhervaMehsana. India. Prof. (Dr.) Amit Kohli, Head of the Department, Department of Mechanical Engineering, D.A.V.Institute of Engg. and Technology, Kabir Nagar, Jalandhar,Punjab (India). Prof. (Dr.) Kumar Irayya Maddani, and Head of the Department of Physics in SDM College of Engineering and Technology, Dhavalagiri, Dharwad, State: Karnataka (INDIA). Prof. (Dr.) Shafi Phaniband, SDM College of Engineering and Technology, Dharwad, INDIA. Prof. M H Annaiah, Head, Department of Automobile Engineering, Acharya Institute of Technology, Soladevana Halli, Bangalore -560107, India. Prof. (Dr.) Prof. R. R. Patil, Director School Of Earth Science, Solapur University, Solapur Prof. (Dr.) Manoj Khandelwal, Dept. of Mining Engg, College of Technology & Engineering, Maharana Pratap University of Agriculture & Technology, Udaipur, 313 001 (Rajasthan), India Prof. (Dr.) Kishor Chandra Satpathy, Librarian, National Institute of Technology, Silchar-788010, Assam, India Prof. (Dr.) Juhana Jaafar, Gas Engineering Department, Faculty of Petroleum and Renewable Energy Engineering (FPREE), Universiti Teknologi Malaysia-81310 UTM Johor Bahru, Johor. Prof. (Dr.) Rita Khare, Assistant Professor in chemistry, Govt. Women’s College, Gardanibagh, Patna, Bihar. Prof. (Dr.) Raviraj Kusanur, Dept of Chemistry, R V College of Engineering, Bangalore-59, India. Prof. (Dr.) Hameem Shanavas .I, M.V.J College of Engineering, Bangalore Prof. (Dr.) Sanjay Kumar, JKL University, Ajmer Road, Jaipur Prof. (Dr.) Pushp Lata Faculty of English and Communication, Department of Humanities and Languages, Nucleus Member, Publications and Media Relations Unit Editor, BITScan, BITS, PilaniIndia. Prof. Arun Agarwal, Faculty of ECE Dept., ITER College, Siksha 'O' Anusandhan University Bhubaneswar, Odisha, India Prof. (Dr.) Pratima Tripathi, Department of Biosciences, SSSIHL, Anantapur Campus Anantapur515001 (A.P.) India. Prof. (Dr.) Sudip Das, Department of Biotechnology, Haldia Institute of Technology, I.C.A.R.E. Complex, H.I.T. Campus, P.O. Hit, Haldia; Dist: Puba Medinipur, West Bengal, India. Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family Studies College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana) India. Prof. (Dr.) R.K.Tiwari, Professor, S.O.S. in Physics, Jiwaji University, Gwalior, M.P.-474011. Prof. (Dr.) Deepak Paliwal, Faculty of Sociology, Uttarakhand Open University, Haldwani-Nainital Prof. (Dr.) Dr. Anil K Dwivedi, Faculty of Pollution & Environmental Assay Research Laboratory (PEARL), Department of Botany,DDU Gorakhpur University,Gorakhpur-273009,India. Prof. R. Ravikumar, Department of Agricultural and Rural Management, TamilNadu Agricultural University,Coimbatore-641003,TamilNadu,India. Prof. (Dr.) R.Raman, Professor of Agronomy, Faculty of Agriculture, Annamalai university, Annamalai Nagar 608 002Tamil Nadu, India. Prof. (Dr.) Ahmed Khalafallah, Coordinator of the CM Degree Program, Department of Architectural and Manufacturing Sciences, Ogden College of Sciences and Engineering Western Kentucky University 1906 College Heights Blvd Bowling Green, KY 42103-1066. Prof. (Dr.) Asmita Das , Delhi Technological University (Formerly Delhi College of Engineering), Shahbad, Daulatpur, Delhi 110042, India. Prof. 
(Dr.)Aniruddha Bhattacharjya, Assistant Professor (Senior Grade), CSE Department, Amrita School of Engineering , Amrita Vishwa VidyaPeetham (University), Kasavanahalli, Carmelaram P.O., Bangalore 560035, Karnataka, India. Prof. (Dr.) S. Rama Krishna Pisipaty, Prof & Geoarchaeologist, Head of the Department of Sanskrit & Indian Culture, SCSVMV University, Enathur, Kanchipuram 631561, India Prof. (Dr.) Shubhasheesh Bhattacharya, Professor & HOD(HR), Symbiosis Institute of International Business (SIIB), Hinjewadi, Phase-I, Pune- 411 057, India. Prof. (Dr.) Vijay Kothari, Institute of Science, Nirma University, S-G Highway, Ahmedabad 382481, India. Prof. (Dr.) Raja Sekhar Mamillapalli, Department of Civil Engineering at Sir Padampat Singhania University, Udaipur, India. Prof. (Dr.) B. M. Kunar, Department of Mining Engineering, Indian School of Mines, Dhanbad 826004, Jharkhand, India. Prof. (Dr.) Prabir Sarkar, Assistant Professor, School of Mechanical, Materials and Energy Engineering, Room 307, Academic Block, Indian Institute of Technology, Ropar, Nangal Road, Rupnagar 140001, Punjab, India.



Prof. (Dr.) K.Srinivasmoorthy, Associate Professor, Department of Earth Sciences, School of Physical,Chemical and Applied Sciences, Pondicherry university, R.Venkataraman Nagar, Kalapet, Puducherry 605014, India. Prof. (Dr.) Bhawna Dubey, Institute of Environmental Science (AIES), Amity University, Noida, India. Prof. (Dr.) P. Bhanu Prasad, Vision Specialist, Matrix vision GmbH, Germany, Consultant, TIFACCORE for Machine Vision, Advisor, Kelenn Technology, France Advisor, Shubham Automation & Services, Ahmedabad, and Professor of C.S.E, Rajalakshmi Engineering College, India. Prof. (Dr.)P.Raviraj, Professor & Head, Dept. of CSE, Kalaignar Karunanidhi, Institute of Technology, Coimbatore 641402,Tamilnadu,India. Prof. (Dr.) Damodar Reddy Edla, Department of Computer Science & Engineering, Indian School of Mines, Dhanbad, Jharkhand 826004, India. Prof. (Dr.) T.C. Manjunath, Principal in HKBK College of Engg., Bangalore, Karnataka, India. Prof. (Dr.) Pankaj Bhambri, I.T. Deptt., Guru Nanak Dev Engineering College, Ludhiana 141006, Punjab, India. Prof. Shashikant Shantilal Patil SVKM , MPSTME Shirpur Campus, NMIMS University Vile Parle Mumbai, India. Prof. (Dr.) Shambhu Nath Choudhary, Department of Physics, T.M. Bhagalpur University, Bhagalpur 81200, Bihar, India. Prof. (Dr.) Venkateshwarlu Sonnati, Professor & Head of EEED, Department of EEE, Sreenidhi Institute of Science & Technology, Ghatkesar, Hyderabad, Andhra Pradesh, India. Prof. (Dr.) Saurabh Dalela, Department of Pure & Applied Physics, University of Kota, KOTA 324010, Rajasthan, India. Prof. S. Arman Hashemi Monfared, Department of Civil Eng, University of Sistan & Baluchestan, Daneshgah St.,Zahedan, IRAN, P.C. 98155-987 Prof. (Dr.) R.S.Chanda, Dept. of Jute & Fibre Tech., University of Calcutta, Kolkata 700019, West Bengal, India. Prof. V.S.VAKULA, Department of Electrical and Electronics Engineering, JNTUK, University College of Eng.,Vizianagaram5 35003, Andhra Pradesh, India. Prof. (Dr.) Nehal Gitesh Chitaliya, Sardar Vallabhbhai Patel Institute of Technology, Vasad 388 306, Gujarat, India. Prof. (Dr.) D.R. Prajapati, Department of Mechanical Engineering, PEC University of Technology,Chandigarh 160012, India. Dr. A. SENTHIL KUMAR, Postdoctoral Researcher, Centre for Energy and Electrical Power, Electrical Engineering Department, Faculty of Engineering and the Built Environment, Tshwane University of Technology, Pretoria 0001, South Africa. Prof. (Dr.)Vijay Harishchandra Mankar, Department of Electronics & Telecommunication Engineering, Govt. Polytechnic, Mangalwari Bazar, Besa Road, Nagpur- 440027, India. Prof. Varun.G.Menon, Department Of C.S.E, S.C.M.S School of Engineering, Karukutty,Ernakulam, Kerala 683544, India. Prof. (Dr.) U C Srivastava, Department of Physics, Amity Institute of Applied Sciences, Amity University, Noida, U.P-203301.India. Prof. (Dr.) Surendra Yadav, Professor and Head (Computer Science & Engineering Department), Maharashi Arvind College of Engineering and Research Centre (MACERC), Jaipur, Rajasthan, India. Prof. (Dr.) Sunil Kumar, H.O.D. Applied Sciences & Humanities Dehradun Institute of Technology, (D.I.T. School of Engineering), 48 A K.P-3 Gr. Noida (U.P.) 201308 Prof. Naveen Jain, Dept. of Electrical Engineering, College of Technology and Engineering, Udaipur-313 001, India. Prof. Veera Jyothi.B, CBIT, Hyderabad, Andhra Pradesh, India. Prof. Aritra Ghosh, Global Institute of Management and Technology, Krishnagar, Nadia, W.B. India Prof. Anuj K. Gupta, Head, Dept. 
of Computer Science & Engineering, RIMT Group of Institutions, Sirhind Mandi Gobindgarh, Punajb, India. Prof. (Dr.) Varala Ravi, Head, Department of Chemistry, IIIT Basar Campus, Rajiv Gandhi University of Knowledge Technologies, Mudhole, Adilabad, Andhra Pradesh- 504 107, India Prof. (Dr.) Ravikumar C Baratakke, faculty of Biology,Govt. College, Saundatti - 591 126, India. Prof. (Dr.) NALIN BHARTI, School of Humanities and Social Science, Indian Institute of Technology Patna, India. Prof. (Dr.) Shivanand S.Gornale , Head, Department of Studies in Computer Science, Government College (Autonomous), Mandya, Mandya-571 401-Karanataka, India.



Prof. (Dr.) Naveen.P.Badiger, Dept.Of Chemistry, S.D.M.College of Engg. & Technology, Dharwad-580002, Karnataka State, India. Prof. (Dr.) Bimla Dhanda, Professor & Head, Department of Human Development and Family Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana) India. Prof. (Dr.) Tauqeer Ahmad Usmani, Faculty of IT, Salalah College of Technology, Salalah, Sultanate of Oman. Prof. (Dr.) Naresh Kr. Vats, Chairman, Department of Law, BGC Trust University Bangladesh Prof. (Dr.) Papita Das (Saha), Department of Environmental Science, University of Calcutta, Kolkata, India. Prof. (Dr.) Rekha Govindan , Dept of Biotechnology, Aarupadai Veedu Institute of technology , Vinayaka Missions University , Paiyanoor , Kanchipuram Dt, Tamilnadu , India. Prof. (Dr.) Lawrence Abraham Gojeh, Department of Information Science, Jimma University, P.o.Box 378, Jimma, Ethiopia. Prof. (Dr.) M.N. Kalasad, Department of Physics, SDM College of Engineering & Technology, Dharwad, Karnataka, India. Prof. Rab Nawaz Lodhi, Department of Management Sciences, COMSATS Institute of Information Technology Sahiwal. Prof. (Dr.) Masoud Hajarian, Department of Mathematics, Faculty of Mathematical Sciences, Shahid Beheshti University, General Campus, Evin, Tehran 19839,Iran Prof. (Dr.) Chandra Kala Singh, Associate professor, Department of Human Development and Family Studies, College of Home Science, CCS, Haryana Agricultural University, Hisar- 125001 (Haryana) India Prof. (Dr.) J.Babu, Professor & Dean of research, St.Joseph's College of Engineering & Technology, Choondacherry, Palai,Kerala. Prof. (Dr.) Pradip Kumar Roy, Department of Applied Mechanics, Birla Institute of Technology (BIT) Mesra, Ranchi- 835215, Jharkhand, India. Prof. (Dr.) P. Sanjeevi kumar, School of Electrical Engineering (SELECT), Vandalur Kelambakkam Road, VIT University, Chennai, India. Prof. (Dr.) Debasis Patnaik, BITS-Pilani, Goa Campus, India. Prof. (Dr.) SANDEEP BANSAL, Associate Professor, Department of Commerce, I.G.N. College, Haryana, India. Dr. Radhakrishnan S V S, Department of Pharmacognosy, Faser Hall, The University of Mississippi Oxford, MS- 38655, USA. Prof. (Dr.) Megha Mittal, Faculty of Chemistry, Manav Rachna College of Engineering, Faridabad (HR), 121001, India. Prof. (Dr.) Mihaela Simionescu (BRATU), BUCHAREST, District no. 6, Romania, member of the Romanian Society of Econometrics, Romanian Regional Science Association and General Association of Economists from Romania Prof. (Dr.) Atmani Hassan, Director Regional of Organization Entraide Nationale Prof. (Dr.) Deepshikha Gupta, Dept. of Chemistry, Amity Institute of Applied Sciences,Amity University, Sec.125, Noida, India. Prof. (Dr.) Muhammad Kamruzzaman, Deaprtment of Infectious Diseases, The University of Sydney, Westmead Hospital, Westmead, NSW-2145. Prof. (Dr.) Meghshyam K. Patil , Assistant Professor & Head, Department of Chemistry,Dr. Babasaheb Ambedkar Marathwada University,Sub-Campus, Osmanabad- 413 501, Maharashtra, India. Prof. (Dr.) Ashok Kr. Dargar, Department of Mechanical Engineering, School of Engineering, Sir Padampat Singhania University, Udaipur (Raj.) Prof. (Dr.) Sudarson Jena, Dept. of Information Technology, GITAM University, Hyderabad, India Prof. (Dr.) Jai Prakash Jaiswal, Department of Mathematics, Maulana Azad National Institute of Technology Bhopal, India. Prof. (Dr.) S.Amutha, Dept. of Educational Technology, Bharathidasan University, Tiruchirappalli620 023, Tamil Nadu, India. Prof. (Dr.) R. 
HEMA KRISHNA, Environmental chemistry, University of Toronto, Canada. Prof. (Dr.) B.Swaminathan, Dept. of Agrl.Economics, Tamil Nadu Agricultural University, India. Prof. (Dr.) K. Ramesh, Department of Chemistry, C.B.I.T, Gandipet, Hyderabad-500075. India. Prof. (Dr.) Sunil Kumar, H.O.D. Applied Sciences &Humanities, JIMS Technical campus,(I.P. University,New Delhi), 48/4 ,K.P.-3,Gr.Noida (U.P.) Prof. (Dr.) G.V.S.R.Anjaneyulu, CHAIRMAN - P.G. BOS in Statistics & Deputy Coordinator UGC DRS-I Project, Executive Member ISPS-2013, Department of Statistics, Acharya Nagarjuna University, Nagarjuna Nagar-522510, Guntur, Andhra Pradesh, India.



Prof. (Dr.) Sribas Goswami, Department of Sociology, Serampore College, Serampore 712201, West Bengal, India. Prof. (Dr.) Sunanda Sharma, Department of Veterinary Obstetrics Y Gynecology, College of Veterinary & Animal Science,Rajasthan University of Veterinary & Animal Sciences,Bikaner334001, India. Prof. (Dr.) S.K. Tiwari, Department of Zoology, D.D.U. Gorakhpur University, Gorakhpur-273009 U.P., India. Prof. (Dr.) Praveena Kuruva, Materials Research Centre, Indian Institute of Science, Bangalore560012, INDIA Prof. (Dr.) Rajesh Kumar, Department Of Applied Physics, Bhilai Institute Of Technology, Durg (C.G.) 491001, India. Dr. K.C.Sivabalan, Field Enumerator and Data Analyst, Asian Vegetable Research Centre, The World Vegetable Centre, Taiwan. Prof. (Dr.) Amit Kumar Mishra, Department of Environmntal Science and Energy Research, Weizmann Institute of Science, Rehovot, Israel. Prof. (Dr.) Manisha N. Paliwal, Sinhgad Institute of Management, Vadgaon (Bk), Pune, India. Prof. (Dr.) M. S. HIREMATH, Principal, K.L.ESOCIETY’s SCHOOL, ATHANI Prof. Manoj Dhawan, Department of Information Technology, Shri Vaishnav Institute of Technology & Science, Indore, (M. P.), India. Prof. (Dr.) V.R.Naik, Professor & Head of Department, Mechancal Engineering, Textile & Engineering Institute, Ichalkaranji (Dist. Kolhapur), Maharashatra, India. Prof. (Dr.) Jyotindra C. Prajapati,Head, Department of Mathematical Sciences, Faculty of Applied Sciences, Charotar University of Science and Technology, Changa Anand -388421, Gujarat, India Prof. (Dr.) Sarbjit Singh, Head, Department of Industrial & Production Engineering, Dr BR Ambedkar National Institute of Technology,Jalandhar,Punjab, India. Prof. (Dr.) Professor Braja Gopal Bag, Department of Chemistry and Chemical Technology , Vidyasagar University, West Midnapore Prof. (Dr.) Ashok Kumar Chandra, Department of Management, Bhilai Institute of Technology, Bhilai House, Durg (C.G.) Prof. (Dr.) Amit Kumar, Assistant Professor, School of Chemistry, Shoolini University, Solan, Himachal Pradesh, India Prof. (Dr.) L. Suresh Kumar, Mechanical Department, Chaitanya Bharathi Institute of Technology, Hyderabad, India. Scientist Sheeraz Saleem Bhat, Lac Production Division, Indian Institute of Natural Resins and Gums, Namkum, Ranchi, Jharkhand, India. Prof. C.Divya , Centre for Information Technology and Engineering, Manonmaniam Sundaranar University, Tirunelveli - 627012, Tamilnadu , India. Prof. T.D.Subash, Infant Jesus College Of Engineering and Technology, Thoothukudi Tamilnadu, India. Prof. (Dr.) Vinay Nassa, Prof. E.C.E Deptt., Dronacharya.Engg. College, Gurgaon India. Prof. Sunny Narayan, university of Roma Tre, Italy. Prof. (Dr.) Sanjoy Deb, Dept. of ECE, BIT Sathy, Sathyamangalam, Tamilnadu-638401, India. Prof. (Dr.) Reena Gupta, Institute of Pharmaceutical Research, GLA University, Mathura, India. Prof. (Dr.) P.R.SivaSankar, Head Dept. of Commerce, Vikrama Simhapuri University Post Graduate Centre, KAVALI - 524201, A.P., India. Prof. (Dr.) Mohsen Shafiei Nikabadi, Faculty of Economics and Management, Industrial Management Department, Semnan University, Semnan, Iran. Prof. (Dr.) Praveen Kumar Rai, Department of Geography, Faculty of Science, Banaras Hindu University, Varanasi-221005, U.P. India. Prof. (Dr.) Christine Jeyaseelan, Dept of Chemistry, Amity Institute of Applied Sciences, Amity University, Noida, India. Prof. (Dr.) M A Rizvi, Dept. of Computer Engineering and Applications , National Institute of Technical Teachers' Training and Research, Bhopal M.P. India. 
Prof. (Dr.) K.V.N.R.Sai Krishna, H O D in Computer Science, S.V.R.M.College,(Autonomous), Nagaram, Guntur(DT), Andhra Pradesh, India. Prof. (Dr.) Ashok Kr. Dargar, Department of Mechanical Engineering, School of Engineering, Sir Padampat Singhania University, Udaipur (Raj.) Prof. (Dr.) Asim Kumar Sen, Principal , ST.Francis Institute of Technology (Engineering College) under University of Mumbai , MT. Poinsur, S.V.P Road, Borivali (W), Mumbai-400103, India. Prof. (Dr.) Rahmathulla Noufal.E, Civil Engineering Department, Govt.Engg.College-Kozhikode



Prof. (Dr.) N.Rajesh, Department of Agronomy, TamilNadu Agricultural University -Coimbatore, Tamil Nadu, India. Prof. (Dr.) Har Mohan Rai , Professor, Electronics and Communication Engineering, N.I.T. Kurukshetra 136131,India Prof. (Dr.) Eng. Sutasn Thipprakmas from King Mongkut, University of Technology Thonburi, Thailand. Prof. (Dr.) Kantipudi MVV Prasad, EC Department, RK University, Rajkot. Prof. (Dr.) Jitendra Gupta,Faculty of Pharmaceutics, Institute of Pharmaceutical Research, GLA University, Mathura. Prof. (Dr.) Swapnali Borah, HOD, Dept of Family Resource Management, College of Home Science, Central Agricultural University, Tura, Meghalaya, India. Prof. (Dr.) N.Nazar Khan, Professor in Chemistry, BTK Institute of Technology, Dwarahat-263653 (Almora), Uttarakhand-India. Prof. (Dr.) Rajiv Sharma, Department of Ocean Engineering, Indian Institute of Technology Madras, Chennai (TN) - 600 036,India. Prof. (Dr.) Aparna Sarkar,PH.D. Physiology, AIPT,Amity University , F 1 Block, LGF, Sector125,Noida-201303, UP ,India. Prof. (Dr.) Manpreet Singh, Professor and Head, Department of Computer Engineering, Maharishi Markandeshwar University, Mullana, Haryana, India. Prof. (Dr.) Sukumar Senthilkumar, Senior Researcher Advanced Education Center of Jeonbuk for Electronics and Information Technology, Chon Buk National University, Chon Buk, 561-756, SOUTH KOREA. . Prof. (Dr.) Hari Singh Dhillon, Assistant Professor, Department of Electronics and Communication Engineering, DAV Institute of Engineering and Technology, Jalandhar (Punjab), INDIA. . Prof. (Dr.) Poonkuzhali, G., Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, INDIA. . Prof. (Dr.) Bharath K N, Assistant Professor, Dept. of Mechanical Engineering, GM Institute of Technology, PB Road, Davangere 577006, Karnataka, INDIA. . Prof. (Dr.) F.Alipanahi, Assistant Professor, Islamic Azad University,Zanjan Branch, Atemadeyeh, Moalem Street, Zanjan IRAN Prof. Yogesh Rathore, Assistant Professor, Dept. of Computer Science & Engineering, RITEE, Raipur, India Prof. (Dr.) Ratneshwer, Department of Computer Science (MMV), Banaras Hindu University Varanasi-221005, India. Prof. Pramod Kumar Pandey, Assistant Professor, Department Electronics & Instrumentation Engineering, ITM University, Gwalior, M.P., India Prof. (Dr.)Sudarson Jena, Associate Professor, Dept.of IT, GITAM University, Hyderabad, India Prof. (Dr.) Binod Kumar,PhD(CS), M.Phil(CS),MIEEE,MIAENG, Dean & Professor( MCA), Jayawant Technical Campus(JSPM's), Pune, India Prof. (Dr.) Mohan Singh Mehata, (JSPS fellow), Assistant Professor, Department of Applied Physics, Delhi Technological University, Delhi Prof. Ajay Kumar Agarwal, Asstt. Prof., Deptt. of Mech. Engg., Royal Institute of Management & Technology, Sonipat (Haryana) Prof. (Dr.) Siddharth Sharma, University School of Management, Kurukshetra University, Kurukshetra, India. Prof. (Dr.) Satish Chandra Dixit, Department of Chemistry, D.B.S.College ,Govind Nagar,Kanpur208006, India Prof. (Dr.) Ajay Solkhe, Department of Management, Kurukshetra University, Kurukshetra, India. Prof. (Dr.) Neeraj Sharma, Asst. Prof. Dept. of Chemistry, GLA University, Mathura Prof. (Dr.) Basant Lal, Department of Chemistry, G.L.A. University, Mathura Prof. (Dr.) T Venkat Narayana Rao, C.S.E,Guru Nanak Engineering College, Hyderabad, Andhra Pradesh, India Prof. (Dr.) Rajanarender Reddy Pingili, S.R. International Institute of Technology, Hyderabad, Andhra Pradesh, India Prof. (Dr.) 
V.S.Vairale, Department of Computer Engineering, All India Shri Shivaji Memorial Society College of Engineering, Kennedy Road, Pune-411 001, Maharashtra, India Prof. (Dr.) Vasavi Bande, Department of Computer Science & Engineering, Netaji Institute of Engineering and Technology, Hyderabad, Andhra Pradesh, India Prof. (Dr.) Hardeep Anand, Department of Chemistry, Kurukshetra University Kurukshetra, Haryana, India. Prof. Aasheesh shukla, Asst Professor, Dept. of EC, GLA University, Mathura, India.



Prof. S.P.Anandaraj., CSE Dept, SREC, Warangal, India. Satya Rishi Takyar , Senior ISO Consultant, New Delhi, India. Prof. Anuj K. Gupta, Head, Dept. of Computer Science & Engineering, RIMT Group of Institutions, Mandi Gobindgarh, Punjab, India. Prof. (Dr.) Harish Kumar, Department of Sports Science, Punjabi University, Patiala, Punjab, India. Prof. (Dr.) Mohammed Ali Hussain, Professor, Dept. of Electronics and Computer Engineering, KL University, Green Fields, Vaddeswaram, Andhra Pradesh, India. Prof. (Dr.) Manish Gupta, Department of Mechanical Engineering, GJU, Haryana, India. Prof. Mridul Chawla, Department of Elect. and Comm. Engineering, Deenbandhu Chhotu Ram University of Science & Technology, Murthal, Haryana, India. Prof. Seema Chawla, Department of Bio-medical Engineering, Deenbandhu Chhotu Ram University of Science & Technology, Murthal, Haryana, India. Prof. (Dr.) Atul M. Gosai, Department of Computer Science, Saurashtra University, Rajkot, Gujarat, India. Prof. (Dr.) Ajit Kr. Bansal, Department of Management, Shoolini University, H.P., India. Prof. (Dr.) Sunil Vasistha, Mody Institute of Tecnology and Science, Sikar, Rajasthan, India. Prof. Vivekta Singh, GNIT Girls Institute of Technology, Greater Noida, India. Prof. Ajay Loura, Assistant Professor at Thapar University, Patiala, India. Prof. Sushil Sharma, Department of Computer Science and Applications, Govt. P. G. College, Ambala Cantt., Haryana, India. Prof. Sube Singh, Assistant Professor, Department of Computer Engineering, Govt. Polytechnic, Narnaul, Haryana, India. Prof. Himanshu Arora, Delhi Institute of Technology and Management, New Delhi, India. Dr. Sabina Amporful, Bibb Family Practice Association, Macon, Georgia, USA. Dr. Pawan K. Monga, Jindal Institute of Medical Sciences, Hisar, Haryana, India. Dr. Sam Ampoful, Bibb Family Practice Association, Macon, Georgia, USA. Dr. Nagender Sangra, Director of Sangra Technologies, Chandigarh, India. Vipin Gujral, CPA, New Jersey, USA. Sarfo Baffour, University of Ghana, Ghana. Monique Vincon, Hype Softwaretechnik GmbH, Bonn, Germany. Natasha Sigmund, Atlanta, USA. Marta Trochimowicz, Rhein-Zeitung, Koblenz, Germany. Kamalesh Desai, Atlanta, USA. Vijay Attri, Software Developer Google, San Jose, California, USA. Neeraj Khillan, Wipro Technologies, Boston, USA. Ruchir Sachdeva, Software Engineer at Infosys, Pune, Maharashtra, India. Anadi Charan, Senior Software Consultant at Capgemini, Mumbai, Maharashtra. Pawan Monga, Senior Product Manager, LG Electronics India Pvt. Ltd., New Delhi, India. Sunil Kumar, Senior Information Developer, Honeywell Technology Solutions, Inc., Bangalore, India. Bharat Gambhir, Technical Architect, Tata Consultancy Services (TCS), Noida, India. Vinay Chopra, Team Leader, Access Infotech Pvt Ltd. Chandigarh, India. Sumit Sharma, Team Lead, American Express, New Delhi, India. Vivek Gautam, Senior Software Engineer, Wipro, Noida, India. Anirudh Trehan, Nagarro Software Gurgaon, Haryana, India. Manjot Singh, Senior Software Engineer, HCL Technologies Delhi, India. Rajat Adlakha, Senior Software Engineer, Tech Mahindra Ltd, Mumbai, Maharashtra, India. Mohit Bhayana, Senior Software Engineer, Nagarro Software Pvt. Gurgaon, Haryana, India. Dheeraj Sardana, Tech. Head, Nagarro Software, Gurgaon, Haryana, India. Naresh Setia, Senior Software Engineer, Infogain, Noida, India. Raj Agarwal Megh, Idhasoft Limited, Pune, Maharashtra, India. Shrikant Bhardwaj, Senior Software Engineer, Mphasis an HP Company, Pune, Maharashtra, India. 
Vikas Chawla, Technical Lead, Xavient Software Solutions, Noida, India. Kapoor Singh, Sr. Executive at IBM, Gurgaon, Haryana, India. Ashwani Rohilla, Senior SAP Consultant at TCS, Mumbai, India. Anuj Chhabra, Sr. Software Engineer, McKinsey & Company, Faridabad, Haryana, India. Jaspreet Singh, Business Analyst at HCL Technologies, Gurgaon, Haryana, India.



TOPICS OF INTEREST

Topics of interest include, but are not limited to, the following:
• Computer and computational sciences
• Physics
• Chemistry
• Mathematics
• Actuarial sciences
• Applied mathematics
• Biochemistry, Bioinformatics
• Robotics
• Computer engineering
• Statistics
• Electrical engineering & Electronics
• Mechanical engineering
• Industrial engineering
• Information sciences
• Civil Engineering
• Aerospace engineering
• Chemical engineering
• Sports sciences
• Military sciences
• Astrophysics & Astronomy
• Optics
• Nanotechnology
• Nuclear physics
• Operations research
• Neurobiology & Biomechanics
• Acoustical engineering
• Geographic information systems
• Atmospheric sciences
• Educational/Instructional technology
• Biological sciences
• Education and Human resource
• Extreme engineering applications
• Environmental research and education
• Geosciences
• Social, Behavioral and Economic sciences
• Advanced manufacturing technology
• Automotive & Construction
• Geospatial technology
• Cyber security
• Transportation
• Energy and Power
• Healthcare & Hospitality
• Medical and dental sciences
• Pesticides
• Marine and thermal sciences
• Pollution
• Renewable sources of energy
• Industrial pollution control
• Hazardous and e-waste management
• Green technologies
• Artificial/computational intelligence
• Theory and models



TABLE OF CONTENTS (December-2013 to February-2014, Issue 5, Volume 1 & 2)

Issue 5, Volume 1 Paper Code

Paper Title

Page No.

AIJRSTEM 14-105

Identification of parental lines and rice hybrid (KRH-4) using protein and isozyme electrophoresis C.Pushpa, Rame Gowda, N.Nethra, K.M.Harinikumar, K. Uma Rani and N. Gangaraju

01-04

AIJRSTEM 14-108

The normalization and the boundary condition of a particle wave function moving in a field of an arbitrary one-dimensional potential A. Zh. Khachatrian, Al. G. Aleksanyan, V.A. Khoecyan, N. A. Aleksanyan

05-11

AIJRSTEM 14-109

Effect of Core Yarn Twist in DREF 2 Friction Spun Yarn Prof. S K Sett

12-15

AIJRSTEM 14-110

A survey of Clustering Technique for Saving the Energy in Wireless Sensor Networks S.Janakiraman, V.Sivakumar, K.Maharajan

16-20

AIJRSTEM 14-112

Generalizing the Concept of Membership Function of Fuzzy Sets on the Real line Using Distribution Theory G Velammal, J Mary Gracelet

21-25

AIJRSTEM 14-113

Theorems and General Inverse Value Approximation Formulae of Basic Trigonometric Functions Manaye Getu Tsige

26-30

AIJRSTEM 14-115

KNOWLEDGE AND ATTITUDE ASSESSMENT ON NO SCALPEL VASECTOMY AMONG MARRIED MEN IN SOUTH INDIA: A CORRELATION STUDY Kiran Rao Chavan

31-34

AIJRSTEM 14-116

On Consistency Issue of the Definition of Mean Free Path on the Basis of Measure Theoretic Probability and Speed of Light Postulate of Relativity Debashis Chatterjee

35-39

AIJRSTEM 14-122

FIXED POINT THEOREMS FOR WIDELY GENERALIZED MAPPINGS IN HILBERT SPACES AND APPLICATIONS Mamta Patel, Sanjay Sharma

40-43

AIJRSTEM 14-123

A Study of Clustering Techniques for Crop Prediction - A Survey Utkarsha P. Narkhede, K. P. Adhiya

44-48

AIJRSTEM 14-126

Performance comparison of various Ad-Hoc routing protocols of VANET in Indian city scenario Soumen Saha, Dr. Utpal Roy, Dr. D.D. Sinha

49-54

AIJRSTEM 14-132

Review of Multi-document Text Summarization Methods Soniya Patil, Ashish T. Bhole

55-59

AIJRSTEM 14-133

Application of Structural Time Series model for forecasting Gram production in India D.P. Singh, A.K. Thakur and D.S. Ram

60-62

AIJRSTEM 14-135

Implementation of NRCS-CN and Remote Sensing Technology for analysis of Rainfall-Runoff Relationship in Paghman Sub basin, Kabul, Afghanistan P. Rajasekhar, Folad Malwan, P. Vimal Kishore

63-69

AIJRSTEM 14-139

Digital Image Encryption with Discrete Fractional Transforms and Chaos: A Comparative Analysis Bharti Ahuja, Dr. Rajiv Srivastava, Rashmi Singh Lodhi

70-75

AIJRSTEM 14-141

DC-DC Converter for Interfacing Energy Storage Dr. S. W. Mohod, Mr. S. D. Deshmukh

76-81

AIJRSTEM 14-143

Application of ARIMA model for forecasting Paddy production in Bastar division of Chhattisgarh D.P. Singh, Prafull Kumar and K. Prabakaran

82-87

AIJRSTEM 14-147

Study and Review of Fuzzy Inference Systems for Decision Making and Control Swati Chaudhari, Manoj Patil

88-92

AIJRSTEM 14-152

Analysis of Rainfall-Runoff relationship in Shomali sub-basin using NRCS-CN and Remote Sensing P.Rajasekhar, P.Vimal Kishore, Folad Malwan

93-100

AIJRSTEM 14-153

A Systematic Combined Approach to Prolong the Life Time of Heterogeneous Wireless Sensor Networks: A review of existing techniques and an engineered better approach Aminur Rahman, Rajbhupinder Kaur, Jiaur Rehman Ahmed

101-110


Issue 5, Volume 2 Paper Code

Paper Title

Page No.

AIJRSTEM 14-158

A Prediction based Energy-Efficient Tracking Method in Sensor Networks Mr. J. Joseph Ignatious, Dr. S. Abraham Lincon

111-115

AIJRSTEM 14-160

UPPER VERTEX COVERING NUMBER OF A GRAPH D. K. Thakkar, A. A. Prajapati and J. V. Changela

116-118

AIJRSTEM 14-162

On Applied Mathematical Programming Techniques for Estimating Frontier Production Functions T. Suneetha, J. Prabhakara Naik, R. Md. Mastan Shareef and R. Abbaiah

119-127

AIJRSTEM 14-164

Effect of Resin and Thickness on Tensile Properties of Laminated Composites B.V.Babu Kiran, G. Harish

128-134

AIJRSTEM 14-166

Robust Color image segmentation using efficient Soft-computing techniques: A survey Janakiraman.S, J.Gowri

135-139

AIJRSTEM 14-167

Selection Of Industrial Robots using Complex Proportional Assessment Method Dayanand Gorabe, Dnyaneshwar Pawar, Nilesh Pawar

140-143

AIJRSTEM 14-170

Study and Review of Genetic Neural Approaches for Data Mining Nilakshi P. Waghulde, Nilima Patil

144-148

AIJRSTEM 14-174

Evaluation of Resistance Offered by 304 Stainless Steel Alloy Against Corrosion and Pitting in H3PO4- HCl Medium by Determination and Study of Corrosion Resistance Parameter Rita Khare

149-152

AIJRSTEM 14-175

EXPORT GROWTH AND DIVERSIFICATION OF SRI LANKA'S MAJOR PRODUCT SECTORS P.D. TALAGALA, T.S. TALAGALA

153-157

AIJRSTEM 14-178

Determination of Confined Aquifer Parameters by Sushil K. Singh Method Dr. Rajasekhar.P, Vimal Kishore.P, Mansoor Mansoori

158-163

AIJRSTEM 14-182

MULTI-AGENT MODEL OF HYBRID ENERGY SYSTEM IMPLEMENTATION Mohammad Asif Iqbal, Shivraj Sharma

164-168

AIJRSTEM 14-183

FAILURE ANALYSIS OF INTERNAL COMBUSTION ENGINE VALVES BY USING ANSYS Goli Udaya Kumar, Venkata Ramesh Mamilla

169-173

AIJRSTEM 14-185

Direct smelting of gold concentrates, a safer alternative to mercury amalgamation in small-scale gold mining operations Abbey C E, Al-Hassan S, Nartey R S and Amankwah R K

174-179

AIJRSTEM 14-190

Review: Apriori Algorithms and Association Rule Generation and Mining Minal Ingle, Nitin Suryavanshi

180-183

AIJRSTEM 14-193

AN OPTIMAL APPROACH FOR THE TASKS ALLOCATION BASED ON THE FUSION OF EC AND ITCC IN THE DISTRIBUTED COMPUTING SYSTEMS Abhilasha Sharma, Surbhi Gupta

184-189

AIJRSTEM 14-196

A SECURE HASHING SCHEME FOR IMAGE AUTHENTICATION USING ZERNIKE MOMENTS AND LOCAL FEATURES WITH HISTOGRAM FEATURES S.Deepa, A.Nagajothi

190-195

AIJRSTEM 14-206

A POWER BALANCING APPROACH FOR EFFICIENT ROUTE DISCOVERY BY SELECTING LINK STABILITY NEIGHBORS IN MOBILE ADHOC NETWORKS R. Nandhini, K. Malathi

196-201

AIJRSTEM 14-207

Studies on glass reinforced laminates based on amide oligomers – epoxy resin based thermosetting resin blends Mahendrasinh M Raj

202-207

AIJRSTEM 14-210

Relationship Between Daily Variation of CRI, Solar Activity and Interplanetary Conditions R.K. Tiwari, Ajay K. Pandey and Anuj Hundet

208-211

AIJRSTEM 14-215

CME Association of geomagnetic storms occurred in January-December 2011 Ruchi Nigam and R.K. Tiwari

212-214

AIJRSTEM 14-216

Service Innovation Discovery for Enterprise Business Intelligence Applications (SIDE-BIA) Prof. Dr. P.K. Srimani, F.N.A.Sc., Prof. Rajasekharaiah K.M.

215-219

AIJRSTEM 14-217

An Experimental Study of Sn-Doped-ZnO using sol-gel approach Anand, Rajesh Malik, Rajesh Khanna

220-223

AIJRSTEM 14-218

A Study of Tin Oxide Structure and Deposition under sol-gel Approach Suraksha, Rajesh Malik, Rajesh Khanna

224-227


American International Journal of Research in Science, Technology, Engineering & Mathematics

Available online at http://www.iasir.net

ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629 AIJRSTEM is a refereed, indexed, peer-reviewed, multidisciplinary and open access journal published by International Association of Scientific Innovation and Research (IASIR), USA (An Association Unifying the Sciences, Engineering, and Applied Research)

A Prediction based Energy-Efficient Tracking Method in Sensor Networks

Mr. J. Joseph Ignatious1, Dr. S. Abraham Lincon2
Department of Electronics and Instrumentation Engineering, Annamalai University, Annamalai Nagar, India.

Abstract - Energy is one of the most critical constraints for tracking applications in sensor networks. Object tracking is an important feature of the ubiquitous society and also a killer application of wireless sensor networks. An Object Tracking Sensor Network (OTSN) is mainly used to track certain objects in a monitored area and to report their locations to the application's users. On the other hand, OTSNs are well known for their high energy consumption when compared with other WSN applications. Based on the fact that the movements of tracked objects are often predictable, we propose a prediction-based energy-efficient tracking technique using sequential patterns, designed to achieve considerable reductions in the energy consumed by the OTSN while maintaining tolerable missing-rate levels. The intended method can not only decrease the missing rate by predicting the future movements of moving objects, but can also reduce energy consumption by reducing the number of nodes that participate in tracking (i.e., the majority of the sensor nodes stay in sleep mode) and by minimizing the communication overhead.

Index Terms - Object tracking, wireless sensor networks (WSNs), prediction based energy efficient technique, object tracking sensor network (OTSN).

I. INTRODUCTION

Recently, there has been increasing interest in deploying wireless sensor networks (WSNs) for real-life applications. An OTSN is mainly used to track certain objects in a monitored area and to report their positions to the application's users. Object tracking, which is also called target tracking, is a major field of research in WSNs and has many real-life applications, such as wildlife monitoring, security applications for buildings and compounds to avoid interference or trespassing, and international border monitoring for prohibited crossings. Additionally, object tracking is considered one of the most challenging applications in WSNs due to its application requirements, which place a heavy load on the network resources, mainly energy consumption. The main task of an object tracking sensor network (OTSN) is to track a moving object and to report its latest location in the monitored area to the application in an acceptably timely manner, and this dynamic process of sensing and reporting keeps the network's resources under heavy pressure. However, there has been very limited focus on the energy lost by the computing component, referred to as the microcontroller unit (MCU), and the sensing components [4]. Although the energy dissipated by the MCU and the sensing component is less than what is consumed by the radio component, it still represents a significant source of energy consumption in the sensor node. Therefore, our concern in this paper is to develop an energy-efficient OTSN that makes remarkable reductions in the energy consumed by both the MCU and the sensing components. The OTSN is considered one of the most energy-consuming applications of WSNs. Due to this fact, there is a necessity to develop energy-efficient techniques that adhere to the application requirements of an object-tracking system and reduce the total energy consumption of the OTSN while maintaining a tolerable missing-rate level [1].
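To make this energy breakdown concrete, the following minimal Python sketch estimates per-component energy for one second of node activity. The power figures and duty fractions are illustrative assumptions for a generic sensor node, not values reported in this paper; they merely show why the MCU and sensing components can remain a significant share of the budget even though the radio draws the most power per unit time.

# Hypothetical per-component power draw in milliwatts; these numbers are
# illustrative assumptions, not measurements from this paper.
POWER_MW = {
    "radio_tx": 60.0,  # radio transmitting
    "mcu": 8.0,        # microcontroller unit active
    "sensing": 5.0,    # sensing component active
    "sleep": 0.02,     # node in sleep mode
}

def energy_mj(schedule):
    # Energy in millijoules for a list of (component, seconds) activity pairs
    # (milliwatts x seconds = millijoules).
    return sum(POWER_MW[part] * seconds for part, seconds in schedule)

# One second in the life of a tracking node: sense and compute for 100 ms
# each, transmit a short report for 10 ms, and sleep the rest of the time.
second = [("sensing", 0.10), ("mcu", 0.10), ("radio_tx", 0.01), ("sleep", 0.79)]
total = energy_mj(second)
mcu_and_sensing = energy_mj([("sensing", 0.10), ("mcu", 0.10)])
print(f"total: {total:.2f} mJ, MCU+sensing share: {mcu_and_sensing / total:.0%}")

Under these assumed numbers the MCU and sensing components account for well over half of the per-second energy, which is consistent with the paper's claim that they deserve attention alongside the radio.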
In this paper, we present a prediction-based tracking technique built on a prediction model: an object tracking technique that provides the ability to predict an object's future movements so that it can be tracked with the least number of sensor nodes while the other sensor nodes in the network are kept in sleep mode. This accomplishes our goals while notably reducing the network's energy consumption. The Prediction-based Tracking technique using Sequential Patterns (PTSP) is based on the inherited patterns of the objects' movements in the network and the utilization of sequential patterns to predict which sensor node the moving object will be heading to next. Since the PTSP depends entirely on prediction, it is possible to miss some objects during the tracking process. The main contribution of this paper is the prediction technique used to predict the future location of a moving object. The remainder of this paper is organized as follows: Section II reviews the related work regarding the algorithms proposed for object tracking in WSNs. Section III presents our proposed tracking framework for moving objects. Section IV provides a performance study of our proposed schemes. Section V concludes this paper.
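The paper has not yet given its algorithm at this point, so the following is only a rough, hypothetical sketch of the underlying idea: a sequential-pattern predictor can be approximated by counting which node historically follows a short window of visited nodes, and then waking only the most likely successor. The class name, node identifiers, and the order-2 window are all assumptions made for illustration.

from collections import defaultdict

class NextNodePredictor:
    # A minimal order-2 frequency model standing in for mined sequential
    # patterns: it remembers how often each node followed a pair of nodes.
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, trajectory):
        # trajectory: ordered list of sensor-node ids visited by an object.
        for a, b, nxt in zip(trajectory, trajectory[1:], trajectory[2:]):
            self.counts[(a, b)][nxt] += 1

    def predict(self, last_two):
        # Return the node most likely to detect the object next, or None
        # when no pattern matches (a real network would then fall back to
        # waking all neighbors to recover the object, at a missing-rate cost).
        followers = self.counts.get(tuple(last_two))
        return max(followers, key=followers.get) if followers else None

predictor = NextNodePredictor()
predictor.train(["n1", "n2", "n5", "n6"])
predictor.train(["n3", "n2", "n5", "n6"])
predictor.train(["n1", "n2", "n5", "n9"])
print(predictor.predict(["n2", "n5"]))  # -> n6: only n6 needs to wake up

The design choice this sketch reflects is the one stated in the abstract: because only the predicted successor wakes up, most nodes sleep, trading a small risk of missing the object for a large energy saving.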





II. RELATED WORK Different types of techniques have been proposed in the literature for object tracking in WSNs [3]. In that work, object-tracking techniques are classified into five main classes, among them naive, scheduled monitoring, continuous monitoring, and dynamic clustering. In the naive technique, all the sensor nodes stay in active mode, monitor their detection areas all the time, and send the detected data to the base station periodically; therefore, the energy consumption is high [3],[4]. The scheduled monitoring scheme assumes that all the sensor nodes and the base station are well synchronized, and all the sensor nodes can go to sleep and only wake up when it comes to their time (turn) to monitor their detection areas and report sensed results. Thus, in this scheme, all the sensor nodes will be active for X seconds and then go to sleep for (T − X) seconds. However, in practice the assumption that all sensor nodes and the base station are well synchronized is very difficult to realize, and the number of sensor nodes involved in object tracking is more than necessary; hence the energy consumption is high [3],[4]. In the continuous monitoring scheme, instead of having all the sensor nodes in the network wake up periodically to sense the whole area, only the one node that has the object in its detection area is activated. Whenever a node wakes up, it keeps monitoring the object until it enters a neighboring cell. In order to reduce the missing rate, the active sensor has to stay awake as long as the object is in its detection area. This causes unbalanced energy consumption among the sensor nodes and thus reduces the network lifetime [2],[3]. Another scheme is dynamic clustering, in which the network is formed of powerful CH nodes and low-end SN nodes. The CH node that has the strongest sensed signal of the target becomes the cluster head and organizes nearby SN nodes into a cluster. The SN nodes in the cluster transmit the sensed data to the cluster head, and the cluster head then sends digested data to the base station after data aggregation. Since there is no rotation of the cluster head after cluster formation, and the object travels at high speed, the cluster head consumes a great deal of energy [2],[3]. These techniques expend high energy levels to track the object and also cannot predict the movement of the object. A number of works have been proposed in the literature for using sequential patterns in WSNs. Most of these works aim to generate hidden patterns in the phenomena under monitoring, such as temperature and medical data [13],[14]. However, some works have been proposed to generate patterns about the sensors' behavior. In [10], the authors proposed sensor association rules in an attempt to capture the temporal correlations between the sensor nodes in a particular WSN [6]. Sensor association rules generate patterns concerning the sensor nodes; thus, these patterns can be used to improve the quality of service (QoS) of the network, predict the source of missed objects, or estimate the value of a missed reading. III. PREDICTION-BASED TRACKING TECHNIQUE USING SEQUENTIAL PATTERNS An object-tracking sensor network (OTSN) is widely used to track certain objects in a monitored area and to report their locations to the application's users. On the other hand, OTSNs are well known for their energy consumption when compared with other WSN applications.
Due to this fact, there is a necessity to develop energy-efficient techniques that adhere to the application requirements of an object-tracking system and reduce the total energy consumption of the OTSN while maintaining a tolerable missing-rate level [4]. The prediction-based tracking technique using sequential patterns (PTSP) is an object tracking technique that revolves around the ability to predict objects' future movements so as to track them with the minimum number of sensor nodes while keeping the other sensor nodes in the network in sleep mode. This achieves our goals while significantly reducing the network's energy consumption. The proposed PTSP is based on two stages: a. Sequential Pattern Generation; b. Object Tracking and Monitoring. These stages use three mechanisms for tracking to reduce energy usage, as shown in Figure 1. When an object enters the monitored area, the network reads the object and predicts its future movement by means of these mechanisms [1]. 1. Operation of the PTSP Mechanisms The PTSP consists of three parts: 1. a prediction model that anticipates the future movement of an object, so that only the sensor nodes expected to discover the object are activated; 2. a wake-up mechanism that, based on heuristics taking both energy and performance into account, determines which nodes should be activated and when; 3. a recovery mechanism initiated only when the network loses track of an object. Prediction algorithm at sensor nodes. Incoming Message: HistMsg(Hist). Local Variables: SenRead, Pred. System Functions: Predictor(). Procedure:





1: {Once the object enters the detection area, the sensor predicts the object's movement from its history}
2: Pred ← Predictor(Hist)
3: while object is inside the detection area do
4:   monitor the object, record the sensor readings to SenRead
5: end while
6: if (SenRead ≠ Pred) then
7:   Send UpdateMsg(SenRead) to cluster head
8: end if
9: {Calculate the object's movement history from the previous history and the movement in its detection area}
10: Hist ← (SenRead, Hist)
11: send HistMsg(Hist) to destination node
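To make the listing concrete, here is a minimal Python sketch of the per-node step. It is our illustration under assumptions, not the paper's implementation: the names predictor, send_update and send_history are hypothetical stand-ins for the message handlers above, and the frequency-based predictor stands in for the paper's sequential-pattern predictor.

```python
# Minimal sketch of the per-node PTSP step (illustrative names only).
from collections import Counter

def predictor(hist):
    """Predict the next reading from a history of readings (node IDs)."""
    if len(hist) < 2:
        return hist[-1] if hist else None
    cur = hist[-1]
    succ = [hist[i + 1] for i in range(len(hist) - 1) if hist[i] == cur]
    return Counter(succ).most_common(1)[0][0] if succ else cur

def on_object_entry(hist, sensed_track, send_update, send_history):
    """One pass of the listing: predict, monitor, report, forward history."""
    pred = predictor(hist)                  # step 2
    sen_read = list(sensed_track)           # steps 3-5: record readings
    if sen_read and sen_read[-1] != pred:   # step 6: prediction missed
        send_update(sen_read)               # step 7: tell the cluster head
    hist = hist + sen_read                  # steps 9-10: extend the history
    send_history(hist)                      # step 11: pass to the next node
    return hist

# Example: history says the object usually goes 1 -> 2 -> 3, but it went to 5.
h = on_object_entry([1, 2, 3, 1, 2], [5], print, lambda m: None)
```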

Figure 1: Flow chart for PTSP. Wake-up Mechanism: No matter what kind of heuristics we use, we do not expect the prediction to achieve 100% accuracy. In this case, a prediction error means a missed object, which causes excessive energy overhead for re-locating the object. To accommodate prediction errors, a set of target nodes is woken up to help capture the object, instead of only one destination node [9]. We propose a wake-up mechanism [4] that decides the membership of the target nodes based on different levels of conservativeness, as shown in Figure 2. SEQUENTIAL PATTERN GENERATION: The large log of data collected from the sensor network is aggregated at the sink in a database, yielding the inherited behavioral patterns of object movement in the monitored area. Based on these data, the sink is able to generate the sequential patterns, which it then deploys to the sensor nodes in the network. This allows the sensor nodes to predict the future movements of a moving object in their detection areas [10]. OBJECT TRACKING AND MONITORING: In the second stage, the actual tracking of moving objects starts. Object tracking consists of two stages: sensor node activation occurs when the next sensor that should wake up is decided, whereas missing object recovery, the second stage, involves locating missing objects. This stage works based on two mechanisms: 1. the sensor node activation mechanism, which uses the sequential patterns to predict which node(s) should be activated to keep continuous track of the moving object; 2. the missing object recovery mechanism, which is used to find missing objects in case the activated node is not able to locate an object in its detection area. IV. PERFORMANCE EVALUATION To evaluate this technique, different scenarios and settings have been implemented using the GloMoSim simulator. The simulation experiments were carried out in an OTSN of 55 logical sensor nodes in a 1000 × 1000 m² monitored region. It is assumed that each sensor node has a coverage range of 15 m. The network is based on a hexagon topology, i.e., sensor nodes are evenly placed in the area such that each sensor node has a hexagon-shaped detection area. The default speed of the tracked object is 5 m/s; Ewake = 0.5 mW, Esleep = 0.05 mW, TS = 100 seconds, X = 180 seconds, T = 210 seconds.
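The pattern-generation stage can be sketched in a few lines. The fragment below is an assumption-laden stand-in for the paper's sequential-pattern miner, not the authors' algorithm: it builds first-order transition counts from trajectories logged at the sink and derives wake-up target sets at three illustrative conservativeness levels.

```python
# Illustrative stand-in for sequential-pattern generation at the sink,
# plus wake-up target selection (cf. the wake-up mechanism above).
from collections import defaultdict, Counter

def mine_transitions(trajectories):
    counts = defaultdict(Counter)
    for path in trajectories:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1          # node a was followed by node b
    return counts

def wakeup_targets(counts, node, level, neighbors):
    ranked = [n for n, _ in counts[node].most_common()]
    if level == "aggressive":          # cheapest, highest missing risk
        return ranked[:1]
    if level == "moderate":            # every successor ever observed
        return ranked or sorted(neighbors)
    return sorted(neighbors)           # conservative: wake all neighbors

logs = [[1, 2, 3, 4], [1, 2, 5], [1, 2, 3, 6]]
t = mine_transitions(logs)
print(wakeup_targets(t, 2, "aggressive", {1, 3, 5, 7}))    # [3]
print(wakeup_targets(t, 2, "conservative", {1, 3, 5, 7}))  # [1, 3, 5, 7]
```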





We have assumed that an object may choose a random path; the ratio of the key paths to the random paths is 3:1. The sampling duration is 210 ms (X), and the OTSN sends a report regarding the location of the moving object every 2100 ms (T) to the application. Each simulation experiment lasts 280 s and includes more than one object. As for the energy consumption, we have adopted the WINS energy consumption figures for sensor nodes used in PES [5] (see Table I). We have only included the MCU and the sensing components' energy consumption, since our proposed tracking technique focuses on reducing the energy consumption of those two components; the energy consumption of the radio component will be the subject of our future research. We will show different charts depicting performance under variations of the preceding parameters, and we will thoroughly analyze their effect on the results. Two metrics have been used in the performance analysis. 1) Total energy consumed: the amount of energy consumed by the whole network to monitor moving objects, including the active and sleep modes, during the simulation. 2) Missing rate: the percentage of missing reports out of the total number of reports that should have been received by the application. Since no prediction is 100% accurate, this metric is necessary as a basis for comparison. Energy Consumption Analysis: Here we have tested and compared PTSP against the other basic tracking schemes, i.e., SM and CM, in the context of network workload. The experiments were conducted by increasing the number of moving objects from one to ten. The naive technique, in which all nodes participate, consumes 2.75 W. Continuous monitoring, in which all nodes participate but not continuously, consumes 0.39 W. Scheduled monitoring consumes 0.32 W. The proposed PTSP technique consumes 0.2709 mW for tracking the object, which reduces energy consumption by 15.5%. As a result, if the network has a large number of moving objects, then the most suitable scheme would be SM, as shown in Figure 5. In the case of CM and PTSP, an increase in the number of objects means an increase in the number of active sensor nodes and, thus, higher energy consumption levels, as shown in Figure 3 and Figure 4.

Table I. Tracking schemes compared (S = number of sensor nodes)

Scheme   Nodes Involved        Continuous   Energy Consumption
Naive    All (= S)             Yes          Ewake × TS × S
SM       All (= S)             No           ((Ewake × X + Esleep × (T − X)) × (TS/T)) × S
CM       One for each object   Yes          Ewake × TS + Esleep × TS × (S − 1)
PTSP     One for each object   No           Ewake × (TS/T) + Esleep × (TS × S − (TS/T) × X)
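The expressions in Table I can be evaluated directly. The sketch below plugs in the parameter values quoted in Section IV (S = 55, Ewake = 0.5 mW, Esleep = 0.05 mW, TS = 100 s, X = 180 s, T = 210 s); it restates only the table's algebra, so its outputs need not match the simulator figures reported above.

```python
# Evaluating the Table I energy expressions with the quoted parameters.
S, E_wake, E_sleep = 55, 0.5, 0.05   # nodes, mW, mW
TS, X, T = 100.0, 180.0, 210.0       # seconds

naive = E_wake * TS * S
sm    = (E_wake * X + E_sleep * (T - X)) * (TS / T) * S
cm    = E_wake * TS + E_sleep * TS * (S - 1)
ptsp  = E_wake * (TS / T) + E_sleep * (TS * S - (TS / T) * X)

for name, e in [("Naive", naive), ("SM", sm), ("CM", cm), ("PTSP", ptsp)]:
    print(f"{name:6s} {e:10.1f} mW*s")
```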

Recovery Mechanism Analysis: In this experiment, we have evaluated the three previously explained recovery mechanisms (source recovery, destination recovery, and all-neighbors recovery) in terms of energy consumption. Source recovery proved the most energy-conservative of the three, and we have therefore chosen it as our recovery mechanism of choice in all the previous experiments.

Figure 3: Illustration of energy consumption

Figure 4: Illustration of energy throughput





Figure 5: Illustration of tracking sensor nodes. V. CONCLUSION A prediction-based energy-efficient technique has been proposed for tracking objects in wireless sensor networks; it reduces energy consumption by minimizing the participation of nodes. PTSP utilizes the sensors' sequential patterns to produce accurate predictions of the future movements of a given object. These sequential patterns are continuously evaluated and updated to provide the prediction mechanism with the latest and most accurate predictions. We have simulated our proposed tracking scheme (PTSP) along with two basic tracking schemes for comparison purposes. The experiments mainly tested the performance of PTSP and the other tracking schemes in terms of the total energy consumed by the network during the simulation period, including the active- and sleep-mode energy consumption of each sensor node in the network, and the missing rate, which represents the ratio of missing reports to the total number of reports received by the application. Moreover, the simulation results have verified that PTSP outperforms all the other tracking schemes by keeping a low energy consumption level while maintaining an adequate missing rate. REFERENCES

[1] Samer Samarah, Muhannad Al-Hajri, and Azzedine Boukerche, "A Predictive Energy-Efficient Technique to Support Object-Tracking Sensor Networks," IEEE Transactions on Vehicular Technology, vol. 60, no. 2, February 2011.
[2] Y. Xu and W.-C. Lee, "On localized prediction for power efficient object tracking in sensor networks," in Proc. 1st Int. Workshop Mobile Distrib. Comput., 2003, pp. 434-439.
[3] G.-Y. Jin, X.-Y. Lu, and M.-S. Park, "Dynamic clustering for object tracking in wireless sensor networks," in Proc. 3rd Int. Symp. UCS, Seoul, Korea, 2006, pp. 200-209.
[4] W. Heinzelman, A. Chandrakasan, and H. Balakrishnan, "Energy-efficient communication protocol for wireless microsensor networks," in Proc. 33rd Hawaii Conf. Syst. Sci., 2000, pp. 3005-3014.
[5] Y. Xu, J. Winter, and W.-C. Lee, "Prediction-based strategies for energy saving in object tracking sensor networks," in Proc. IEEE Int. Conf. Mobile Data Manage., Berkeley, CA, 2004, pp. 346-357.
[6] F. Zhao and L. J. Guibas, Wireless Sensor Networks: An Information Processing Approach. San Mateo, CA: Morgan Kaufmann, 2002.
[7] W. Alsalih, H. Hassanein, and S. Akl, "Placement of multiple mobile data collectors in wireless sensor networks," Ad Hoc Netw., vol. 8, no. 4, pp. 378-390, Jun. 2010.
[8] A. Manjeshwar and D. P. Agrawal, "TEEN: A routing protocol for enhanced efficiency in wireless sensor networks," in Proc. 15th Int. Parallel Distrib. Process. Symp., San Francisco, CA, 2001, pp. 2009-2015.
[9] S. Balasubramanian, I. Elangovan, S. K. Jayaweera, and K. R. Namuduri, "Distributed and collaborative tracking for energy-constrained ad-hoc wireless sensor networks," in Proc. Wireless Commun. Netw. Conf., Atlanta, GA, 2004, pp. 1732-1737.
[10] A. Boukerche and S. Samarah, "A novel algorithm for mining association rules in wireless ad hoc sensor networks," IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 7, pp. 865-877, Jul. 2008.
[11] S. Samarah and A. Boukerche, "Chronological tree: A compressed structure for mining behavioral patterns in wireless sensor networks," J. Interconnection Netw., vol. 9, no. 3, pp. 255-276, Sep. 2008.
[12] H. G. Goh, M. L. Sim, and H. T. Ewe, "Energy efficient routing for wireless sensor networks with grid topology," in Proc. Int. Fed. Inf. Process., Santiago de Chile, Chile, 2006, pp. 834-843.
[13] M. Naderan, M. Dehghan, and H. Pedram, "Mobile object tracking techniques in wireless sensor networks," in Proc. Int. Conf. Ultra Modern Telecommun., St. Petersburg, Russia, 2009, pp. 1-8.
[14] K. Romer, "Distributed mining of spatio-temporal event patterns in sensor networks," in Proc. 2nd IEEE Int. Conf. Distrib. Comput. Sensor Syst., San Francisco, CA, 2006, pp. 103-116.

J. Joseph Ignatious received his BE degree in Electronics and Communication Engineering in April 2005 and the ME degree in Process Control and Instrumentation from Annamalai University, Chidambaram, in Nov. 2002. Since Feb. 2011 he has been pursuing the PhD degree at the Division of Instrumentation Engineering, Annamalai University.

S. Abraham Lincon obtained his B.E. degree in Electronics and Instrumentation Engineering in 1984, received an M.E. degree in Power System Engineering in 1987, and another M.E. degree in Process Control and Instrumentation Engineering in 2000 from Annamalai University, Chidambaram. Presently he is working as a professor in the Department of Instrumentation Engineering at Annamalai University. His areas of research are Process Control, Fault Detection and Diagnosis, and Multivariable Control.





UPPER VERTEX COVERING NUMBER OF A GRAPH
D. K. Thakkar1, A. A. Prajapati2 and J. V. Changela3
1 Department of Mathematics, Saurashtra University Campus, Rajkot - 360 005, INDIA.
2 Mathematics Department, Government Engineering College, Jhalod Road, Dahod - 389 151, INDIA.
3 Mathematics Department, Parul Institute of Technology, Limda, Ta. Vagodia - 391 760, INDIA.

Abstract: In this paper we consider upper vertex covering sets of a graph and its upper vertex covering number. We prove that the upper vertex covering number of a graph does not increase when a vertex is removed from the graph. We also prove a necessary and sufficient condition under which this number does not change. We also consider well covered graphs and prove some interesting results. We further prove that if G is an approximately well dominated graph then G is either well covered or approximately well covered.
Keywords: upper vertex covering number, vertex covering number, graphs, well covered graph, well dominated graph, approximately well dominated graph.

AMS Subject Classification (2010): 05C69, 05C70.
I. Introduction
In this paper we consider vertex covering sets in graphs. We will define sets which are in fact minimal vertex covering sets with maximum cardinality. We will find conditions under which the upper vertex covering number of a graph decreases or remains the same. Before that we will prove that this number never increases when a vertex is removed. Well covered graphs have been studied in the past [2]. However, we consider these graphs here for a different reason.
II. Preliminaries
Definition: 2.1 [4] A subset S of V(G) is said to be a vertex covering set if for every edge of the graph at least one end vertex of it is in S.
Definition: 2.2 [4] A vertex covering set S is said to be a minimal vertex covering set if S − {v} is not a vertex covering set for every v in S.
Definition: 2.3 A vertex covering set with minimum cardinality is called a minimum vertex covering set and is denoted as an α₀-set. The cardinality of a minimum vertex covering set of a graph G is called the vertex covering number of G and is denoted as α₀(G).
Definition: 2.4 A minimal vertex covering set with maximum cardinality is called an α₀⁺-set. The cardinality of an α₀⁺-set is called the upper vertex covering number of the graph G and is denoted as α₀⁺(G). Obviously α₀(G) ≤ α₀⁺(G). In this paper we will consider only simple graphs with finite vertex sets.
III. Main Result
First we mention the following notations [3].

The above sets are mutually disjoint and their union is V(G). First, we prove that for any graph G, one of these sets is empty.
Theorem: 3.1 Let G be a graph and v ∈ V(G); then … [3]




Theorem: 3.2 Let G be a graph and v be a vertex of G such that … ; then … [3]
Now we prove that the upper vertex covering number does not increase when a vertex is removed from the graph.
Theorem: 3.3 Let G be a graph and v ∈ V(G). Then α₀⁺(G − v) ≤ α₀⁺(G).
Proof: Let S be an α₀⁺-set of G − v. If all the neighbours of v are in S, then S is a minimal vertex covering set of G and therefore |S| ≤ α₀⁺(G), and thus α₀⁺(G − v) ≤ α₀⁺(G). If some neighbour of v is not in S, then S ∪ {v} is a minimal vertex covering set of the graph G. Therefore |S| < |S ∪ {v}| ≤ α₀⁺(G). That is, α₀⁺(G − v) ≤ α₀⁺(G).
We define the following symbols.

We now prove the following theorem.
Theorem: 3.4 Let G be a graph and v ∈ V(G). Then α₀⁺(G − v) = α₀⁺(G) if and only if there is an α₀⁺-set of G not containing v which is also an α₀⁺-set of G − v.
Proof: Suppose that α₀⁺(G − v) = α₀⁺(G). Let S be any α₀⁺-set of G − v. If some neighbour of v is not in S, then S ∪ {v} is a minimal vertex covering set of G and hence |S| + 1 ≤ α₀⁺(G). That is, α₀⁺(G − v) < α₀⁺(G), which contradicts our assumption. Thus, all neighbours of v must be in S. Then, as proved in the previous theorem, S is a minimal vertex covering set of G. If S is not an α₀⁺-set of G then |S| < α₀⁺(G); that is, α₀⁺(G − v) < α₀⁺(G), which is a contradiction. Hence, S is an α₀⁺-set of G. Also v ∉ S; thus S is the required α₀⁺-set. Conversely, suppose S is an α₀⁺-set of G not containing v such that S is also an α₀⁺-set of G − v; then α₀⁺(G − v) = |S| = α₀⁺(G). Thus, α₀⁺(G − v) = α₀⁺(G).
Corollary: 3.5 Let G be a graph and v ∈ V(G). Then α₀⁺(G − v) < α₀⁺(G) if and only if whenever S is an α₀⁺-set of G not containing v, S is not an α₀⁺-set of G − v.
Example: 3.6 Consider the wheel graph G with six vertices (Figure 1), with centre 0 and rim vertices 1, 2, 3, 4, 5. Then α₀⁺(G) = 5. Let v = 0; then G − v is the cycle on the rim vertices and α₀⁺(G − v) = 3. Thus, α₀⁺(G − v) < α₀⁺(G).

Figure: 1 The wheel graph on six vertices, with centre 0 and rim vertices 1, 2, 3, 4, 5.
In fact, {1, 2, 3, 4, 5} is an α₀⁺-set of G not containing 0 such that it is not an α₀⁺-set of G − 0.
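The numbers in Example 3.6 can be verified by exhaustive search. The following Python sketch (our illustration, not part of the original paper) enumerates the minimal vertex covering sets of the wheel graph and of the cycle obtained by deleting the centre vertex, and reports the minimum and maximum cardinalities, i.e., α₀ and α₀⁺.

```python
# Brute-force check of Example 3.6: wheel W6 (hub 0, rim 1..5) and W6 - 0 = C5.
from itertools import combinations

def cover_numbers(vertices, edges):
    def covers(s):
        return all(u in s or v in s for u, v in edges)
    def minimal(s):
        return covers(s) and all(not covers(s - {x}) for x in s)
    sizes = [len(s) for r in range(len(vertices) + 1)
             for c in combinations(vertices, r)
             for s in [set(c)] if minimal(s)]
    return min(sizes), max(sizes)   # (covering number, upper covering number)

rim = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
w6 = rim + [(0, i) for i in range(1, 6)]
print(cover_numbers(set(range(6)), w6))        # (4, 5) for the wheel
print(cover_numbers(set(range(1, 6)), rim))    # (3, 3) for the cycle
```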

IV. Well Covered Graphs
Definition: 4.1 [4] A graph G is said to be well covered if any two minimal vertex covering sets have the same cardinality. Equivalently, a graph G is well covered if α₀(G) = α₀⁺(G). For example, complete graphs are well covered.
Definition: 4.2 A graph G is said to be approximately well covered if α₀⁺(G) = α₀(G) + 1. For example, the Petersen graph is approximately well covered.
Definition: 4.3 [1] A graph G is said to be well dominated if any two minimal dominating sets have the same cardinality. Equivalently, a graph G is well dominated if γ(G) = Γ(G).
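For small graphs, Definitions 4.1 and 4.2 can be checked mechanically. The sketch below (again our illustration, exponential in the number of vertices) classifies a graph by comparing the extreme cardinalities of its minimal vertex covering sets.

```python
# Classify a small graph as well covered / approximately well covered.
from itertools import combinations

def classify(vertices, edges):
    def covers(s):
        return all(u in s or v in s for u, v in edges)
    sizes = [r for r in range(len(vertices) + 1)
             for c in combinations(vertices, r)
             if covers(set(c))
             and all(not covers(set(c) - {x}) for x in c)]
    lo, hi = min(sizes), max(sizes)
    if hi == lo:
        return "well covered"
    return "approximately well covered" if hi == lo + 1 else f"gap {hi - lo}"

# The complete graph K4: every minimal vertex cover has exactly 3 vertices.
k4 = [(a, b) for a, b in combinations(range(4), 2)]
print(classify(set(range(4)), k4))   # well covered
```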





Definition: 4.4 [1] A graph G is said to be approximately well dominated if Γ(G) = γ(G) + 1.
Theorem: 4.5 Let G be a well covered graph and v ∈ V(G). Then
1) G − v is well covered or approximately well covered.
2) If α₀(G − v) = α₀(G) then α₀⁺(G − v) = α₀⁺(G) and G − v is well covered.
3) If α₀(G − v) = α₀(G) − 1 then either G − v is well covered and α₀⁺(G − v) = α₀⁺(G) − 1, or α₀⁺(G − v) = α₀⁺(G).
Proof: 1) α₀(G) − 1 ≤ α₀(G − v) ≤ α₀(G). Also α₀(G − v) ≤ α₀⁺(G − v) ≤ α₀⁺(G) = α₀(G). Hence α₀⁺(G − v) − α₀(G − v) ≤ 1; if the difference is 0 then G − v is well covered, and if it is 1 then G − v is approximately well covered.
2) In this case α₀(G) = α₀(G − v) ≤ α₀⁺(G − v) ≤ α₀⁺(G) = α₀(G). Therefore α₀⁺(G − v) = α₀⁺(G). Thus, G − v is well covered and α₀⁺(G − v) = α₀⁺(G).
3) Here α₀(G) − 1 = α₀(G − v) ≤ α₀⁺(G − v) ≤ α₀⁺(G). Therefore α₀⁺(G − v) = α₀⁺(G) − 1 or α₀⁺(G − v) = α₀⁺(G). Thus, either G − v is well covered and α₀⁺(G − v) = α₀⁺(G) − 1, or α₀⁺(G − v) = α₀⁺(G).
Theorem: 4.6 Let G be an approximately well covered graph and v ∈ V(G). Then
1) If α₀(G − v) = α₀(G) then G − v is either well covered or approximately well covered.
2) If α₀(G − v) = α₀(G) − 1 then G − v is either well covered or approximately well covered, or α₀⁺(G − v) = α₀(G − v) + 2.
Proof: 1) α₀(G − v) = α₀(G). Also α₀(G − v) ≤ α₀⁺(G − v) ≤ α₀⁺(G) = α₀(G) + 1. Thus, if α₀⁺(G − v) = α₀(G − v) then G − v is well covered, or if α₀⁺(G − v) = α₀(G − v) + 1 then G − v is approximately well covered.
2) If α₀(G − v) = α₀(G) − 1 then α₀(G − v) ≤ α₀⁺(G − v) ≤ α₀⁺(G) = α₀(G − v) + 2. If α₀⁺(G − v) = α₀(G − v) then G − v is well covered; if α₀⁺(G − v) = α₀(G − v) + 1 then G − v is approximately well covered; otherwise α₀⁺(G − v) = α₀(G − v) + 2.
Theorem: 4.7 If a graph G is approximately well dominated then G is either well covered or approximately well covered.
Proof: Since G is approximately well dominated, Γ(G) = γ(G) + 1. Now every maximal independent set is a minimal dominating set. Therefore the cardinality of every maximal independent set is equal to γ(G) or γ(G) + 1. Therefore β₀(G) = γ(G) or β₀(G) = γ(G) + 1, where β₀(G) denotes the independence number (because a maximum independent set is a maximal independent set and hence a minimal dominating set). Write i(G) for the minimum cardinality of a maximal independent set, so that γ(G) ≤ i(G) ≤ β₀(G); since the complements of maximal independent sets are exactly the minimal vertex covering sets, α₀(G) = |V(G)| − β₀(G) and α₀⁺(G) = |V(G)| − i(G).
Case I: β₀(G) = γ(G). Then from the above inequality i(G) = γ(G) = β₀(G). Therefore α₀⁺(G) = α₀(G). Thus the graph G is well covered.
Case II: β₀(G) = γ(G) + 1. Again γ(G) ≤ i(G) ≤ β₀(G). If i(G) = γ(G) + 1 then i(G) = β₀(G), and by the argument in Case I, G is well covered. Suppose i(G) = γ(G); then β₀(G) − i(G) = 1. Therefore α₀⁺(G) − α₀(G) = 1. Therefore α₀⁺(G) = α₀(G) + 1. Hence, G is approximately well covered.
V. References

[1] A. Finbow, B. Hartnell and R. Nowakowski, "Well Dominated Graphs: a collection of well covered ones," Ars Combin., 25 (1988), 5-10.
[2] D. K. Thakkar, B. M. Kakrecha and G. J. Vala, "Well Dominated and Approximately Well Dominated Graphs," Elixir Discrete Mathematics, 53 (2012), 11973-11975.
[3] D. K. Thakkar and J. C. Bosamiya, "Vertex Covering Number of a Graph," Mathematics Today, Vol. 27, June 2011, 30-35.
[4] T. W. Haynes, S. T. Hedetniemi and P. J. Slater, "Fundamentals of Domination in Graphs," Marcel Dekker, Inc., 1998.
[5] T. W. Haynes, S. T. Hedetniemi and P. J. Slater, "Advanced Topics on Domination in Graphs," Marcel Dekker, Inc., 1998.





On Applied Mathematical Programming Techniques for Estimating Frontier Production Functions
T. Suneetha, J. Prabhakara Naik, R. Md. Mastan Shareef* and R. Abbaiah
Department of Statistics, S. V. University, Tirupati-517 502, INDIA
*Associate Professor, SANA Engineering College, Kodada (A.P), INDIA
Abstract: Several authors have worked on the estimation of parametric frontier production functions, as characterized by the work of Aigner and Chu (1968). That approach, shared by Afriat (1972) and Richmond (1974), begins by assuming a function giving maximum possible output as a function of certain inputs. Aigner and Chu (1968) suggest estimating the considered model by mathematical programming methods. One problem with these approaches is extreme sensitivity to outliers. This has led to the development of so-called "probabilistic" frontiers (Timmer (1971), Dugger (1974)), which are estimated by the same types of mathematical programming techniques, except that some specified proportion of the observations is allowed to lie above the frontier. Another problem with the mathematical programming techniques is that they do not lead to estimates with known statistical properties; indeed, these are not really statistical techniques. In an attempt to give them a statistical basis, Schmidt (1976) added a one-sided disturbance to the considered model. In particular, the assumption that the disturbance has an exponential distribution leads to the linear programming technique, while the assumption that the disturbance has a half-normal distribution leads to the quadratic programming technique. Therefore, Aigner and Chu's estimates can be viewed as maximum likelihood estimates under particular error specifications. In another paper, Aigner, Amemiya, and Poirier (1976) construct a more reasonable error structure than a purely one-sided one. Their justification for this error specification is that firms are presumed to differ in their "production" of y for a given set of values for the "inputs" according to random variation in their ability to utilize "best practice" technology, a source of error that is one-sided. A primary contribution of this error structure to the literature is that it allows the placement of the fitted function to be estimated along with the usual parameters of interest, through the parameter λ. Thus, the criticism levied at the average function by proponents of the frontier [e.g., Aigner and Chu (1968)] and the criticisms that accompany strict use of the frontier or envelope function as the "appropriate" industry production function (Timmer (1971)) are ameliorated by this more accommodating specification.
Key Words: Mathematical Programming, Frontier Production Function, Estimation of Production Function, Stochastic Frontier Model, Production Function, Cobb-Douglas Function.

I. MATHEMATICAL PROGRAMMING FOR ESTIMATION OF PRODUCTION FUNCTION
Previous work on the estimation of parametric frontier production functions, as characterized by the work of Aigner and Chu (1968), Afriat (1972) and Richmond (1974), begins by assuming a function giving maximum possible output as a function of certain inputs. In deterministic terms, we write

yi = f(xi; β),  i = 1, 2, …, n.  (1.1)

Here yi is output, xi is a vector of inputs, and β is a parameter vector to be estimated. In the present context the n observations typically represent a cross-section of firms within a given industry [cf. Aigner and Chu (1968)]. Aigner and Chu (1968) suggest the estimation of (1.1) by mathematical programming methods. Specifically, they suggest minimization of

 y  f (X ; ) i

i

i

subject to yi ≤ f(xi; β), which is a linear programming problem if the production function is linear. Alternatively, they suggest minimization of





  y  f ( X ;  )

2

i

i

i

subject to the same constraint, which is a quadratic programming problem if the production function is linear. One problem with these approaches is extreme sensitivity to outliers. This has led to the development of so-called "probabilistic" frontiers (Timmer (1971), Dugger (1974)), which are estimated by the same types of mathematical programming techniques discussed above, except that some specified proportion of the observations is allowed to lie above the frontier. The selection of this proportion is essentially arbitrary, lacking explicit economic or statistical justification. Another problem involves reconciling the observations above the frontier with the concept of the frontier as maximum possible output. Typically this is accomplished by appealing to measurement error in the extreme observations; however, it seems preferable to incorporate the possibility of measurement error, and of other unobservable shocks, in a less arbitrary fashion. Another problem with the mathematical programming techniques is that they do not lead to estimates with known statistical properties. Indeed, these are not really statistical techniques. In an attempt to give them a statistical basis, Schmidt (1976) has added a one-sided disturbance to (1.1), which yields the model

yi = f(xi; β) + εi,  (1.2)

where εi ≤ 0. Given a distributional assumption for the disturbance term, the model can then be estimated by maximum likelihood techniques. In particular, the assumption that −εi has an exponential distribution leads to the linear programming technique, while the assumption that −εi has a half-normal distribution leads to the quadratic programming technique. Therefore, Aigner and Chu's estimates can be viewed as maximum likelihood estimates under particular error specifications. Unfortunately, the observation that the model can be estimated by maximum likelihood techniques, and that under appropriate assumptions linear and quadratic programming are maximum likelihood techniques, is of little practical value. This is so because the usual "regularity conditions" for the application of maximum likelihood are violated. In particular, since yi ≤ f(xi; β), the range of the random variable y depends on the parameters to be estimated. Therefore, the usual theorems cannot be invoked to determine the asymptotic distributions of parameter estimates. Under these circumstances it is not clear just how much we know about the frontier after having estimated it. In another recent paper, Aigner, Amemiya, and Poirier (1976) construct a more reasonable error structure than a purely one-sided one. Specifically, they assume

εi = εi* / (1 − λ)  if εi* ≤ 0,
εi = εi* / λ        if εi* > 0,   i = 1, 2, …, N,  (1.3)

where the errors εi* are independent normally distributed random variables with zero means and variance σ² for 0 < λ < 1; otherwise, for λ = 1 or λ = 0, εi* has either the negative or the positive truncated normal distribution, respectively.

Their justification for this error specification is that firms are presumed to differ in their "production" of y for a given set of values for the "inputs" according to random variation in (1) their ability to utilize "best practice" technology, a source of error that is one-sided (εi ≤ 0), and/or (2) an input quantity or measurement error in y, a symmetric error. The parameter λ is interpreted as the measure of "relative variability" of these two error sources, its values circumscribing the "full" frontier function (λ = 1), the "average" function (λ = 1/2), and intermediate cases of some interest. A primary contribution of this error structure to the literature is that it allows the placement of the fitted function to be estimated along with the usual parameters of interest through the parameter λ. Thus, the criticism levied at the average function by proponents of the frontier [e.g., Aigner and Chu (1968)] and criticisms that accompany strict use of the frontier or envelope function as the "appropriate" industry production function [cf. Timmer (1971)] are ameliorated by this more accommodating specification. Nevertheless, the interpretation of λ as a measure of the relative variability of error sources is only implicit in the Aigner, Amemiya, Poirier





formulation. A more direct approach is to specifically model the error process implied by the behavioural considerations mentioned above.
II. STOCHASTIC FRONTIER
We now return to the model as given in equation (1.2), but under the error structure

εi = vi + ui,  i = 1, …, n.  (2.1)

The error component vi represents the symmetric disturbance: the {vi} are assumed to be independently and identically distributed as N(0, σv²). The error component ui is assumed to be distributed independently of vi, and to satisfy ui ≤ 0. We will be particularly concerned with the case in which ui is derived from a N(0, σu²) distribution truncated above at zero. However, other one-sided distributions are tenable, and we will also briefly consider the case in which −ui has an exponential distribution. This model collapses to a deterministic frontier model when σv² = 0, and it collapses to the Zellner, Kmenta and Dreze (1966) stochastic production function model when σu² = 0. Note that yi = f(xi; β) + vi defines the frontier, so that the frontier itself is now clearly stochastic. The economic logic behind this specification is that the production process is subject to two economically distinguishable random disturbances with different characteristics. We believe that there is ample precedent in the literature for such a view, although our interpretation is clearly new. And from a practical standpoint, such a distinction greatly facilitates the estimation and interpretation of the frontier. The non-positive disturbance ui reflects the fact that each firm's output must lie on or below its frontier [f(xi; β) + vi]. Any such deviation is the result of factors under the firm's control, such as technical and economic inefficiency, the will and effort of the producer and his employees, and perhaps such factors as defective and damaged product. But the frontier itself can vary randomly across firms, or over time for the same firm. On this interpretation, the frontier is stochastic, with the random disturbance vi, positive or negative, being the result of favourable as well as unfavourable external events such as luck, climate, topography, and machine performance. Errors of observation and measurement on y constitute another source of vi. One interesting by-product of this approach is that we can estimate the variances of vi and ui, so as to get evidence on their relative sizes. Another implication of this approach is that productive efficiency should, in principle, be measured by the ratio

yi / [f(xi; β) + vi]  (2.2)

rather than by the ratio

yi / f(xi; β).  (2.3)

This simply distinguishes productive inefficiency from other sources of disturbance that are beyond the firm's control. For example, the farmer whose crop is decimated by drought or storm is unlucky by our measure (2.2), but inefficient by the usual measure (2.3). Our discussion of estimation will be simplified somewhat if we consider a linear production function. We therefore write, in obvious matrix form:

Y = Xβ + ε  (2.4)

in place of (1.2), where now ε = v + u.
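As a quick illustration of the composed error in (2.1), the following sketch simulates ε = v + u under the truncated-normal assumption for u and compares the sample moments with the mean and variance formulas quoted in Section III below. It is a didactic aid with made-up parameter values, not part of the original analysis.

```python
# Minimal simulation of eps = v + u from (2.1): v symmetric normal,
# u <= 0 drawn as minus the absolute value of a N(0, sigma_u^2) variate
# (the truncated-normal case discussed above).
import numpy as np

rng = np.random.default_rng(0)
n, sigma_v, sigma_u = 100_000, 0.3, 0.6

v = rng.normal(0.0, sigma_v, n)
u = -np.abs(rng.normal(0.0, sigma_u, n))   # one-sided component
eps = v + u

# Compare with E(eps) = -sigma_u*sqrt(2/pi) and
# V(eps) = ((pi-2)/pi)*sigma_u**2 + sigma_v**2, as given in (3.2) below.
print(eps.mean(), -sigma_u * np.sqrt(2 / np.pi))
print(eps.var(), (np.pi - 2) / np.pi * sigma_u**2 + sigma_v**2)
```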

III. ESTIMATION OF THE STOCHASTIC FRONTIER MODEL

The distribution of the sum of a symmetric normal random variable and a truncated normal random variable was apparently first derived by Weinstein (1964). The derivation of the density function of ε is straightforward, so we shall not include it here. The result is

f(ε) = (2/σ) f*(ε/σ) [1 − F*(ελσ⁻¹)],  −∞ ≤ ε ≤ +∞,  (3.1)

where σ² = σu² + σv², λ = σu/σv, and f*(.) and F*(.) are the standard normal density and distribution functions, respectively. This density is asymmetric around zero, with its mean and variance given by

E(ε) = E(u) = −(√2/√π) σu,

V(ε) = V(u) + V(v) = ((π − 2)/π) σu² + σv²,  (3.2)

as can easily be ascertained from elementary considerations and calculation of the moments of u. The particular parameterization in (3.1) is convenient because λ is thereby interpreted to be an indicator of the relative variability of the two sources of random error that distinguish firms from one another.

λ² = 0 implies σv² → ∞ and/or σu² = 0, i.e., that the symmetric error dominates in the determination of ε. Equation (3.1) then becomes the density of a N(0, σ²) random variable, as can be seen by inspection. Similarly, when σv² = 0, the one-sided error becomes the dominant source of random variation in the model and (3.1) takes on the form of a negative half-normal. The estimation problem is posed by assuming we have available a random sample of N observations and then forming the relevant log-likelihood function,

ln L(y | β, λ, σ²) = N ln(√2/√π) + N ln σ⁻¹ + Σ_{i=1..N} ln[1 − F*(εi λ σ⁻¹)] − (1/(2σ²)) Σ_{i=1..N} εi²,  (3.3)

where εi = yi − β′xi.
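For readers who want to experiment, the log-likelihood (3.3) can be maximized numerically. The sketch below is a didactic stand-in (simulated data, SciPy's general-purpose optimizer) rather than the Fletcher-Powell setup mentioned later; all variable names and starting values are our own assumptions.

```python
# Sketch: maximizing (3.3) for a linear frontier y = X b + eps with the
# normal / half-normal composed error of Section II.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(theta, y, X):
    k = X.shape[1]
    b, lam, log_s2 = theta[:k], theta[k], theta[k + 1]
    s2 = np.exp(log_s2)                      # keeps sigma^2 positive
    s = np.sqrt(s2)
    e = y - X @ b
    n = len(y)
    ll = (n * np.log(np.sqrt(2 / np.pi)) - n * np.log(s)
          + np.sum(norm.logcdf(-e * lam / s))   # log[1 - F*(e*lam/s)]
          - np.sum(e**2) / (2 * s2))
    return -ll

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
eps = rng.normal(0, 0.3, n) - np.abs(rng.normal(0, 0.6, n))
y = X @ np.array([1.0, 0.8]) + eps

res = minimize(neg_loglik, x0=np.array([1.0, 0.8, 1.0, np.log(0.4)]),
               args=(y, X), method="Nelder-Mead",
               options={"maxiter": 20000})
print(res.x[:2], res.x[2], np.exp(res.x[3]))  # roughly b, lam ~ 2, s2 ~ 0.45
```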

Notes:
1. We prefer to use this interpretation of λ even though σu² is not the variance of u; another useful parameterization is to use σ² along with the ratio σu²/σ².
2. For σv² = 0, and thus λ = ∞, the density becomes

f(ε) = (√2/(σu√π)) e^(−ε²/(2σu²))  for ε ≤ 0,
     = 0, otherwise,

which is almost exactly the form of the likelihood functions considered by Amemiya (1973).

Taking derivatives,

∂ln L/∂σ² = −N/(2σ²) + (1/(2σ⁴)) Σ_{i=1..N} (yi − β′xi)² + (λ/(2σ³)) Σ_{i=1..N} [fi*/(1 − Fi*)] (yi − β′xi),  (3.4)

∂ln L/∂λ = −(1/σ) Σ_{i=1..N} [fi*/(1 − Fi*)] (yi − β′xi),  (3.5)

∂ln L/∂β = (1/σ²) Σ_{i=1..N} (yi − β′xi) xi + (λ/σ) Σ_{i=1..N} [fi*/(1 − Fi*)] xi,  (3.6)

where xi is a (k × 1) vector consisting of the elements of the ith row of X, and fi* and Fi* are, respectively, the standard normal density and distribution functions evaluated at (yi − β′xi)λσ⁻¹.

Setting (3.5) to zero, we have that Σ_{i=1..N} [fi*/(1 − Fi*)](yi − β′xi) = 0 at the optimum. Inserting this result into (3.4), the ML estimator for σ² is determined through

−N/(2σ²) + (1/(2σ⁴)) Σ_{i=1..N} (yi − β′xi)² = 0,  (3.7)

which yields

σ̂² = (1/N) Σ_{i=1..N} (yi − β̂′xi)²,  (3.8)



the basis for the usual ML estimator of residual variance in a regression model. But the determination of σ̂² is not independent of the estimates of β and λ from the other equations. In any event, this result can be used as a basis for an iterative solution scheme. β′ premultiplied into (3.6) gives

(1/σ²) Σ_{i=1..N} (yi − β′xi) β′xi = −(λ/σ) Σ_{i=1..N} [fi*/(1 − Fi*)] β′xi.  (3.9)

Adding to this λ times equation (3.5) and simplifying, we get

(1/σ²) Σ_{i=1..N} (yi − β′xi) β′xi = −(λ/σ) Σ_{i=1..N} [fi*/(1 − Fi*)] yi,  (3.10)

which, in conjunction with (3.6), gives a system of (k + 1) equations that corresponds very closely to the system of first-order equations encountered in the so-called "Tobit" model. Regularity conditions that are sufficient for the estimators defined by (3.4), (3.5), and (3.6) to be consistent and asymptotically normal can be found in Amemiya (1973). Various solution algorithms are available for finding the optimizing values of β, λ, and σ². Most of these (the Fletcher-Powell algorithm, for example) require analytical first- or second-order derivatives in addition to the likelihood function itself for their best performance at reasonable cost in computer time. Since such algorithms are now readily available, we will not devote any space to a discussion of the ML computational problem, except to note that this likelihood function seems to be well-behaved, based on our experience. Second-order derivatives are presented in the appendix to this paper, for that use and as a basis for calculating asymptotic standard errors of the ML estimates. We note in passing that if estimation of β alone is desired, all but the coefficient in β corresponding to a column of ones in X is estimated unbiasedly and consistently by least squares. Moreover, the components of σ² can be extracted (i.e., consistent estimators for them can be found) based on the least squares results by utilizing equation (3.2) for V(ε) in terms of σu² and σv², together with a similar relationship for a higher-order moment of ε, since V(ε) and the higher-order mean-corrected moments of ε can themselves be consistently estimated from the computed least squares residuals.
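The moment-based recovery just described can be sketched as follows for the half-normal case: the third central moment of the OLS residuals identifies σu, the second then gives σv, and the intercept is shifted back by E(u). The constants used are standard half-normal moments; this is an illustration of the idea (the correction later known as COLS/MOLS), not a transcription of the authors' procedure.

```python
# Moment correction of OLS for the normal / half-normal composed error.
import numpy as np

rng = np.random.default_rng(2)
n, sigma_v, sigma_u = 50_000, 0.3, 0.6
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
y = X @ np.array([1.0, 0.8]) + rng.normal(0, sigma_v, n) \
    - np.abs(rng.normal(0, sigma_u, n))

b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ b_ols
m2, m3 = np.mean(r**2), np.mean(r**3)   # residuals already have mean ~0

c3 = np.sqrt(2 / np.pi) * (1 - 4 / np.pi)        # mu3 = c3 * sigma_u^3 (< 0)
sigma_u_hat = (m3 / c3) ** (1 / 3)
sigma_v_hat = np.sqrt(max(m2 - (1 - 2 / np.pi) * sigma_u_hat**2, 0.0))
b0_hat = b_ols[0] + sigma_u_hat * np.sqrt(2 / np.pi)  # undo E(u) in intercept

print(sigma_u_hat, sigma_v_hat, b0_hat)  # close to 0.6, 0.3, 1.0
```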

1 f  u   exp  u /   , u  0

(3.11)

Where Φ ≥ 0 is the mean of -ui (the variance is  ). A little algebra reveals that the distribution of 2

i  vi  u i

is given by the density

  v2   v  1 *  f   1  F     exp   2     2    v  

(3.12)

where again F*(.) represents the cumulative distribution function of the standard normal distribution. The likelihood function for the model follows immediately. IV. STOCHASTIC PRODUCTION FRONTIERS Stochastic production frontiers were initially developed for estimating technical efficiency rather than capacity and capacity utilization. However, the technique can also be applied to capacity estimation through modification of the inputs incorporated in the production (or distance) function. A potential advantage of the stochastic production frontier approach over DEA is that random variations in catch can be accommodated, so that the measure is more consistent with the potential harvest under "normal" working conditions. A disadvantage of the technique is that, although it can model multiple-output technologies, doing so is somewhat more complicated, requires stochastic multiple-output distance functions, and raises problems for outputs that take zero values.





V. THE UNDERLYING THEORY A production function defines the technological relationship between the level of inputs and the resulting level of outputs. If estimated econometrically from data on observed outputs and input usage, it indicates the average level of outputs that can be produced from a given level of inputs. A number of studies have estimated the relative contributions of the factors of production by estimating production functions at either the individual boat level or the total fishery level. These include Cobb-Douglas production functions, CES production functions (Campbell and Lindner, 1990) and translog production functions (Squires, 1987; Pascoe and Robinson, 1998). An implicit assumption of production functions is that all firms are producing in a technically efficient manner, and the representative (average) firm therefore defines the frontier; variations from the frontier are thus assumed to be random, and are likely to be associated with mis- or un-measured production factors. In contrast, estimation of the production frontier assumes that the boundary of the production function is defined by "best practice" firms. It therefore indicates the maximum potential output for a given set of inputs along a ray from the origin. Some white noise is accommodated, since the estimation procedures are stochastic, but an additional one-sided error represents any other reason firms would be away from (within) the boundary. Observations within the frontier are deemed "inefficient". So from an estimated production frontier it is possible to measure the relative efficiency of certain groups or sets of practices from the relationship between observed production and some ideal or potential production (Greene, 1993). A general stochastic production frontier model can be given by

ln q j  f  ln x   v j  u j

(5.1)

where qj is the output produced by firm j, x is a vector of factor inputs, vj is the stochastic (white noise) error term and uj is a one-sided error representing the technical inefficiency of firm j. Both vj and uj are assumed to be independently and identically distributed (IID) with variances σv² and σu², respectively.

Given that the production of each firm j can be estimated as:

ln q̂j = f(ln x) − uj

(5.2)

While the efficient level of production (i.e., no inefficiency) is defined as:

ln q j  f  ln x 

(5.3)

Then technical efficiency (TE) can be given by

ln TE j  ln qˆ j  ln q*  u j Hence,

TE j  e uj ,

and is constrained to be between zero and one in value. If u j equals zero, then

(5.4)

T  equals

one, and production is said to be technically efficient. The technical efficiency of the jth firm is therefore a relative measure of its output as a proportion of the corresponding frontier output. A firm is technically efficient if its output level is on the frontier, which implies that q/q* equals one in value. While these techniques have been developed primarily to estimate efficiency, they can be readily modified to represent capacity utilization. In estimating the full-utilization production frontier, a distinction must be made between inputs comprising the capacity base (usually capital inputs) and variable inputs (usually days, or variable effort). If capacity is defined only in terms of capital inputs, the implied variation in output, and thus variable effort, from its full-utilization level is sometimes termed an indicator of capital utilization. If variable inputs are assumed to be approximated by the number of hours or days fished (i.e., nominal units of effort), estimating the potential output producible from the capacity base with variable inputs "unconstrained" implies removing this variable from the estimation of the frontier. The resulting production frontier is thus defined only in terms of the fixed factors of production, k. In particular, it will be supported by observations for the boats that have the greatest catch per unit of fixed input (which generally corresponds to the boats that employ the greatest level of nominal effort for a particular level of k). The resulting measure is technically efficient capacity utilization (TECU), accommodating both the impacts of technical inefficiency and deviations from full utilization of the capacity base. That is, it represents the ratio of the potential capacity output that could be achieved if all fixed inputs were being utilized efficiently and fully to observed output. VI. FUNCTIONAL FORMS FOR THE PRODUCTION FUNCTION Estimation of the SPF requires a particular functional form of the production function to be imposed. A range of functional forms for the production frontier are available, with the most frequently used being the translog function, which is a second-order (all cross-terms included) log-linear form. This is a relatively flexible functional form, as it does not impose assumptions about constant elasticity of production nor elasticities of





substitution between inputs. It thus allows the data to indicate the actual curvature of the function, rather than imposing a priori assumptions. In general terms, this can be expressed as

ln Qj,t = β₀ + Σi βi ln xj,i,t + (1/2) Σi Σk βi,k ln xj,i,t ln xj,k,t + vj,t − uj,t

(6.1)

where Qj,t is the output of vessel j in period t and xj,i,t and xj,k,t are the variable and fixed vessel inputs (i, k) to the production process. As noted above, the error term is separated into two components, where vj,t is the stochastic error term and uj,t is an estimate of technical inefficiency. Alternative production functions include the Cobb-Douglas and CES (Constant Elasticity of Substitution) production functions. The Cobb-Douglas production function is given by

ln Q ji  o   i ln X j,i,t  u j,t  v j,t

(6.2)

i

As can be seen, the Cobb-Douglas is a special case of the translog production function in which all βi,k = 0. This production function imposes more stringent assumptions on the data than the translog, because the elasticity of substitution has a constant value of 1 (i.e., the functional form imposes a fixed degree of substitutability on all inputs), and the elasticity of production is constant for all inputs (i.e., a 1 percent change in input level will produce the same percentage change in output, irrespective of any other arguments of the function). The CES production function is given by

Q j,t    x1, j,k  1    x 2, j,t 

1/

 u j,t  v j,t

(6.3)

where ρ is the substitution parameter related to the elasticity of substitution (i.e., ρ = (1/s) − 1, where s is the elasticity of substitution) and δ is the distribution parameter. The CES production function is limited to two variables, and it is not possible to estimate it in the form given in (6.3) by maximum likelihood estimation (MLE), making it unsuitable for use as the basis of a production frontier. However, a Taylor series expansion of the function yields a functional form of the model that can be estimated, given as:

Q ln  j,t x  2, j,t

  x1, j,t   x , j, t  1   ln    u  1 ln X 2, j,t  v ln  1   u 1    ln    x 2, j,t   x 2, j,t  2

2

    u j,t  v j,t  

(6.4)

The model can be estimated as a standard or frontier production function, and the parameter values derived through manipulation of the regression coefficients. The functional form in (6.4) can be shown to be a special case of the translog function. Given that both the Cobb-Douglas and CES production functions are special cases of the translog, ideally the translog should be estimated first and the restrictions outlined above tested. However, the large number of variables required in estimating the translog may cause problems if a sufficiently long data series is not available, resulting in degrees-of-freedom problems. In such a case, more restrictive assumptions must be imposed.
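As a small illustration of the nesting described above, the sketch below fits the Cobb-Douglas form (6.2) by ordinary least squares on simulated two-input data. It estimates an "average" function, since plain OLS ignores the one-sided error u; the data-generating values are our own assumptions.

```python
# Cobb-Douglas (6.2) by OLS on logs, as the restricted (beta_ik = 0) translog.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x1, x2 = rng.uniform(1, 10, n), rng.uniform(1, 10, n)
lnQ = 0.5 + 0.6 * np.log(x1) + 0.3 * np.log(x2) + rng.normal(0, 0.1, n)

Z = np.column_stack([np.ones(n), np.log(x1), np.log(x2)])
coef, *_ = np.linalg.lstsq(Z, lnQ, rcond=None)
print(coef)               # roughly [0.5, 0.6, 0.3]
print(coef[1] + coef[2])  # elasticity of scale, roughly 0.9 here

# Adding the second-order terms ln(x1)^2, ln(x2)^2 and ln(x1)*ln(x2) and
# testing their joint significance is the translog restriction check above.
```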

VII. MATHEMATICAL PROGRAMMING FOR THE MEASUREMENT OF TECHNICAL EFFICIENCY
To assess technical efficiency one may use an a priori specified neo-classical production function such as the Cobb-Douglas production function,

u   x1 x 2 Where u = output ; x1, x2 = inputs. Let there be n production units whose output lie on or below the Frontier Production function: ˆ uˆ  ˆx1ˆ x 2

(7.1)

Let (ui, x1i, x2i), i = 1, 2, …, n, be the output-input vector of the ith production unit. Had the ith production unit been technically efficient, its output-input vector would be given by

(ûi, x1i, x2i), i = 1, 2, …, n.

Thus, we have

ûi ≥ ui, i = 1, 2, …, n, i.e., ln ûi ≥ ln ui, i = 1, 2, …, n,





ln ˆ  ˆ ln x1i  ˆ ln x 2i  ln u i ,i  1, 2...........n We hypothesise that the n production units compete with each other to achieve technical efficiency. Thus, the objective is to minimize, n

  ln uˆ i 1

i

 ln u i 

(7.2)

Subject to

ln ˆ  ˆ ln x1i  ˆ ln x 2i  u i ln ˆ  0, ˆ  0, ˆ  0

(7.3)

i=1,2,……. N Minimizing (7.2) is same as minimizing

ln ˆ  ˆ ln x1 ˆ ln x 2

(7.4)

The expression (7.4) is the objective function. Thus, to measure unit wise technical efficiency one may solve the following linear programming problem. Min

ˆo  ˆ x1 ˆ x 2

Subject to

(7.5)

ˆo  ˆ x1i  ˆ x 2i  u i

(7.6)

i=1,2,……. n.

ˆ o ,ˆ , ˆ  0 Where

(7.7)

ˆo  ln ˆ

x1  ln x1 x 2  ln x 2 u  ln u

For the above linear programming problem if optimal solution exists all artificial variables have to assume zero values. Thus, for ith production unit one has,

ˆ0  ˆ x1i  ˆ x 2i  Si  u i Where Si is the surplus variable of ith constraint. Si  0 i=1,2,………..n

Si  u i  ˆ o  ˆ x1i  ˆ x 2i

ln u i  ln uˆ i  ln u i / uˆ i u TE  i   i  e Si uˆ t

(7.8)

The expression (7.8) gives the technical efficiency of ith production unit. Thus, we have 0 ≤ T  (i) ≤ 1, i=1,2,….. n. This method of measuring technical efficiency is due to TIMMER. The Timmer’s method solves a single linear programming problem to compute production unit wise technical efficiency. A different method to measure technical efficiency of ith production unit is, to Maximize

ˆo  ˆ x1i  ˆ x 2i

Subject to

ˆo  ˆ x1i  ˆ x 2i  u i

i=1,2,……. n. If there are n production units, this method requires to solve n linear programming problems. References [1]. [2]. [3]. [4].

Afrait, S.N (1972) , ”Efficiency Esimation of Production Functions,’’ international economic review, vol. 13(October), pp.56898. Aigner, D.J., and S.F.Chu (1968)” On estimating the industry production function, ” American Economic Review, Vol. 58, pp.826-839. Aigner ,D.J., T.Amemiya and D.j.Poirier (1976),”On the Estimation of Production Frontiers, “International Economic Review. Aigner,D., C.A.K.Lovell and P.Schmidt,(1977), Formulation and Estimation of Stochastic Frontier Production Function models, Journal of Econometrics, vol.6,pp.21-37.

AIJRSTEM 14-162; © 2014, AIJRSTEM All Rights Reserved

Page 126


T. Suneetha et al., American International Journal of Research in Science, Technology, Engineering & Mathematics, 5(2), December 2013-February 2014, pp. 119-127 [5]. [6]. [7]. [8]. [9].

Dugger, R., (1974), An application of Bouded Non parametric estimating functions to the analysis of bank cost and production functions, unpublished Ph.D. dissertation (University of North Carolina, Chapel Hill, NC). Richmod, J.(1974),”Estimating the Efficiency of Production , International Economic Review, vol.15,pp.515-21. Timmer, C.Peter (1971),”using a Probalistic Frontier Production Function to Measure Technical Efficiency, “Journal of Political Economy, vol.79,pp776-794. Zellner, A., Kmenta, J., and J.Dreze(1966), “Specification and Estimation of Cobb-Douglas Production Functions,”Econometrica, vol.34,pp.784-95.

AIJRSTEM 14-162; © 2014, AIJRSTEM All Rights Reserved

Page 127



Effect of Resin and Thickness on Tensile Properties of Laminated Composites
B. V. Babu Kiran1, G. Harish2
1 Research Scholar, 2 Associate Professor
R&D Center, Department of Mechanical Engineering, University Visvesvaraya College of Engineering, Bangalore, Karnataka, India.

Abstract: The use of composites in the aerospace industry has increased dramatically. The primary benefits that composite components can offer are reduced weight and assembly simplification. The performance advantages associated with reducing the weight of aircraft structural elements have been the major impetus for military aviation composites development. Although commercial carriers have increasingly been concerned with fuel economy, the potential for reduced production and maintenance costs has proven to be a major factor in the push towards composites. The results of tensile tests are used in selecting materials for engineering applications and are frequently included in material specifications to ensure quality. Tensile properties are often measured during the development of new materials and processes, so that different materials and processes can be compared. The present investigation was undertaken to determine the influence of resin and laminate thickness on glass fiber epoxy, graphite fiber epoxy and carbon fiber epoxy laminates, compared with glass fiber polyester, graphite fiber polyester and carbon fiber polyester laminates, under tensile loads.
Keywords: Laminate, Tensile Strength, Resin, Stiffness, Strength
I. Introduction
The application of laminated composites has increased in all sorts of engineering fields, especially aerospace, sports, transportation and marine, due to their high specific strength and stiffness. Fiber reinforced composite materials are selected for weight-critical applications, and these materials have a good rating as far as fatigue failure is concerned. The present work is aimed at analyzing the mechanical behavior of each laminate under tensile loading. The basic concepts of composite materials, along with details of earlier works, are explained in the references. A brief review of techniques which have been employed in attempts to determine the mechanical properties of composite materials under tensile and impact loading is given by J. Harding and L. M. Welsh [1]. B. Gommers et al. [2] determined the mechanical properties of composite materials by tensile tests. Mauricio et al. [3] predicted the elastic behavior of hybrid plain weave fabric composites with different materials and undulations in the warp and weft directions by formulating a 3D analytical micromechanical model. Ala Tabiei and Ivelin Ivanov [4] developed a micromechanical material model of woven fabric composite materials with failure. The elastic properties and notch sensitivity of untreated woven jute and jute-glass fabric reinforced polyester hybrid composites were investigated analytically and experimentally by Sabeel Ahmed [5]. Barbero et al. [6] determined the mechanical properties of plain weave fabrics by developing accurate finite element models. The effects of skew angle, aspect ratio and boundary condition on the large-deflection static behavior of thin isotropic skew plates under uniformly distributed load were investigated by Debabrata Das [7]. Varatharajan [7] conducted extensive tensile, flexure and interlaminar tests on glass/polypropylene and glass/polyester composites. From the above literature, it is evident that work on the effect of thickness and the influence of the resin system on the tensile properties of composite materials is sparse. Hence, in this work it is proposed to experimentally investigate the influence of specimen thickness with different resin systems on the tensile properties of laminated composite specimens.
II.

Experimental Procedure

Materials: Bi-woven carbon fiber, glass fiber and graphite fiber fabrics are used as the reinforcement materials, and epoxy and polyester resins are used as the matrix materials for laminate preparation, with hardeners HY140 and MEKP for the epoxy and polyester resins respectively.


Resins and reinforcement materials used:

RESIN           | MATERIAL
EPOXY RESIN     | GLASS FIBER, GRAPHITE FIBER, CARBON FIBER
POLYESTER RESIN | GLASS FIBER, GRAPHITE FIBER, CARBON FIBER

Testing:
Fig.: Specimen mounted on UTM
The composite laminates were subjected to various loads on a computer-controlled universal testing machine (UTM). The specimens were clamped and the tests performed. The tests were closely monitored and conducted at room temperature. The load at which complete fracture of the specimen occurred was taken as the breakage load.

III. Specimen Preparation
Two types of resin, epoxy and polyester, were used to prepare the glass fiber, graphite fiber and carbon fiber laminates, as shown in Tables 1 and 2.

Case 1: Epoxy resin as matrix material
Composite laminates were fabricated at room temperature (24–26 ºC) in a clean and neat environment. The laminates were fabricated by the hand lay-up process; proper care was taken during preparation to maintain uniform thickness and to prevent voids. The first layer of bi-woven glass fiber cloth (0.25 mm to 0.35 mm thick) is laid and resin is spread uniformly over the cloth by means of a brush. The second layer of cloth is laid and resin is again spread uniformly over it. After the second layer, to enhance wetting and impregnation, a toothed steel roller is rolled over the fabric before applying resin; the resin is also tapped and dabbed with a spatula before being spread over each fabric layer. This process is repeated until all 10 layers (2 mm thickness) or 16 layers (4 mm thickness) are placed. No external pressure is applied while casting and curing, because uncured matrix material can squeeze out under high pressure, which results in surface waviness (non-uniform thickness) in the model material. The casting is cured in an oven at about 100 ºC for up to 2 hours and finally removed from the mold to obtain a well-finished composite plate. This process is repeated to prepare the graphite- and carbon-based epoxy laminates.

Preparation of test specimens: After the cure process, the test specimens are cut from the sheet to size as per ASTM standards (ASTM D-790) using a diamond-impregnated wheel cooled by running water. All specimens are finished by abrading the edges on fine carborundum paper.

Case 2: Polyester resin as matrix material
The laminates were fabricated by placing one layer of bi-woven fabric over the other. Polyester resin was applied as the matrix material between each layer; tools were used to distribute the resin uniformly as explained earlier, and a toothed steel roller was rolled over the fabric before applying resin. The resin was also tapped and dabbed with a spatula before being spread over each fabric layer. This process was repeated until all 10 layers (2 mm thickness) or 16 layers (4 mm thickness) were placed, without applying any external pressure. The


surfaces of the laminates were covered to protect the lay-up from external disturbance. After proper curing for about 2 days at room temperature, the specimens were cut to the required sizes as per ASTM (ASTM D-790) standards. In all, 24 specimens were prepared, as indicated in Tables 1 and 2.

Table 1 – Designation of glass, carbon and graphite specimens reinforced with epoxy resin

SI | Specimen Designation | Description
1  | CATE/02/01 | Carbon fiber / 2 mm thickness / Sample 01
2  | CATE/02/02 | Carbon fiber / 2 mm thickness / Sample 02
3  | CATE/04/01 | Carbon fiber / 4 mm thickness / Sample 01
4  | CATE/04/02 | Carbon fiber / 4 mm thickness / Sample 02
5  | GRTE/02/01 | Graphite fiber / 2 mm thickness / Sample 01
6  | GRTE/02/02 | Graphite fiber / 2 mm thickness / Sample 02
7  | GRTE/04/01 | Graphite fiber / 4 mm thickness / Sample 01
8  | GRTE/04/02 | Graphite fiber / 4 mm thickness / Sample 02
9  | GLTE/02/01 | Glass fiber / 2 mm thickness / Sample 01
10 | GLTE/02/02 | Glass fiber / 2 mm thickness / Sample 02
11 | GLTE/04/01 | Glass fiber / 4 mm thickness / Sample 01
12 | GLTE/04/02 | Glass fiber / 4 mm thickness / Sample 02


Table 2 – Designation of glass, carbon and graphite specimens reinforced with polyester resin

SI | Specimen Designation | Description
1  | CATP/02/01 | Carbon fiber / 2 mm thickness / Sample 01
2  | CATP/02/02 | Carbon fiber / 2 mm thickness / Sample 02
3  | CATP/04/01 | Carbon fiber / 4 mm thickness / Sample 01
4  | CATP/04/02 | Carbon fiber / 4 mm thickness / Sample 02
5  | GRTP/02/01 | Graphite fiber / 2 mm thickness / Sample 01
6  | GRTP/02/02 | Graphite fiber / 2 mm thickness / Sample 02
7  | GRTP/04/01 | Graphite fiber / 4 mm thickness / Sample 01
8  | GRTP/04/02 | Graphite fiber / 4 mm thickness / Sample 02
9  | GLTP/02/01 | Glass fiber / 2 mm thickness / Sample 01
10 | GLTP/02/02 | Glass fiber / 2 mm thickness / Sample 02
11 | GLTP/04/01 | Glass fiber / 4 mm thickness / Sample 01
12 | GLTP/04/02 | Glass fiber / 4 mm thickness / Sample 02

Table 3 – Designation and measured dimensions of glass, graphite and carbon specimens reinforced with epoxy resin

Specimen Designation (Epoxy resin) | Length (mm) | Width (mm) | Thickness (mm)
GLTE/02/01 | 250.2 | 25.1 | 2.2
GLTE/02/02 | 250.5 | 24.8 | 2.1
GLTE/04/01 | 249.3 | 25.2 | 4.2
GLTE/04/02 | 250.1 | 25.3 | 4.1
GRTE/02/01 | 250.1 | 24.9 | 1.9
GRTE/02/02 | 250.1 | 25.1 | 2.1
GRTE/04/01 | 250.3 | 24.8 | 4.1
GRTE/04/02 | 250.1 | 25.3 | 4.2
CATE/02/01 | 250.3 | 25.2 | 2.1
CATE/02/02 | 249.9 | 24.9 | 2.1
CATE/04/01 | 248.9 | 24.7 | 4.2
CATE/04/02 | 250.4 | 25.2 | 4.3

Table 4 – Designation and measured dimensions of glass, graphite and carbon specimens reinforced with polyester resin

Specimen Designation (Polyester resin) | Length (mm) | Width (mm) | Thickness (mm)
GLTP/02/01 | 250.0 | 25.2 | 2.1
GLTP/02/02 | 250.2 | 24.9 | 2.1
GLTP/04/01 | 249.9 | 25.1 | 4.3
GLTP/04/02 | 251.1 | 25.2 | 4.2
GRTP/02/01 | 248.1 | 25.1 | 2.1
GRTP/02/02 | 250.1 | 25.1 | 2.2
GRTP/04/01 | 250.9 | 25.3 | 4.1
GRTP/04/02 | 250.9 | 25.3 | 4.1
CATP/02/01 | 250.3 | 25.1 | 2.2
CATP/02/02 | 250.3 | 24.9 | 2.1
CATP/04/01 | 249.6 | 25.1 | 4.2
CATP/04/02 | 251.0 | 25.4 | 4.1


IV. Results

Table 5 – Tensile properties of glass, carbon and graphite composites with 2 mm and 4 mm thickness, reinforced with epoxy resin

Specimen (Epoxy resin) | Peak Load (kN) | Yield Strength (MPa) | Ultimate Strength (MPa) | Max Deformation (mm) | Stiffness (N/mm)
CATE/02/01 | 30.52 | 321.94 | 512.03 | 18.16 | 1680.62
CATE/02/02 | 35.21 | 302.02 | 545.06 | 22.15 | 1589.62
CATE/04/01 | 86.15 | 415.12 | 489.12 | 32.24 | 2672.15
CATE/04/02 | 89.56 | 433.32 | 504.51 | 33.68 | 2659.14
GRTE/02/01 | 22.45 | 294.03 | 412.14 | 12.11 | 1853.84
GRTE/02/02 | 19.22 | 304.21 | 416.15 | 14.66 | 1311.05
GRTE/04/01 | 52.22 | 391.56 | 389.15 | 26.33 | 1983.29
GRTE/04/02 | 49.78 | 389.18 | 387.43 | 22.08 | 2254.53
GFTE/02/01 | 19.64 | 332.68 | 377.24 | 8.30  | 2366.26
GFTE/02/02 | 20.32 | 316.70 | 398.23 | 8.78  | 2314.50
GFTE/04/01 | 33.64 | 328.62 | 408.83 | 13.10 | 2567.94
GFTE/04/02 | 30.44 | 297.14 | 366.49 | 11.40 | 2670.17

Table 6 – Tensile properties of glass, carbon and graphite composites with 2 mm and 4 mm thickness, reinforced with polyester resin

Specimen (Polyester resin) | Peak Load (kN) | Yield Strength (MPa) | Ultimate Strength (MPa) | Max Deformation (mm) | Stiffness (N/mm)
GFTP/02/01 | 14.77 | 218.71 | 299.13 | 7.70  | 1918.18
GFTP/02/02 | 12.97 | 200.15 | 302.16 | 6.90  | 1879.71
GFTP/04/01 | 38.68 | 150.91 | 245.60 | 19.68 | 1965.45
GFTP/04/02 | 36.85 | 154.81 | 255.90 | 21.85 | 1686.50
GRTP/02/01 | 19.88 | 264.57 | 402.33 | 9.88  | 2012.15
GRTP/02/02 | 20.53 | 288.38 | 414.12 | 10.53 | 1949.67
GRTP/04/01 | 49.74 | 303.03 | 378.19 | 21.24 | 2341.81
GRTP/04/02 | 46.41 | 389.09 | 381.12 | 22.40 | 2071.88
CATP/02/01 | 28.67 | 294.56 | 498.14 | 15.35 | 1867.75
CATP/02/02 | 30.77 | 289.94 | 513.15 | 18.16 | 1694.38
CATP/04/01 | 68.44 | 395.67 | 463.18 | 26.74 | 2559.46
CATP/04/02 | 69.15 | 400.36 | 455.10 | 28.13 | 2458.23
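As a cross-check on how the entries in Tables 5 and 6 relate to the measured dimensions, the short Python sketch below applies the standard tensile definitions (ultimate strength = peak load / cross-sectional area; stiffness = peak load / maximum deformation) to one specimen, CATP/02/01, using its dimensions from Table 4 and its test values from Table 6. This is an illustration of the definitions, not the authors' data-reduction script; the computed stiffness reproduces the printed value exactly, while the strength can differ slightly if a different effective cross-section was used in the original reduction.

# Standard tensile data reduction (illustrative; specimen CATP/02/01)
width_mm, thickness_mm = 25.1, 2.2        # Table 4
peak_load_kN = 28.67                      # Table 6
max_deformation_mm = 15.35                # Table 6

area_mm2 = width_mm * thickness_mm                           # cross-sectional area
ultimate_strength_MPa = peak_load_kN * 1000.0 / area_mm2     # N/mm^2 = MPa
stiffness_N_per_mm = peak_load_kN * 1000.0 / max_deformation_mm

print(round(area_mm2, 2), "mm^2")                 # 55.22 mm^2
print(round(ultimate_strength_MPa, 1), "MPa")     # about 519 MPa (Table 6 prints 498.14)
print(round(stiffness_N_per_mm, 2), "N/mm")       # 1867.75 N/mm, matching Table 6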

Graphs:
Graph 01: Carbon fiber + Epoxy resin (4 mm thickness)
Graph 02: Graphite fiber + Epoxy resin (4 mm thickness)


Graph 03: Glass fiber + Epoxy resin (4 mm thickness)
Graph 04: Carbon fiber + Polyester resin (4 mm thickness)
Graph 05: Graphite fiber + Polyester resin (4 mm thickness)
Graph 06: Glass fiber + Polyester resin (4 mm thickness)

V. Discussions
The results obtained from the experimental tensile testing of different fibers with different resin systems in laminated composites are presented in Tables 5 and 6, and the following observations were recorded. The load increases sharply for the 4 mm thick specimens as compared to the 2 mm laminates, as indicated in Tables 5 and 6, while increasing the laminate thickness tends to decrease the ultimate tensile strength. The load required to fracture a specimen depends strongly on its thickness; it was observed that about 15–30% further load is required to break the specimen completely. The extension of the specimen under load likewise depends on the specimen thickness, and the load at the yield point also increases with thickness. Specimens reinforced with epoxy resin show more strength than specimens with polyester resin. It is also evident that elongation decreases with increase in thickness for all types of specimen. This study was carried out to investigate the tensile properties of three types of laminated composites, namely glass fiber, graphite fiber and carbon fiber, with epoxy and polyester resins as matrix materials. The specimens were subjected to axial load, and the tensile test was conducted to establish the strength and modulus of elasticity of the composite laminates. This test gave a basic picture of the effect of thickness and


different resin systems on the tensile properties of laminated composites. The experimental results show that the tensile properties of the specimens increased when epoxy resin was used as the matrix system.

VI. Conclusions
The main conclusions drawn from the experimental investigation of tensile tests on laminated composite materials are as follows:
• Tensile tests on two thicknesses of bi-woven glass/epoxy, graphite/epoxy and carbon/epoxy specimens were compared with glass/polyester, graphite/polyester and carbon/polyester specimens, and the results recorded. The influence of specimen thickness and resin on the tensile properties was evaluated; increasing the thickness increases the peak load and stiffness, with the magnitude depending on the resin used.
• Tensile tests of glass, graphite and carbon laminates of 2 mm and 4 mm thickness with epoxy resin and polyester resin were successfully conducted and the results recorded.
• Laminated specimens of lesser thickness exhibit higher ultimate tensile strength.
• Specimens sustain greater loads in carbon/epoxy as compared to carbon/polyester.
• Extension is minimum in the case of glass fiber/polyester as compared to the other specimens.
• The effects of specimen thickness and resin on tensile properties were evaluated, and it is found that specimens reinforced with epoxy resin show better tensile properties than specimens reinforced with polyester resin.
Finally, it can be concluded that for the same thickness and orientation, carbon fiber reinforced with epoxy resin provides better tensile properties than glass and graphite fibers reinforced with either epoxy or polyester resin under tensile loading conditions.

VII. Acknowledgments
The authors thankfully acknowledge the Principal and the Head of the Department of Mechanical Engineering, UVCE, Bangalore, for their constant encouragement and support in carrying out this work.

VIII. References

[1] J. Harding and L. M. Welsh, "A tensile testing technique for fibre-reinforced composites at impact rates of strain," J. Mater. Sci., 18 (1983), pp. 1810–1826.
[2] B. Gommers et al., "Determination of the mechanical properties of composite materials by tensile tests," Journal of Composite Materials, Vol. 32, pp. 102–122, 1998.
[3] Mauricio V. Donadon, Brian G. Falzon, Lorenzo Iannucci and John M. Hodgkinson, "A 3D micromechanical model for predicting the elastic behaviour of woven laminates," Composites Science and Technology, 67 (2007), pp. 2467–2477.
[4] Ala Tabiei and Ivelin Ivanov, "Materially and geometrically nonlinear woven composite micromechanical model with failure for finite element simulations," International Journal of Non-Linear Mechanics, 39 (2004), pp. 175–188.
[5] Pu Xue, Jian Cao and Julie Chen, "Integrated micro/macro mechanical model of woven fabric composites under large deformation," Composite Structures, 70 (2005), pp. 69–80.
[6] E. J. Barbero, J. Trovillion, J. A. Mayugo and K. K. Sikkil, "Finite element modeling of plain weave fabrics from photomicrograph measurements," Composite Structures, 73 (2006), pp. 41–52.
[7] Debabrata Das, Prasanta Sahoo and Kashinath Saha, "Large deflection analysis of skew plates under uniformly distributed load for mixed boundary conditions," International Journal of Engineering Science and Technology, 2(4) (2010), pp. 100–112.
[8] Bryan Harris, Engineering Composite Materials, The Institute of Materials, London, 1999.
[9] M. M. Schwartz, Composite Materials: Properties, Nondestructive Testing and Repair, Vol. 1, Prentice-Hall Inc., New Jersey, USA, 1997.
[10] S. Deng, L. Ye and Y. W. Mai, "Influence of fiber cross-sectional aspect ratio on mechanical properties of glass fiber/epoxy composites: tensile and flexure behavior," Composites Science and Technology, 59 (1999), pp. 1331–1339.
[11] ASTM, Standard test method for tensile properties of plastics, ASTM D638-10, Annual Book of ASTM Standards, American Society for Testing and Materials.
[12] K. M. Kaleemulla and B. Siddeswarappa, "Influence of fiber orientation on the in-plane mechanical properties of laminated hybrid polymer composites," Journal of Reinforced Plastics and Composites, 29(12) (2009).


Robust Color Image Segmentation Using Efficient Soft-Computing Techniques: A Survey
S. Janakiraman (Pondicherry University, Puducherry, India) and J. Gowri (Research Scholar, Bharathiar University, Coimbatore, Tamil Nadu, India)

Abstract: Images have always been very important in human life, and they are one of the most important media for conveying information. In the field of computer vision, the information extracted by understanding images can be used for other tasks, for example navigation of robots, extracting malign tissues from body scans, detection of cancerous cells, and identification of an airport from remote sensing data. This paper presents a survey of soft-computing techniques applied to color image segmentation, and includes future research directions in this area.
Key words: Color Image Segmentation, Fuzzy, Edge Detection.

I. INTRODUCTION
In image research and applications, people are usually interested in certain parts of the image. These parts are frequently referred to as the target or foreground (the other part being the background), and they generally correspond to specific, distinctive regions of the image. To identify and analyze the target, these regions need to be extracted and separated, after which further use can be made of the target. A digital image is composed of a finite number of elements called pixels, each of which has a particular location and value. One of the most fundamental features of a digital image, and a basic step in image processing, analysis, pattern recognition and computer vision, is the edge of an image; the preciseness and reliability of edge-detection results directly affect how well a machine vision system comprehends the objective world. Several edge detectors have been developed in past decades, although no single edge detector is satisfactory for all applications. Nowadays the medical field is growing enormously in diagnosing and treating patients. One fast-growing and important tool for diagnosing functional or structural change in soft tissue is Magnetic Resonance Imaging (MRI). An MRI head scan contains eyes, nose, ears, neck, scalp, brain tissue and some non-brain tissue, so doctors need a clear view of the brain in the MRI head scan to diagnose brain disease. For this, a segmentation process is needed. It may be performed manually, but that takes more time to segment the brain portion; therefore, an automated method is necessary.

II. DIGITAL IMAGE PROCESSING
Digital image processing plays a vital role in the analysis and interpretation of remotely sensed data. In particular, data obtained from satellite remote sensing, which is in digital form, can best be utilized with the help of digital image processing. Digital image processing refers to the processing of digital images using digital computers. Image enhancement and information extraction are its two important components. Image enhancement techniques help improve the visibility of any portion or feature of the image while suppressing the information in other portions or features. Information extraction techniques help in obtaining statistical information about any particular feature or portion of the image. In the early 1920s the Bartlane cable picture transmission system was used to transmit newspaper images across the Atlantic: images were coded, sent by telegraph, and printed by a special telegraph printer. It took about three hours to send an image, and the first systems supported 5 gray levels. In 1964, NASA's Jet Propulsion Laboratory began working on computer algorithms to improve images of the moon transmitted by the Ranger 7 probe; corrections were desired for distortions inherent in the on-board camera. There is thus a need for methods with which we can understand images and extract information or objects; image segmentation fulfills this requirement and is the first step in image analysis. Sometimes image denoising is performed before segmentation to avoid false contour selection; segmenting an image without loss of information for medical diagnosis purposes is a challenging job.


III. IMAGE SEGMENTATION
Image segmentation refers to the process of partitioning a digital image into multiple segments, i.e., sets of pixels, where the pixels in a region are similar according to some homogeneity criterion such as color, intensity or texture, so as to locate and identify objects and boundaries in an image. Practical applications of image segmentation range from filtering of noisy images and medical applications (locating tumors and other pathologies, measuring tissue volumes, computer-guided surgery, diagnosis, treatment planning, study of anatomical structure) to locating objects in satellite images (roads, forests, etc.), face recognition, fingerprint recognition, and so on. Many segmentation methods have been proposed in the literature. A boundary in an image is a contour that represents the change from one object or surface to another. This is distinct from image edges, which mark rapid changes in image brightness but may or may not correspond to salient boundaries. Image segmentation is a procedure that partitions an image into disjoint groups, with each group sharing similar properties such as intensity, color, boundary and texture. In general, three main image features are used to guide image segmentation: intensity or color, edge, and texture. In other words, image segmentation methods generally fall into three main categories: color-based, edge-based, and texture-based segmentation. Color-based (or intensity-based) segmentation assumes that an image is composed of several objects with constant intensity; this kind of method usually depends on intensity-similarity comparisons to separate different objects. Histogram thresholding, clustering and split-and-merge are typical examples of intensity-based segmentation methods (a minimal thresholding sketch is given at the end of Section IV). Edge-based segmentation has a strong relationship with color-based segmentation, since edges usually indicate discontinuities in image intensity; widely used edge-based methods include Canny, watershed, and snakes. Texture is another important characteristic used to segment objects from background. Most texture-based segmentation algorithms map an image into a texture feature space, and then statistical classification methods are used to separate different texture features. Co-occurrence matrices, directional gray-level energy, Gabor filters, and fractal dimensions are frequently used to obtain texture features. Besides intensity, color, contour and texture, object density is another attribute that can be used when image analysis is required to find appropriate features in microscope measurements; for instance, in pathological tissue images, pathological diagnosis relies on accurate separation of different cells.

IV. TECHNIQUES IN EDGE DETECTION
Edge detection is a well-developed field in its own right within image processing. Region boundaries and edges are closely related, since there is often a sharp adjustment in intensity at region boundaries. Edge detection techniques have therefore been used as the basis of another segmentation technique. The edges identified by edge detection are often disconnected; to segment an object from an image, however, one needs closed region boundaries. The desired edges are the boundaries between such objects. Segmentation methods can also be applied to edges obtained from edge detectors. Lindeberg and Li developed an integrated method that segments edges into straight and curved edge segments for parts-based object recognition, based on a minimum description length (MDL) criterion optimized by a split-and-merge-like method, with candidate breakpoints obtained from complementary junction cues to find more likely points at which to consider partitions into different segments. Soft computing techniques (artificial neural networks, genetic algorithms, fuzzy logic models, and particle swarm techniques) have been recognized as attractive alternatives to the standard, well-established "hard computing" paradigms. Traditional hard computing methods are often too cumbersome for today's problems: they always require a precisely stated analytical model and often a lot of computational time. Soft computing techniques, which trade unnecessary precision for gains in understanding system behavior, have proved to be important practical tools for many contemporary problems. NNs and FLMs are universal approximators of multivariate functions and can be used for modeling highly nonlinear, unknown, or partially known complex systems, plants, or processes. Genetic algorithms and particle swarm optimization have emerged as potent and robust optimization tools in recent years.
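As promised in Section III, the following Python sketch illustrates histogram thresholding, the simplest of the intensity-based segmentation methods listed there. Otsu's between-class-variance criterion is used as one concrete instance (an illustrative choice of ours; the survey does not single it out), and the bimodal test image is synthetic.

import numpy as np

def otsu_threshold(gray):
    # Pick the gray level that maximizes the between-class variance
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0    # mean of the class below t
        mu1 = (levels[t:] * p[t:]).sum() / w1    # mean of the class above t
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic two-region image: dark background, bright central object
rng = np.random.default_rng(0)
img = rng.normal(60, 10, (64, 64))
img[16:48, 16:48] = rng.normal(180, 10, (32, 32))
img = img.clip(0, 255)

t = otsu_threshold(img)
mask = img >= t                                  # binary segmentation
print("threshold:", t, "foreground fraction:", round(float(mask.mean()), 3))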


V. SURVEYED TECHNIQUES
In the image segmentation field, traditional techniques do not completely meet the segmentation challenges for color images. Fuzzy image processing is the collection of all approaches that understand, represent and process images, their segments and their features as fuzzy sets; the representation and processing depend on the selected fuzzy technique and on the problem to be solved.

A. A Novel Method for Image Segmentation Using Fuzzy Threshold Selection
In this paper a novel method based on a fuzzy-logic reasoning strategy is proposed for edge detection in digital images. It uses 16 fuzzy edge templates that represent the possible directions of an edge in the image, calculates the divergence between the original image and the 16 fuzzy templates, takes the maximum of the divergence values between the 16 templates and image windows of the same size, sets a threshold, and applies morphological operators. The general algorithm for image thresholding based on the proposed measure can be formulated as follows:
(1) Read the pixels of the image.
(2) Take the number of rows (n) and columns (r) of the image.
(3) Select the 16 fuzzy templates, e.g., a = 0.3; b = 0.8; t1 = [a a a; 0 0 0; b b b]; etc. (distinct templates).
(4) Convert the original image into the fuzzy domain, i.e., map all values into the interval [0, 1] (divide by the maximum pixel value of the image).
(5) Find the hesitation degree (intuitionistic fuzzy index), assumed c = 0.2: Hp(image) for each template = c * (1 - p(image) for each template).
(6) Calculate the maximum of the divergence values between the 16 templates and windows of the original image of the same size, formed by taking 3x3 matrices across the image, including the border.
(7) Iterate finitely, selecting the minimum divergence among the 16 divergence values, positioned at the center of the template position, for the edge image: fuzzy-domain image(i, j) = max_n [ min_r Hp(image)(i, j) ].
(8) Transform the edge image in the fuzzy-domain matrix back into the pixel domain, i.e., into the interval [0, 255] (multiply by 255); set a threshold and apply morphological operators (e.g., those of MATLAB).
The proposed system was tested with different images and its performance compared with existing edge detection algorithms; it was observed that this algorithm produces much more distinctly marked edges, and thus better visual appearance, than the standard existing ones. Five 350 x 350 pictures were taken into consideration. Figure 1 shows the image used for testing; for various threshold values the pixel ranges, together with results on various pictures from the Berkeley dataset, are shown in Figure 2.
Figure 1: Test image
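The paper does not reproduce all 16 templates or the exact divergence formula, so the Python sketch below is only a loose, hedged reading of steps (3)–(8): it uses the printed template t1 plus its rotations, the hesitation constant c = 0.2 from step (5), and a simple absolute-difference fuzzy membership in place of the unspecified divergence measure. All names and the threshold value are illustrative assumptions, not the authors' code.

import numpy as np

a, b, c = 0.3, 0.8, 0.2                           # template values and hesitation degree from the text
t1 = np.array([[a, a, a], [0, 0, 0], [b, b, b]])
templates = [np.rot90(t1, k) for k in range(4)]   # the paper uses 16 directional templates; 4 shown here

def fuzzy_edge_map(image, threshold=0.5):
    f = image.astype(float) / max(float(image.max()), 1.0)   # step (4): map into [0, 1]
    H, W = f.shape
    out = np.zeros_like(f)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            win = f[i-1:i+2, j-1:j+2]                        # step (6): 3x3 window
            vals = []
            for t in templates:
                p = 1.0 - np.abs(win - t)                    # fuzzy membership in the template
                p = p + c * (1.0 - p)                        # step (5): add the hesitation share
                vals.append(p.min())                         # min over the 9 positions (r)
            out[i, j] = max(vals)                            # step (7): max over templates (n)
    return (out * 255.0) > threshold * 255.0                 # step (8): pixel domain + threshold

img = np.zeros((32, 32)); img[:, 16:] = 200                  # vertical step edge
print(fuzzy_edge_map(img)[16, 13:19])                        # True only next to the step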

It can be observed that the output generated by the fuzzy method finds the edges of the image more distinctly than those found by the threshold edge detection algorithm. Thus the fuzzy rule-based system provides better edge detection, with an exhaustive set of fuzzy conditions that helps extract edges with very high efficiency.

VI. TRENDS IN SURVEY AREA
Clustering: there are three phases in color image segmentation.
Phase 1: Preprocessing. Morphological methods are applied to remove noise from the images and to smooth spots on uniform patterns.
Phase 2: Transformation. Color space transformation methods are used to convert between RGB and other color spaces; the average intra-cluster distance is a traditional criterion applied for this transformation.


Phase 3: Segmentation. A clustering algorithm such as K-means is applied to find appropriate cluster numbers and to segment images in different color spaces; the cluster with the maximum average variance is split into new clusters.
Figure 2: Results on pictures from the Berkeley dataset

Neural networks: Neural networks can be a useful tool for edge detection. Since a neural network edge detector is a nonlinear filter, it can have built-in thresholding capability; the filtering-plus-thresholding operation of edge detection is thus a natural application for neural network processing. An edge-detection neural network can be trained with backpropagation using relatively few training patterns. The most difficult part of any neural network training problem is defining the proper training set, and a simple method exists for the edge detection training problem. Not only can neural networks be trained to detect edges, they can also be designed from scratch, without the necessity for training: the weights of the network can be selected to match the characteristics of a chosen linear edge detection filter, and the addition of a bias and a sigmoid nonlinearity on the output produces an "engineered" neural network (a minimal sketch follows below).

Segmentation approach: The watershed model, based on mathematical morphological operators, is another budding technology with respect to application in remote sensing image segmentation, and further research on this approach is required. The selection of a segmentation approach depends on what quality of segmentation is required, and also on what scale of information is required. A fuzzy model is a good choice for representing the ambiguity of region boundaries; a neural model is a good choice when no prior distribution can be assumed and very high-quality object information is not required. Among homogeneity measures, spectral properties, shape, size, scale, compactness and texture should be considered when complex landscapes are to be analyzed.
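As a concrete illustration of that last point, the sketch below builds such an "engineered" neuron with no training at all: its weights are fixed to a horizontal Sobel kernel (our choice of linear edge filter), and a bias plus a sigmoid output nonlinearity supply the built-in thresholding. The gain and bias values are illustrative assumptions.

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Neuron weights selected to match a linear edge filter (horizontal Sobel kernel)
K = np.array([[-1.0, 0.0, 1.0],
              [-2.0, 0.0, 2.0],
              [-1.0, 0.0, 1.0]])
GAIN, BIAS = 4.0, -2.0    # illustrative: the bias shifts the sigmoid's decision point

def neuron_edge_map(image):
    img = image.astype(float) / 255.0
    rows, cols = img.shape
    out = np.zeros_like(img)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            u = np.sum(img[i-1:i+2, j-1:j+2] * K)       # the neuron's weighted input sum
            out[i, j] = sigmoid(GAIN * abs(u) + BIAS)   # bias + sigmoid give a soft threshold
    return out

img = np.zeros((16, 16)); img[:, 8:] = 255              # vertical step edge
print(neuron_edge_map(img)[8, 6:11].round(2))           # strong response at the step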

Neural networks: Neural networks can be a useful tool for edge detection. Since a neural network edge detector is a nonlinear filter, it can have a built-in thresholding capability. Thus the filtering, thresholding operation of edge detection is a natural application for neural network processing. An edge-detection neural network can be trained with backpropagation using relatively few training patterns. The most difficult part of any neural network training problem is defining the proper training set. A simple method is given for the edge detection training problem. Not only can neural networks be trained to detect edges, they can also be designed from scratch, without the necessity for training. The weights of the network can be selected to match the characteristics of your favorite linear edge detection filter. The addition of a bias, and a sigmoid non-linearity on the output produces an “engineered” neural network Segmentation approach: Watershed model based on mathematical morphological operators is another budding technology with respect application in remote sensing image segmentation. Further, research on this approach is required. The selection of segmentation approach depends on what quality of segmentation is required. Further, it also depends on what scale of information is required. Fuzzy model would be good choice to represent ambiguity of region boundaries. Neural model would be good choice no prior distribution can be assumed and not very high quality object information is required. Among homogeneity measures, spectral, shape, size, scale, compactness and texture should be concerned when complex landscapes are to be analyzed. VII. CONCLUSIONS AND FURTHER RESEARCH DIRECTIONS Soft computing deals with approximate models and gives solution to complex problems. Color image segmentation is an important and is used in many image processing applications. Color image segmentation increases the complexity of the problem. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. The main aim is to compare the various conventional algorithms and soft computing approaches i.e. fuzzy logic, neural network and genetic algorithms for color image segmentation. Finally, some nice and elegant results are expected to follow which will be having a natural and valuable application in the field of image processing. The edge detectors in question are especially the Sobel and Canny edge detection operators, as well as the 5x5 median, mean and α -trimmed mean edge detectors. These edge detectors were developed with the Euclidean distance in mind. They have shown good performance in the presence of noise. However, their performance on a wide range of color images is not known. Using the Euclidean distance as the similarity measure, one could use the principal eigenvector as the class prototype. For color clustering, it is necessary to devise a way to effectively combine the benefits of k-means/Euclidean distance and Mixture of Principal Components / vector angle algorithms. Furthermore, more experimentation is needed to determine practical applications of this work in the color domain.




Selection of Industrial Robots Using the Complex Proportional Assessment Method
Dayanand Gorabe, Dnyaneshwar Pawar, Nilesh Pawar
Students (Department of Mechanical Engineering), Sinhgad Institute of Technology, Lonavala, Maharashtra, INDIA

Abstract: Multi-attribute decision making (MADM) methods are gaining importance for selecting the best robot from among the available robots by considering the important attributes of a robot. Manufacturers are continuously incorporating advanced features and facilities into robots, and due to this increasing complexity it has become more complicated to select a robot for a specific industrial application. In this problem, seven robots, each having five attributes, are considered, and the problem is solved by the Complex Proportional Assessment (COPRAS) method. The ranks of the robots are calculated, and it is found that the Cincinnati Milacrone T3-726 is the best robot among the considered alternatives.
Keywords: Robot selection, COPRAS, MADM, Preference ranking method, Ranking

I. Introduction
Robots can perform repetitious, difficult and hazardous tasks with precision, and are therefore an important tool for diverse industrial applications like material handling, spray painting, welding and machine loading. Industrial robots are usually costly and have many characteristics, so their selection calls for a careful examination and assessment of the requirements. The recent growth of information technology and the engineering sciences has been the key reason for the increased utilization of robots. Control resolution, accuracy, repeatability, load carrying capacity, degrees of freedom, man–machine interfacing ability, maximum tip speed, memory capacity and supplier's service quality are the most important attributes to be taken into consideration while selecting an industrial robot for a particular application [1]. The selection of a robot to suit a particular application and production environment, from the large number of robots available in the market today, has become a difficult task. In order to evaluate the overall effectiveness of the candidate alternatives and to select the best option, a multi-criteria decision making method requires decomposing the problem into steps such as defining a set of attributes which most influence the alternatives, preparing a decision matrix, weighting the criteria based on past experience or using an appropriate method, evaluating the alternatives, and ranking them from best to worst. The literature shows some applications of multi-criteria analysis in selection problems. Multiple attribute decision making (MADM) addresses the problem of choosing an optimum choice containing the highest degree of satisfaction from a set of alternatives which are characterized in terms of their attributes [2]. Voogd established multi-criteria evaluation in environmental planning, used to site sludge processing plants, to organize water resource plans, to develop groundwater management strategies, and to prioritize watersheds for implementation of watershed restoration measures [3, 4]. A state-of-the-art survey of MCDA was presented by Martel and Matarazzo [5]. A comparison of multiple-criteria analysis techniques for water resource management, where decisions are typically guided by multiple objectives measured in a range of financial and non-financial units and where outcomes are often highly intangible (biodiversity, recreation, scenery and human health), was carried out by Hajkowicz and Higgins [6]. Attribute-based specification, comparison and selection of a robot was presented by P. P. Bhangale, who likewise observed that selecting a robot to suit a particular application and production environment from the large number of robots available in the market has become a difficult task [7]. This paper presents the selection of industrial robots using the COPRAS method, and the rankings of the robots are found.

II. Complex Proportional Assessment Method
This method selects the best decision considering both the ideal and the ideal-worst solutions. It takes into account the performance of the alternatives with respect to the different criteria and also the corresponding criteria weights. The COPRAS (Complex Proportional Assessment) method assumes direct and proportional dependence of the significance and utility degree of the available alternatives on a system of mutually conflicting criteria (Kaklauskas et al. 2006; Kaklauskas et al. 2007; Zavadskas et al. 2008) [8]. The COPRAS method, which is used here for evaluating and selecting among the alternative industrial robots, adopts a stepwise procedure for ranking and evaluating the alternatives in terms of their significance and utility degree.


III. Problem Statement and Solution
Selection of a robot for a specific industrial application is one of the most challenging problems in a real-time manufacturing environment. It has become more and more complicated due to the increase in complexity, advanced features and facilities that are continuously being incorporated into robots by different manufacturers. Manufacturing environment, product design, production system and cost involved are some of the most influential factors that directly affect the robot selection decision. The decision maker needs to identify and select the best-suited robot in order to achieve the desired output with minimum cost and the required application ability. This example deals with the selection of the most appropriate industrial robot for some pick-and-place operations where it has to avoid certain obstacles. The performance of an industrial robot is often specified using different attributes; repeatability, accuracy, load capacity and velocity are observed to be the most important attributes affecting the robot selection decision. In this example five robot selection attributes are considered: load capacity (LC), maximum tip speed (MTS), repeatability (RE), memory capacity (MC) and manipulator reach (MR). Load capacity, maximum tip speed, memory capacity and manipulator reach are beneficial attributes (higher values are desirable), whereas repeatability is a non-beneficial attribute (a lower value is preferable). Thus, the industrial robot selection problem consists of five criteria and seven alternative robots. Rao [9] estimated the criteria weights as WLC = 0.036, WRE = 0.192, WMTS = 0.326, WMC = 0.326, WMR = 0.120 using the analytic hierarchy process (AHP).

Table I – Quantitative data for the industrial robot selection problem

SR NO. | Robot | Load Capacity (kg) | Repeatability (mm) | Max. tip speed (mm/s) | Memory capacity (points) | Manipulator reach (mm)
1 | ASEA-IRB 60/2 | 60 | 0.4 | 2540 | 500 | 990
2 | Cincinnati Milacrone T3-726 | 6.35 | 0.15 | 1016 | 3000 | 1041
3 | Cybotch V15 Electric Robot | 6.8 | 0.10 | 1727.2 | 1500 | 1676
4 | Hitachi American Process Robot | 10 | 0.2 | 1000 | 2000 | 965
5 | Unimation PUMA 500/600 | 2.5 | 0.10 | 560 | 500 | 915
6 | United States Robots Maker110 | 4.5 | 0.08 | 1016 | 350 | 508
7 | Yaskawa Electric Motoman | 3 | 0.1 | 1778 | 1000 | 920

The steps of the COPRAS method are presented below.

Step 1: Normalize the decision matrix using the linear normalization procedure (Kaklauskas et al., 2006):
r_ij = x_ij / Σ_i x_ij    (1)

Table II – Normalized decision matrix

Robot | LC | RE | MTS | MC | MR
1 | 0.6441 | 0.3539 | 0.2635 | 0.0564 | 0.1411
2 | 0.0681 | 0.1327 | 0.1054 | 0.3389 | 0.1483
3 | 0.0730 | 0.0884 | 0.1792 | 0.1694 | 0.2389
4 | 0.1073 | 0.1769 | 0.1037 | 0.2259 | 0.1375
5 | 0.0268 | 0.0884 | 0.0581 | 0.0564 | 0.1304
6 | 0.0483 | 0.0707 | 0.1054 | 0.0395 | 0.0724
7 | 0.0322 | 0.0884 | 0.1844 | 0.1129 | 0.1311

Step 2: Determine the weighted normalized decision matrix D:
D = [y_ij]_(m x n),  y_ij = r_ij × w_j    (2)
(i = 1, 2, ..., m; j = 1, 2, ..., n)

Table III – Weighted normalized decision matrix

Robot | LC | RE | MTS | MC | MR
1 | 0.0232 | 0.0679 | 0.0859 | 0.0184 | 0.0169
2 | 0.0024 | 0.0254 | 0.0343 | 0.1105 | 0.0178
3 | 0.0026 | 0.0169 | 0.0584 | 0.0552 | 0.0286
4 | 0.0038 | 0.0339 | 0.0338 | 0.0736 | 0.0165
5 | 0.0009 | 0.0169 | 0.0189 | 0.0184 | 0.0156
6 | 0.0017 | 0.0135 | 0.0343 | 0.0128 | 0.0086
7 | 0.0011 | 0.0169 | 0.0601 | 0.0368 | 0.0157


Step 3: The sums of the weighted normalized values are calculated for both the beneficial attributes and the non-beneficial attributes, using the following equations:
S_+i = Σ_j y_+ij    (3)
S_-i = Σ_j y_-ij    (4)
S_+ = Σ_i S_+i    (5)
S_- = Σ_i S_-i    (6)

Table IV – Sums of the weighted normalized values

Robot | S+i | S-i
1 | 0.1444 | 0.0679
2 | 0.1651 | 0.0254
3 | 0.1449 | 0.0169
4 | 0.1278 | 0.0339
5 | 0.0539 | 0.0169
6 | 0.0576 | 0.0135
7 | 0.1187 | 0.0169

Step 4: Determine the significance of the alternatives on the basis of the positive (S+i) and negative (S-i) characteristics.
Step 5: Determine the relative significance or priority (Qi) of each alternative. In the standard COPRAS formulation this is
Q_i = S_+i + (S_-min × Σ_i S_-i) / (S_-i × Σ_i (S_-min / S_-i))    (7)
where S_-min is the smallest of the S_-i values.
Step 6: Calculate the quantitative utility (Ui) for the i-th alternative:
U_i = (Q_i / Q_max) × 100%    (8)
where Q_max is the maximum relative significance value.

Table V – Qi and Ui values

Robot | Qi | Ui (%) | Rank
1 | 0.1529 | 81.4659 | 3
2 | 0.1877 | 100.0000 | 1
3 | 0.1788 | 95.2772 | 2
4 | 0.1448 | 77.1402 | 5
5 | 0.0878 | 46.8031 | 7
6 | 0.1000 | 53.2918 | 6
7 | 0.1477 | 78.7103 | 4
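Since Steps 1–6 are purely mechanical, the whole calculation can be scripted. The Python sketch below reruns it from the Table I data and the AHP weights quoted above; it is an illustration of the method as described here, and small rounding differences from the printed Q and U values are to be expected (the resulting rank order matches Table V).

import numpy as np

# Decision matrix from Table I (rows: robots 1-7; columns: LC, RE, MTS, MC, MR)
X = np.array([
    [60.00, 0.40, 2540.0,  500.0,  990.0],
    [ 6.35, 0.15, 1016.0, 3000.0, 1041.0],
    [ 6.80, 0.10, 1727.2, 1500.0, 1676.0],
    [10.00, 0.20, 1000.0, 2000.0,  965.0],
    [ 2.50, 0.10,  560.0,  500.0,  915.0],
    [ 4.50, 0.08, 1016.0,  350.0,  508.0],
    [ 3.00, 0.10, 1778.0, 1000.0,  920.0],
])
w = np.array([0.036, 0.192, 0.326, 0.326, 0.120])       # AHP weights from Rao [9]
beneficial = np.array([True, False, True, True, True])  # RE is the only non-beneficial attribute

R = X / X.sum(axis=0)                      # Step 1: eq. (1), linear normalization
Y = R * w                                  # Step 2: eq. (2), weighted normalization
S_plus  = Y[:,  beneficial].sum(axis=1)    # Step 3: eq. (3)
S_minus = Y[:, ~beneficial].sum(axis=1)    # Step 3: eq. (4)
# Step 5: eq. (7); the S_minus.min() factors cancel but are kept to mirror the formula
Q = S_plus + (S_minus.min() * S_minus.sum()) / (S_minus * (S_minus.min() / S_minus).sum())
U = 100.0 * Q / Q.max()                    # Step 6: eq. (8)
rank = (-Q).argsort().argsort() + 1        # rank 1 = highest Q
for i in range(7):
    print(f"Robot {i+1}: Q = {Q[i]:.4f}  U = {U[i]:6.2f}%  rank = {rank[i]}")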

IV. Conclusion
For the robot end user or purchaser, this method will help to select and buy the right robot from the market, and the suggested methodology can be used for any type of selection problem having any number of attributes. According to the COPRAS ranking in Table V, alternative Cincinnati Milacrone T3-726 is the first choice, Cybotch V15 Electric Robot the second choice, ASEA-IRB 60/2 the third choice, Yaskawa Electric Motoman L3C the fourth choice, Hitachi American Process Robot the fifth choice, United States Robots Maker110 the sixth choice, and Unimation PUMA 500/600 the seventh choice. Thus, the problem helps one to understand the importance of multi-criteria decision making and the proper ranking of alternatives from a set of attributes for industrial robot selection. The methodology is capable of taking into account the important requirements of industrial robots, and it strengthens the existing procedure by proposing a logical and rational method of evaluation and selection rather than a selection done groundlessly.

References

[1] P. Chatterjee, V. M. Athawale, S. Chakraborty, "Selection of industrial robots using compromise ranking and outranking methods," Robotics and Computer-Integrated Manufacturing, vol. 26, pp. 483–489, 2010.
[2] P. Chatterjee, V. M. Athawale, S. Chakraborty, "Material selection using Complex Proportional Assessment and Evaluation of Mixed Data methods," Materials and Design, Vol. 32, No. 2, pp. 851–860, 2011.
[3] H. Voogd, "Multicriteria evaluation with mixed qualitative and quantitative data," Environment and Planning B, Vol. 9, No. 2, 1982, pp. 221–236.
[4] H. Voogd, Multicriteria Evaluation for Urban and Regional Planning, Pion, London, 1983.
[5] J. M. Martel, B. Matarazzo, "Other outranking approaches," in: F. J. Salvatore and G. M. Ehrgott (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys, Springer, New York, pp. 197–262, 2005.
[6] S. Hajkowicz, A. Higgins, "A comparison of multiple criteria analysis techniques for water resource management," European Journal of Operational Research, Vol. 184, No. 1, pp. 255–265, 2008.
[7] P. P. Bhangale, V. P. Agrawal, S. K. Saha, "Attribute based specification, comparison and selection of a robot," Mechanism and Machine Theory, vol. 39, pp. 1345–1366, 2004.
[8] P. Chatterjee, S. Chakraborty, "Flexible manufacturing system selection using preference ranking methods: A comparative study," International Journal of Industrial Engineering Computations, vol. 5, 2014.
[9] R. V. Rao, Decision Making in the Manufacturing Environment Using Graph Theory and Fuzzy Multiple Attribute Decision Making Methods, Springer-Verlag, pp. 3–6, 2007.


Acknowledgments
It gives us great pleasure to present this work on 'Selection of industrial robots using the Complex Proportional Assessment method'. A number of hands helped us, directly and indirectly, in preparing it, and it is our duty to express our gratitude towards them.


Study and Review of Genetic Neural Approaches for Data Mining
Nilakshi P. Waghulde and Nilima Patil
CSE Department, SSBT's COET Bambhori, North Maharashtra University, Jalgaon, Maharashtra, INDIA

Abstract: Data mining techniques are used to explore, analyze and extract data using complex algorithms in order to discover unknown patterns. The neural network, one of the data mining techniques, has been proved to be a universal approximator. A neural network is able to learn a mapping between input and output nodes, while the hidden nodes and the weights between them contain the internal representation of the input; training the network yields only local convergence. As the initialization of neural network weights is a blind process and the neural network is slow to converge, it is difficult to find a globally optimal solution. A fixed structure may not provide optimal performance within the training period, so the number of hidden layers and hidden nodes of a particular neural network also plays an important role. Hence, this paper presents a genetic neural network technique that takes advantage of the global optimization of the genetic algorithm for initialization of the neural network, along with calculation of the number of hidden nodes and hidden layers, so that the network is trained with a proper selection of the neural network architecture.
Keywords: Neural network, genetic algorithm, data mining

I. Introduction
Data mining is one of the most important steps of the knowledge discovery in databases (KDD) process and is considered a significant subfield of knowledge management [1]. Data mining, an interdisciplinary subfield of computer science, is the computational process of discovering patterns in large datasets, involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. Common data mining techniques include decision trees, neural networks, rule induction, nearest neighbour methods and genetic algorithms. An artificial neural network, a data mining technique, is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation [2]. Neural networks are a form of multiprocessor computing system, with simple processing elements, a high degree of interconnection, simple scalar messages, and adaptive interconnection between elements. Thus, neural computing is an information processing paradigm, inspired by biological systems, composed of a large number of highly interconnected processing elements (neurons) working in unison to solve a specific problem. Neural networks are models of biological neural structures. The starting point for most neural networks is a model neuron, as in Figure 1. This neuron consists of multiple inputs and a single output. Each input is modified by a weight, which multiplies with the input value. The neuron combines these weighted inputs and, with reference to a threshold value and an activation function, uses these to determine its output.


A. Neural Network Architecture
A neural network is usually a layered graph with the output of one node feeding into one or many other nodes in the next layer. An artificial neural network architecture is a data processing system, consisting of a large number of interconnected processing elements (neurons), in a network structure that can be represented by a directed graph G, an ordered 2-tuple (V, E) consisting of a set V of vertices and a set E of edges. The vertices represent neurons (input/output) and the edges represent synaptic links labeled by the attached weights [3]. The layers are arrayed one succeeding the other, so that there is an input layer, multiple intermediate layers and finally an output layer, as in Figure 2. Intermediate layers, i.e., those that have no inputs or outputs to the external world, are called hidden layers. In a fully connected network, each neuron is connected to every output from the preceding layer (or to one input from the external world if the neuron is in the first layer) and, correspondingly, each neuron has its output connected to every neuron in the succeeding layer.

Figure 2: Multi-Layer Feedforward Network

The output of each neuron is a function of its inputs. In particular, the output of the j-th neuron in any layer is described by two equations:
U_j = Σ_i (X_i × w_ij)    (Eqn 1)
Y_j = F_th(U_j + t_j)    (Eqn 2)
For every neuron j in a layer, each of the i inputs X_i to that layer is multiplied by a previously established weight w_ij. These are all summed together, resulting in the internal value U_j of this operation. This value is then biased by a previously established threshold value t_j and sent through an activation function F_th. This activation function is usually the sigmoid function, which has the input-to-output mapping shown in Figure 3. The resulting output Y_j is an input to the next layer, or is a response of the neural network if it is the last layer.

Figure 3: Sigmoid Function
In essence, Equation 1 implements the combination operation of the neuron and Equation 2 implements the firing of the neuron [3].
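A minimal Python rendering of Eqn 1 and Eqn 2 for a single neuron is given below; the input, weight and threshold values are purely illustrative.

import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def neuron_output(x, w, t):
    u = np.dot(x, w)         # Eqn 1: weighted sum of the inputs
    return sigmoid(u + t)    # Eqn 2: bias by threshold t, then activation F_th

x = np.array([0.5, 0.2, 0.9])      # example inputs X_i
w = np.array([0.4, -0.7, 0.1])     # example weights w_ij
print(neuron_output(x, w, t=0.3))  # scalar output Y_j in (0, 1)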


From these equations, a predetermined set of weights, a predetermined set of threshold values and a description of the neural network it is possible to compute the response of the neural network to any set of inputs. The weights in artificial neurons are adjusted during training process.[4] Neural network acquires knowledge through learning. In general, the learning steps of a neural network are as follows. First, a network structure is defined with a fixed number of inputs, hidden nodes and outputs. Second, an algorithm is chosen to realize the learning process. The recent vast research activities in neural classification have established that neural networks are a promising alternative to various conventional classification methods.[5] Although significant progress has been made in classification related areas of neural networks, a number of issues in applying neural networks still remain and have not been solved successfully or completely. Neural network trains the input data pattern through different layers with local convergences which does not provides optimal solution to the problem. The network architecture plays important role in training the input data. Thus proposed system in this paper focus on uses of genetic algorithm for finding the global optimal solution along with the proper selection of the neural network architecture. The rest of the paper is organized as follows. In Section II, neural network structures and learning processes for data mining. In Section III, the Genetic Neural Network with proper selection of neural network architecture for data mining is proposed. II. LITERATURE SURVEY Neural networks are trained by using various learning techniques. The learning process of neural network uses different approaches for knowledge discovery. Different researchers have studied different neural networks learning approaches using different optimization techniques. As the neural network learn about the input data by its own this leads to mine the data to discover pattern. Neural network process the data pattern between input and output layer along with the hidden layers. The synaptic weights initialization and hidden layer identification is done by various ways to have optimal convergence. Frank H. F. Leung, H. K. Lam, S. H. Ling, and Peter K. S. Tam[6] proposed a three-layer neural network with switches introduced in some links is proposed to facilitate the tuning of the network structure in a simple manner. The number of hidden nodes is chosen manually by increasing it from a small number until the learning performance in terms of fitness value is good enough. Teresa B. Ludermir, Akio Yamazaki, and Cleber Zanchettin[7] developed the approach that combines the advantages of simulated annealing, tabu search and the backpropagation training algorithm in order to generate an automatic process for producing networks with high classification performance and low complexity. Chi-Keong Goh, Eu-Jin Teoh, and Kay Chen Tan[8] described a hybrid multiobjective evolutionary approach to ANN design which stochastic or performance driven and estimate the necessary number of neurons to be used in training of only single-hidden-layer feedforward neural network. David Hunter, Hao Yu, Michael S. Pukish, Janusz Kolbusz, and Bogdan M. Wilamowski. 
[9] compared different learning algorithms, namely the Error Back Propagation (EBP) algorithm, the Levenberg-Marquardt (LM) algorithm and the recently developed Neuron-by-Neuron (NBN) algorithm, across different network topologies, including traditional Multilayer Perceptron (MLP) networks, Bridged Multilayer Perceptron (BMLP) networks and Fully Connected Cascade (FCC) networks. They used a trial-and-error approach to find the number of layers and the number of neurons in each hidden layer. Of these, FCC architectures proved the most powerful, having the largest number of connections across layers per neuron; since the power of an FCC network increases dramatically with the number of neurons, usually only a few trials are required for FCC to provide an architecture with acceptable training error and the smallest number of neurons. Syed Umar Amin, Kavita Agarwal and Dr Rizwan Beg [10] developed a hybrid system that uses the global optimization advantage of a genetic algorithm to initialize the neural network weights, then trains the network with the backpropagation algorithm starting from the weights optimized by the genetic algorithm. Determining the structure of the network is challenging: a small network may not perform well owing to its limited information-processing power, while a large network may have redundant connections. Because the structure of the network plays such an important role, the proposed approach makes it possible to determine the number of inputs, layers and hidden neurons of the neural network and then to initialize the neural weights by genetic algorithm.

III. Proposed System
The architecture design of a neural network can be formulated as an optimization problem, and optimizing network architectures and weights together is an attractive way of generating efficient networks. Since a neural network learns by mapping inputs to outputs through its hidden layers, with the weights as internal parameters, until it converges (possibly only locally), it is important to select a proper size and topology for the network and a proper weight initialization.
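As a concrete illustration of the idea, and not the authors' implementation, the sketch below evolves a population of candidate weight vectors for a tiny 2-2-1 network with a genetic algorithm. The toy data, the fitness definition, the population size and the mutation rate are all assumptions made for the example.

    set.seed(1)
    X <- cbind(1, matrix(runif(40), ncol = 2))   # 20 toy samples with a bias column
    y <- as.numeric(X[, 2] + X[, 3] > 1)         # toy binary target
    sigmoid <- function(z) 1 / (1 + exp(-z))
    forward <- function(w, X) {                  # 2-2-1 network: 3*2 + 3 = 9 weights
      H <- cbind(1, sigmoid(X %*% matrix(w[1:6], 3, 2)))
      sigmoid(H %*% w[7:9])
    }
    fitness <- function(w) 1 / (1 + mean((y - forward(w, X))^2))   # higher = fitter
    pop <- matrix(runif(50 * 9, -1, 1), nrow = 50)   # 50 candidate weight vectors
    for (gen in 1:100) {
      fit <- apply(pop, 1, fitness)
      parents <- pop[sample(50, 50, replace = TRUE, prob = fit), ]   # selection
      cut <- sample(1:8, 1)                                          # one-point crossover
      child <- cbind(parents[1:25, 1:cut, drop = FALSE],
                     parents[26:50, (cut + 1):9, drop = FALSE])
      child <- child + matrix(rnorm(25 * 9, sd = 0.1), 25, 9)        # mutation
      pop <- rbind(parents[1:25, ], child)
    }
    w0 <- pop[which.max(apply(pop, 1, fitness)), ]   # GA-optimised initial weights

The best weight vector found in this way would then seed backpropagation training, in line with the hybrid scheme attributed to [10].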


Figure 4: Proposed system. The flowchart of the proposed system comprises the following steps:
1. Take the dataset.
2. Calculate the number of input and output nodes, the number of hidden layers and the number of hidden-layer nodes.
3. Use the fitness function for the initialization of the synaptic weights.
4. For each pattern in the dataset, repeat until convergence: train the network using the backpropagation learning technique.

The proposed genetic-neural system shown in Figure 4 uses the genetic algorithm's fitness function to initialize the weights, which makes globally optimal convergence possible. The network architecture is constructed by identifying the input- and output-layer neurons along with the number of hidden layers and hidden nodes. When this improved genetic neural network approach is applied to a data set, it can construct the network according to the input data and initialize the weights through the fitness function of the genetic algorithm. A genetic algorithm is an adaptive heuristic search algorithm based on the evolutionary ideas of natural selection and genetics; the Genetic Neural Network thus takes advantage of the global optimization of the genetic algorithm for initializing the neural network.

IV. Conclusion
In this paper, a genetic-neural approach is proposed that uses neural networks and a genetic algorithm to optimize the connection weights of the ANN, improving the performance of the artificial neural network while also determining the structure of the network. The proposed method can provide fast, stable learning and can select a neural network architecture with acceptable training error and the smallest number of neurons. With an appropriate dataset, the proposed method can be implemented in future work.

References

[1] Tipawan Silwattananusarn and Kulthida Tuamsuk, "Data Mining and Its Applications for Knowledge Management: A Literature Review from 2007 to 2012", International Journal of Data Mining & Knowledge Management Process (IJDKP), vol. 2, no. 5, September 2012.
[2] Hongjun Lu, Rudy Setiono and Huan Liu, "Effective Data Mining Using Neural Networks", IEEE Transactions on Knowledge and Data Engineering, vol. 8, no. 6, December 1996.
[3] S. Rajasekaran and G. A. Vijayalakshmi Pai, "Neural Networks, Fuzzy Logic and Genetic Algorithms: Synthesis and Applications", Eighth Economy Edition, PHI, 2003.
[4] B. M. Wilamowski, "Neural Network Architectures and Learning Algorithms", IEEE Industrial Electronics Magazine, vol. 3, no. 4, pp. 55-63, 1994.
[5] Guoqiang Peter Zhang, "Neural Networks for Classification: A Survey", IEEE Transactions on Systems, Man, and Cybernetics, vol. 30, no. 4, November 2000.


[6] Frank H. F. Leung, H. K. Lam, S. H. Ling and Peter K. S. Tam, "Tuning of the Structure and Parameters of a Neural Network Using an Improved Genetic Algorithm", IEEE Transactions on Neural Networks, vol. 14, no. 1, January 2003.
[7] Teresa B. Ludermir, Akio Yamazaki and Cleber Zanchettin, "An Optimization Methodology for Neural Network Weights and Architectures", IEEE Transactions on Neural Networks, vol. 17, no. 6, November 2006.
[8] Chi-Keong Goh, Eu-Jin Teoh and Kay Chen Tan, "Hybrid Multiobjective Evolutionary Design for Artificial Neural Networks", IEEE Transactions on Neural Networks, vol. 19, no. 9, September 2008.
[9] David Hunter, Hao Yu, Michael S. Pukish, Janusz Kolbusz and Bogdan M. Wilamowski, "Selection of Proper Neural Network Sizes and Architectures: A Comparative Study", IEEE Transactions on Industrial Informatics, vol. 8, no. 2, May 2012.
[10] Syed Umar Amin, Kavita Agarwal and Dr. Rizwan Beg, "Genetic Neural Network Based Data Mining in Prediction of Heart Disease Using Risk Factors", in Proc. IEEE Conference on Information and Communication Technologies (ICT), 2013.

Acknowledgments
The authors feel a deep sense of gratitude to Prof. Dr. G. K. Patnaik, Head of the Computer Science and Engineering Department, for his motivation and support during this work. The authors are also thankful to SSBT's COET, Jalgaon, for being a constant source of inspiration.




Evaluation of Resistance Offered by 304 Stainless Steel Alloy Against Corrosion and Pitting in H3PO4-HCl Medium by Determination and Study of a Corrosion Resistance Parameter
Rita Khare
Department of Chemistry, Government Women's College, Gardanibagh, Patna, INDIA

Abstract: A parameter for evaluating the resistance offered by 304 stainless steel alloy against corrosion and pitting in H3PO4-HCl medium is suggested. The parameter is calculated from the anodic parameters obtained from potentiodynamic polarization curves. The corrosion resistance parameter (Rp) is found to vary with temperature and with the concentration of HCl, and there is a direct relationship between Rp and pitting attack on the alloy surface. The minimum value of Rp above which the presence of chloride ions in the form of HCl may be tolerated without pitting attack depends on the temperature of the medium; at a fixed temperature there is a minimum value of Rp below which pitting starts, depending on the concentration of HCl in the medium.
Key words: Stainless steel, corrosion, phosphoric acid, corrosion resistance, anodic parameters.

I. Introduction
Austenitic stainless steels are extensively used as construction materials for containers during the production, storage and transport of phosphoric acid, which is utilized on a large scale in the production of phosphatic fertilizers. Stainless steels offer good corrosion resistance in pure phosphoric acid, but in practice phosphoric acid is often contaminated with small amounts of aggressive ions in the form of HCl, which enters as an impurity during production by the acidulation process. In a phosphoric acid-HCl medium, 304 SS loses its corrosion resistance, i.e., its ability to withstand this environment without disintegration, owing to the presence of aggressive ions, and often undergoes pitting attack. This limits the utilization of this important alloy in the studied medium. Earlier studies have examined pitting and stress corrosion of other stainless steel alloys in other media [1, 2], but studies on 304 SS in phosphoric acid-HCl medium are rare. The present work evaluates and studies a corrosion resistance parameter for 304 SS in concentrated phosphoric acid at different concentrations of HCl and at different temperatures.

II. Experimental

The composition of 304 SS was 18 Cr, 10 Ni and balance Fe. The electrode system consisted of an austenitic stainless steel working electrode, a platinum counter electrode and a saturated calomel electrode with a KNO3 salt bridge. Electrochemical experiments were conducted in an air thermostat maintained at 298, 308 and 318 K under still conditions. The solution contained 14 M phosphoric acid with different concentrations of HCl. Potentials were impressed on the working electrode (area 1 cm²) by a fast-rise Wenking Model POS 73 potentioscan, and potentiodynamic polarization curves were recorded starting from the open circuit potential with a scan rate of 1 mV/s. The anodic polarization curves are of a similar nature irrespective of HCl concentration and temperature [3]: an active zone is followed by a passive region, which enters a transpassive region at nobler potentials. The anodic parameters, i.e., the passivation potential (Ep), passive current density (ip) and breakdown potential (Eb), are obtained from the anodic polarization curves and are shown in Tables 1, 2 and 3 for 298, 308 and 318 K respectively. The newly introduced parameter Rp is also calculated at various concentrations of HCl in the 14 M phosphoric acid medium; Table 4 shows the corrosion resistance parameter Rp at 298, 308 and 318 K.

III. Results and Discussion
The anodic polarization curves show that passivation starts at Epp and is completed at Ep. If the potential is increased beyond Ep, the current density does not increase; rather, it remains constant until Eb is approached.


SEM analysis of the stainless steel samples shows that in the potential range Ep to Eb a passive film is present on the alloy surface. Passivity continues throughout the passive range of potential (Ep to Eb), and breakdown of the passive film occurs at Eb. Passive film formation is influenced by the chloride ion concentration and the temperature of the medium, which alter the quality (composition and porosity) of the film. The resistance offered by the passive film against corrosion is decided by the difference between the rate of formation of vacancies and the rate of their disappearance into the bulk [4]. Within the passive range of potential, the rate of formation of vacancies is compensated by the rate of their disappearance; at and beyond Eb, the rate of formation exceeds the rate of disappearance, so cation vacancies pile up, creating voids that lead to the formation of holes (pits). As passivity is maintained from Ep to Eb, the change in resistance offered by the alloy (i.e., by the passive film) over this range decides the resistance offered against pitting corrosion:

Rp = (Eb/ip) - (Ep/ip) = (Eb - Ep)/ip

From the anodic polarization curves it is clear that the passive range (Eb - Ep) measures how far the potential must be raised to nobler values before the passive state is lost and localized breakdown of the passive film starts. Hence the range Ep to Eb decides the extent of resistance offered by the alloy against localized corrosion: the larger the range, the greater the resistance offered by the passive film. Hence

Rp ∝ (Eb - Ep)

The passive current density (ip) is the current flowing through the passive film [5]. With increasing concentration of HCl in the medium, ip is found to increase. This variation is attributed to chloride ions competing with oxide ion adsorption at the film-solution interface, creating defects in the passive film; at higher chloride concentrations, metal-halide complexes are more likely to form, increasing the defect density of the film, and at higher temperature the chemisorption of chloride ions is facilitated, making the film still more defective and raising ip. An alloy whose passive film has a higher defect density is more prone to localized attack and corrosion than one with a compact passive film. Hence the resistance offered by 304 SS at a given halide ion concentration depends on ip, which in turn depends on the compactness of the passive film:

Rp ∝ 1/ip

The resistance offered by the alloy against corrosion [6] and pitting is related to the passive range (Eb - Ep) in the potentiodynamic polarization curves, as reported elsewhere [5]:

R = k Rp

where k is a constant. Since Rp is proportional to (Eb - Ep) and inversely proportional to ip,

Rp = (Eb - Ep)/ip, and thus R = k (Eb - Ep)/ip

Taking this into account, Rp is given the name "corrosion resistance parameter", and its values have been calculated from the anodic parameters ip, Eb and Ep. The values of Rp are shown in Tables 1, 2 and 3.
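As a quick arithmetic check, the blank-solution row of Table 1 can be reproduced in a few lines of R once the tabulated units are converted to volts and A/m²:

    # Blank solution at 298 K, values taken from Table 1
    Eb <- 1130; Ep <- -180          # potentials in mV
    ip <- 0.004e4 / 1000            # 0.004 x 10^4 mA/m^2, expressed in A/m^2
    Rp <- ((Eb - Ep) / 1000) / ip   # (V) / (A/m^2) = 32.75, i.e. the 32.8 of Table 1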
Corrosion resistance (Rp) is found to decrease with increasing concentration of HCl at 298 K, and likewise at 308 and 318 K. Up to an HCl concentration of 4000 ppm, the temperature effect shows a regular trend: corrosion resistance falls to between one-fifteenth and one-twentieth of its value as the temperature rises from 298 to 308 K, whereas it is roughly halved as the temperature rises from 308 to 318 K at a fixed HCl concentration. Critical concentrations of HCl above which pitting starts at a particular temperature are reported elsewhere [3]; the critical concentration changes by a greater amount (7600 to 3800 ppm HCl) over the temperature range 298-308 K than over 308-318 K (3800 to 2900 ppm HCl). In the lower concentration range (1000-4000 ppm HCl), Rp shows a large decrease as the temperature is raised from 298 to 308 K, and a smaller decrease from 308 to 318 K. At 298 K, pitting starts if the corrosion resistance Rp falls below 7.3 Ω·m²; the minimum values of corrosion resistance below which pitting is observed are 0.2 Ω·m² and 0.1 Ω·m² at 308 and 318 K respectively. It is inferred from the results that the minimum value of resistance below which pitting starts shows a greater shift in the 298-308 K temperature range than in the 308-318 K range.


IV. Conclusion
The susceptibility of alloy 304 SS to localized corrosive attack increases with increasing HCl concentration and temperature, as observed earlier [7]; a similar dependence of the pitting susceptibility of another stainless steel alloy has been reported previously [5]. The mathematically evaluated corrosion resistance parameter likewise decreases with increasing HCl concentration and temperature. A greater increase in susceptibility to corrosive attack is seen in the variation of the corrosion resistance parameter (Rp) when the temperature is raised from 298 to 308 K than from 308 to 318 K. Both the critical concentration of HCl beyond which pitting starts and the minimum value of Rp below which pitting starts show a greater shift in the 298-308 K range than in the higher temperature range (308-318 K). The corrosion resistance evaluated mathematically using the equation R = k (Eb - Ep)/ip is a measure of the resistance offered by stainless steel alloy 304 SS, as shown by the agreement between experimental observation and theoretical calculation.

Table 1: Anodic parameters of AISI 304 SS in 14 M phosphoric acid at 298 K with different concentrations of HCl.

Concentration of HCl (ppm) | ip (mA·m⁻² × 10⁴) | Ep (mV) | Eb (mV) | Eb - Ep (mV) | Rp = (Eb - Ep)/ip (Ω·m²)
Blank solution | 0.004 | -180 | 1130 | 1310 | 32.8
1000  | 0.006 | -120 | 950 | 1070 | 17.8
2000  | 0.008 | -100 | 940 | 1040 | 13.0
3000  | 0.008 | -60  | 920 | 980  | 12.3
4000  | 0.010 | -30  | 910 | 940  | 9.4
5000  | 0.012 | +30  | 900 | 870  | 7.3
10000 | 0.020 | +80  | 690 | 610  | 3.1
15000 | 0.060 | +100 | 680 | 580  | 0.9
20000 | 0.150 | +120 | 680 | 560  | 0.4

Table 2: Anodic parameters of AISI 304 SS in 14 M phosphoric acid at 308 K with different concentrations of HCl.

Concentration of HCl (ppm) | ip (mA·m⁻² × 10⁴) | Ep (mV) | Eb (mV) | Eb - Ep (mV) | Rp = (Eb - Ep)/ip (Ω·m²)
Blank solution | 0.02 | -190 | 1130 | 1320 | 6.6
1000  | 0.10 | -180 | 980 | 1160 | 1.2
2000  | 0.16 | -180 | 970 | 1150 | 0.7
3000  | 0.24 | -200 | 950 | 1150 | 0.5
4000  | 0.63 | -80  | 615 | 695  | 0.1
5000  | 1.00 | -60  | 605 | 665  | 0.07
10000 | 1.58 | +50  | 560 | 510  | 0.03
15000 | 1.73 | +70  | 500 | 430  | 0.02
20000 | 0.90 | +80  | 460 | 380  | 0.02

Table 3: Anodic parameters of AISI 304 SS in 14 M phosphoric acid at 318 K with different concentrations of HCl.

Concentration of HCl (ppm) | ip (mA·m⁻² × 10⁴) | Ep (mV) | Eb (mV) | Eb - Ep (mV) | Rp = (Eb - Ep)/ip (Ω·m²)
Blank solution | 0.06 | -200 | 1100 | 1300 | 2.2
1000  | 0.14 | -180 | 850 | 1030 | 0.7
2000  | 0.20 | -180 | 840 | 1020 | 0.5
3000  | 0.63 | -60  | 600 | 660  | 0.1
4000  | 0.76 | -50  | 560 | 610  | 0.08
5000  | 1.20 | -50  | 530 | 580  | 0.05
10000 | 1.72 | +40  | 430 | 390  | 0.02
15000 | 2.00 | +80  | 310 | 230  | 0.01
20000 | 2.40 | +100 | 260 | 160  | 0.01


Table 4: Variation of corrosion resistance (Rp, Ω·m²) with concentration of HCl at various temperatures.

Concentration of HCl (ppm) | Rp at 298 K | Rp at 308 K | Rp at 318 K
1000  | 17.8 | 1.2  | 0.7
2000  | 13.0 | 0.7  | 0.5
3000  | 12.3 | 0.5  | 0.1
4000  | 9.4  | 0.1  | 0.08
5000  | 7.3  | 0.07 | 0.05
10000 | 3.1  | 0.03 | 0.02
15000 | 0.9  | 0.02 | 0.01
20000 | 0.4  | 0.02 | 0.01

References
[1] E. A. Abd El Meguid and A. A. El Latiff, "Electrochemical and SEM Study on Type 254 SMO Stainless Steel in Chloride Solutions", Corros. Sci., vol. 46, pp. 2431-2444, 2004.
[2] E. A. Abd El Meguid, N. A. Mahmoud and V. K. Gouda, "Pitting Corrosion Behaviour of 316 L Steel in Chloride Containing Solutions", British Corros. J., vol. 33, no. 1, pp. 42-48, 1997.
[3] Rita Khare, M. M. Singh and A. K. Mukherjee, "The Effect of Additions of HCl on the Corrosion Behaviour of 304SS in Concentrated H3PO4 at Different Temperatures", Bulletin of Electrochemistry, vol. 11, no. 10, pp. 457-461, 1995.
[4] I. F. Lin, C. Y. Chao and D. D. Macdonald, "A Point Defect Model for Anodic Passive Films: II. Chemical Breakdown and Pit Initiation", J. Electrochem. Soc., vol. 128, issue 6, p. 1195, 1981.
[5] Rita Khare, M. M. Singh and A. K. Mukherjee, "The Effect of Temperature on Pitting Corrosion of 316SS in Concentrated Phosphoric Acid Containing Hydrochloric Acid", Indian Journal of Chemical Technology, vol. 9, no. 5, pp. 407-410, 2002.
[6] Rita Khare, "Corrosion of Materials: A Study", Ideal Research Review, vol. 27, no. ii, pp. 21-23, September 2010.
[7] Rita Khare, "Surface Analysis of Steel Samples Polarized Anodically in H3PO4-HCl Mixtures by SEM and EDAX Techniques", International Journal "Manthan", vol. 13, pp. 22-25, 2012.

Acknowledgements
The author gratefully acknowledges the fellowship awarded at the Institute of Technology, Banaras Hindu University, Varanasi, during the experimental part of this work. Insightful suggestions from Prof. M. M. Singh and Prof. A. K. Mukherjee, Department of Applied Chemistry, B.H.U., inspired the author to evaluate and discuss the above-mentioned parameter.




Export Growth and Diversification of Sri Lanka's Major Product Sectors
P.D. Talagala¹*, T.S. Talagala²
¹Department of Computational Mathematics, University of Moratuwa, Katubedda, Moratuwa, Sri Lanka
²Department of Statistics and Computer Science, University of Sri Jayewardenepura, Sri Lanka

Abstract: The growth rate is one of the most common indicators used to assess the progress of an economy in any area of economic activity. In the export sector, growth rates are widely employed to identify 'dynamic sectors', as these have important policy implications. A drawback of the existing methods, however, is that most do not measure the statistical significance of their results, while others do so under unrealistic assumptions. The first phase of this study was therefore dedicated to a new approach, based on bootstrap methodology, for testing the significance of export growth rates of major product sectors. The second phase of the study reveals that Sri Lanka has increasingly diversified its export markets by moving from exports of traditional commodities to exports of processed goods and manufactures.
Keywords: Export Sector, Growth Rate, Bootstrap Method, Destinational Concentration Index

I. Introduction
The export sector of Sri Lanka plays a vital role in fostering the socio-economic development of the country. Export development is a long-term, continuous process, so it is important to ascertain the current performance of the export sector when formulating future export development strategies. The export growth rate is one of the most commonly used indicators for assessing the development of the sector, and comparisons of such indicators across countries are of interest to producers, exporters, trade associations, investors, policymakers and trade negotiators. Unfortunately, loose statements are sometimes made about the rise, fall or constancy of export growth rates over time. In a few cases, these statements are based on fitting a particular growth curve without examining its empirical appropriateness. Another drawback of such studies is that most do not measure the statistical significance of their results, while others do so under unrealistic distributional assumptions on the parameters. To avoid these drawbacks, this study employs a novel approach, based on bootstrap methodology, to test the significance of export growth rates of major product sectors. Further, analysis of exports by country of destination yields a number of useful insights into Sri Lanka's exports.

II. Methodology
The basis used to assess performance in the current year is particularly important. At one extreme, the current year's performance could be measured against that of the previous year alone; such an assessment is not particularly useful, since the previous year may have been unusually good or bad. At the other extreme, the current year's performance could be assessed against the average performance of the previous 20 or more years; this again is unsatisfactory, as various structural and other changes could have occurred over such a long period, making 20-year averages not particularly meaningful. The approach used in the present study is therefore an intermediate one in which the current year's performance is measured against the performance during the nine years immediately preceding it.

The Compounded Annual Growth Rate (CAGR), a geometric mean growth rate on an annualized basis, is one of the most commonly used models for computing export growth rates in many countries. If continuous compounding of the series is assumed (i.e., the compounding period is infinitesimally small), then we have

Yt = A·e^(B·t)

where A is a constant, B is the CAGR (also called the exponential growth rate) during the period under review, and Yt is the value of exports at time point t. Taking natural logarithms on both sides, we get


ln(Yt) = ln(A) + B·t

Taking Y = ln(Yt) and C = ln(A), we get

Y = B·t + C

B and C can be estimated by the Ordinary Least Squares (OLS) method used in regression analysis, and the average growth rate is then (B × 100) %.
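For illustration, the OLS estimate of B takes only a few lines of R; the export series used below is hypothetical, not taken from the paper's data:

    exports <- c(120, 135, 150, 170, 186, 210, 205, 230, 265, 300)  # hypothetical values, 2002-2011
    t <- 1:10
    fit <- lm(log(exports) ~ t)   # regress ln(Yt) on t
    B <- unname(coef(fit)["t"])   # estimated slope = exponential growth rate
    B * 100                       # average growth rate in percent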

The most important statistical test in simple linear regression is whether or not the slope parameter B is equal to zero. If we conclude in any particular case that the true regression slope is "not statistically significant", this means that the estimated growth rate is not reliable and the "true" growth rate could very well be zero.

There is an extremely close relationship between confidence intervals and hypothesis testing. When a 95% confidence interval is constructed, all values in the interval are considered plausible values for the parameter being estimated, and values outside the interval are rejected as relatively implausible. If the value of the parameter specified by the null hypothesis is contained in the 95% confidence interval, the null hypothesis cannot be rejected at the 0.05 level; if it is not in the interval, the null hypothesis can be rejected at the 0.05 level. Likewise, if a 99% confidence interval is constructed, values outside the interval are rejected at the 0.01 level.

Three basic assumptions should hold for parametric tests, as described by Johnson et al. [3]: first, the tests require data that can be treated as at least interval-scaled; second, the data should be normally distributed, or nearly so; and third, the random, or error, variance should be equally distributed among the different analyses. If these assumptions are violated, non-parametric tests can be used to analyze the data [4], as non-parametric or distribution-free tests impose no conditions on the shape or character of the population distribution from which samples are drawn. In this study, only 10 data points were available for estimating the commodity-wise average export growth rates, and the basic assumptions were violated for some product sectors, so the usual parametric t test could not be used to test the significance of the parameters; moreover, a parametric approach requires a sample large enough to assess the form of the distribution. To overcome these problems, non-parametric bootstrap confidence limits for the parameter estimates were used to test the significance of the parameters, as the bootstrap imposes only a few assumptions about the shape of the distribution and is therefore more flexible than the usual parametric approaches. The country's "Coconut Fiber & Shell production" during 2002 to 2011 is considered for illustration. As no software package was available for the purpose, computer programs were developed in R version 2.13.2 (2011-09-30) for this specific task and are appended in Annexure I; MINITAB 14 was also used for the analysis.

The second phase of the study, the analysis of exports by country of destination, yields a number of useful insights into Sri Lanka's exports. One approach to analyzing destination data on exports is to assess whether exports, by individual sector and for all sectors together, are distributed over a large number of countries. Ideally, markets should be widely distributed, to reduce the vulnerability of exports to the loss or reduction of a single country's market or that of a few countries. The index of destinational concentration is a statistical parameter that measures how dispersed a country's exports are.
The higher the index of concentration, the less widely distributed are the export markets; the lower the index, the more widely distributed they are. At one extreme, if all exports go to a single country, the concentration index is 1 (or 100%); at the other extreme, the index approaches 0 (or 0%) as exports are distributed among an ever larger number of countries. The degree of destination concentration of a country's total exports can be measured using the Destinational Concentration Index, defined by

Destinational Concentration Index = [ (Xi/X)² + (Xj/X)² + ... + (Xn/X)² ]^(1/2)

where Xi = total exports to country i, Xj = total exports to country j, ..., Xn = total exports to country n (i, j, ..., n = USA, UK, Italy, ...; i ≠ j ≠ ... ≠ n), and X = total exports to all countries = Xi + Xj + ... + Xn. The same formula can also be used to calculate the destination concentration index for a single sector or sub-sector, with Xi, Xj, ..., Xn reinterpreted as the sector's exports to each country and X as the sector's total exports to all countries.
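A minimal R sketch of the index, using hypothetical destination values for a single sector:

    exports <- c(USA = 120, UK = 80, Italy = 40, Others = 60)  # hypothetical sector exports
    X <- sum(exports)                                          # total sector exports
    dci <- sqrt(sum((exports / X)^2))   # 1 = all exports to one market; near 0 = widely spread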


III. Results and Discussion
A. Export Growth Rates of Major Product Sectors
Table 3.1 presents Sri Lanka's CAGR estimates for the major product sectors.

Table 3.1: Export Growth Rates of Major Product Sectors, 2002-2011

Index | Product Description | % Contribution to Total Export | 2002-2011 Average Growth (%) | 2010-2011 Growth (%)
1   | Agricultural Products      | 23.23 | 10.76***  | 10.69
1.1 | Tea                        | 13.98 | 9.89***   | 2.57
1.2 | Natural Rubber             | 1.96  | 21.46***  | 19.14
1.3 | Coconut                    | 4.01  | 11.69***  | 52.65
1.4 | Other Export Crops         | 3.29  | 9.79***   | 6.34
2   | Fisheries Products         | 1.85  | 10.71***  | -3.43
3   | Industrial Products        | 74.84 | 7.07***   | 27.57
3.1 | Diamonds, Gems & Jewellery | 5.03  | 5.23**    | 30.04
3.2 | Textiles & Garments        | 39.51 | 5.02***   | 24.85
3.3 | Manufacturers              | 25.05 | 10.31***  | 21.29
3.4 | Petroleum Products         | 5.24  | 18.92***  | 110.27
4   | Product Unclassified       | 0.08  | -26.79**  | -34.96
    | Total Merchandise Exports  | 100   | 7.75***   | 22.41

Notes: 1. Estimates in the table are developed using the data presented in "Export Performance by Major Product Sectors 2002-2011". 2. *** and ** denote that the estimate is statistically significant at the 1% and 5% level respectively.

As can be seen from Table 3.1, Sri Lanka's exports are dominated by industrial products, which account for about three quarters of total exports. Examined more closely, the growth performance of the textiles and garments sector, the country's largest single export sector, was nearly five times better in 2011 than its compound annual growth rate over the period under review (2002-2011).

In order to reduce dependence on traditional commodity exports, Sri Lanka has been moving into exports of processed goods and manufactures. The recorded growth of the manufactures sector, which makes the second highest contribution to total exports, was about 21% in 2011, just over twice its average annual growth rate of about 10%. The diamonds, gems and jewellery sector grew by over 30% in 2011, approximately six times its compound annual growth rate over the period under review. Petroleum products, the best performer in terms of export growth, showed a phenomenal increase in exports in 2011 compared with the sector's compound annual growth rate over the ten-year period considered.

As a whole, the agricultural product sector grew in 2011 at approximately the same rate as its compound annual growth rate for the preceding ten years. However, as the table shows, performance is very uneven across the major sub-sectors of agricultural products. Although the agriculture sector as a whole exhibited no significant change in growth in 2011 relative to its average annual growth rate, the coconut sector did exhibit strong growth in 2011 compared with its average annual growth rate, whereas all the other sub-sectors, namely tea, natural rubber and other export crops, showed a significant downturn in export performance in 2011 compared with their average annual growth. The export performance of the fisheries product sector worsened in 2011 compared with its average annual growth rate over the period considered, slipping from a positive average growth rate to negative growth in 2011. However, since the sector accounted for only about 2% of Sri Lanka's total exports in 2011, the decline did not have much influence on the country's economy.


A.1 An Illustration
The country's Coconut Fiber & Shell production during 2002 to 2011, as given in Export Performance Indicators 2002-2011 [2], is used for the study.

Figure 3.1.1: Residual plots for the natural logarithm of Coconut Fiber and Shell products

[The figure comprises four diagnostic panels for ln(Coconut Fiber & Shell products): a normal probability plot of the residuals, residuals versus the fitted values, a histogram of the residuals, and residuals versus the order of the data.]
Hypothesis to be tested: H0: errors are normally distributed vs. H1: errors are not normally distributed. Since the p value < 0.005, we can reject the null hypothesis (H0) at the 0.05 level of significance; that is, we can conclude that the errors are not normally distributed. In addition, according to the plot of residuals versus fitted values, the residuals do not fluctuate around zero within a constant horizontal band. Since the basic assumptions are violated for this product sector, the usual parametric t test could not be used to test the significance of the parameter.

Table 3.1.1: Results obtained from the bootstrap method
lower.limit | upper.limit | B | Alpha level | No. of bootstrap samples
0.1115 | 0.1478 | 0.1302 | 0.05 | 1000

Hypothesis to be tested: H0: B = 0 vs. H1: B ≠ 0. Since the value specified by the null hypothesis is not in the confidence interval (0.1115, 0.1478), the null hypothesis can be rejected at the 0.05 level; that is, we can conclude that the parameter B is significant. The average growth rate of the Coconut Fiber & Shell products sector is then B × 100 = 13.02%.

B. Diversification of Major Product Sectors
The analysis of destination concentrations of Sri Lanka's major products reveals that the destination concentration of Sri Lanka's total exports declined from 41% in 2002 to 27% in 2011, suggesting that Sri Lanka has increasingly diversified its export markets over the period under review. Textiles and garments, the largest single export sector, accounting for nearly 40% of the country's exports in 2011, still had a significantly high destination concentration in 2011, although its concentration declined considerably, from 65% in 2002 to 46% in 2011. The destination concentration of manufactures, which makes the second highest contribution to total exports, was nearly 27% in 2011, down from 33% in 2002. The destination concentration of the diamonds, gems and jewellery sector, which accounted for only 5% of total exports in 2011, remained almost unchanged between 2002 and 2011. Petroleum products showed an increase in concentration, from 82% in 2002 to 92% in 2011; this trend highlights the vulnerability of the sector.


The destination concentration of the agricultural products sector, which accounted for 23% of total exports in 2011, remained almost unchanged between 2002 and 2011. On the positive side, the low concentration of fisheries products in 2011, and the large decline in that concentration from 2002 to 2011, is a desirable trend; however, since the sector accounted for only about 2% of Sri Lanka's total exports in 2011, its concentration is not too important.

IV. Conclusions
The growth performance of Sri Lanka's total exports in the year under review (2011) was far superior to the corresponding compound annual growth rate for 2011 and the nine years immediately preceding it. This change in export growth is a good sign for the country's economy. Further, it is hoped that this study will stimulate further research into this important phenomenon, enabling continuous improvement and helping to ensure the continued success of the country's economy.

References
[1] Boiroju, N.K., Yerukala, R., Rao, M.V. and Reddy, M.K. (2011). Bootstrap Test for Equality of Mean Absolute Errors. Department of Statistics, Osmania University, Hyderabad, India.
[2] Export Performance Indicators 2002-2011. Policy and Strategic Planning Division, Sri Lanka Export Development Board, Sri Lanka.
[3] Johnson, R.A. and Wichern, D.W. (2001). Applied Multivariate Statistical Analysis. Prentice-Hall International, Inc.
[4] Tabachnick, B.G. and Fidell, L.S. (2001). Using Multivariate Statistics, 4th Edition. Boston, MA: Allyn and Bacon.

R Code (Annexure I)

bootstrap.growth.ci <- function(rep.num, alpha) {
  # rep.num: number of bootstrap replicates; alpha: significance level
  data <- read.table("data.txt", header = TRUE)       # columns: t, lny
  data.lm <- lm(lny ~ t, data = data)                 # OLS fit on the full sample
  # Resample whole rows of the data set with replacement
  resample.data <- function() {
    data[sample(1:nrow(data), size = nrow(data), replace = TRUE), ]
  }
  # Refit the regression on a bootstrap sample and return its coefficients
  estYonT <- function(d) coefficients(lm(lny ~ t, data = d))
  tboot <- replicate(rep.num, estYonT(resample.data()))   # 2 x rep.num matrix
  B <- mean(tboot[2, ])                                   # bootstrap slope estimate
  low.quantiles  <- apply(tboot, 1, quantile, probs = alpha / 2)
  high.quantiles <- apply(tboot, 1, quantile, probs = 1 - alpha / 2)
  # Basic (reflected) bootstrap confidence limits around the full-sample fit
  lower.limit <- 2 * coefficients(data.lm) - high.quantiles
  upper.limit <- 2 * coefficients(data.lm) - low.quantiles
  cis <- rbind(lower.limit, upper.limit, B)
  signif(cis[, 2], 4)   # lower limit, upper limit and estimate for the slope B
}
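A possible invocation, assuming the working directory holds a data.txt with columns t and lny as the code above expects:

    # Basic bootstrap limits and slope estimate for B (1000 resamples, 95% confidence)
    bootstrap.growth.ci(rep.num = 1000, alpha = 0.05)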




Determination of Confined Aquifer Parameters by the Sushil K. Singh Method
Dr. Rajasekhar P., Vimal Kishore P., Mansoor Mansoori
Department of Civil Engineering, University College of Engineering, Osmania University, Hyderabad, Andhra Pradesh, INDIA

Abstract: A simple method due to Sushil K. Singh is presented for evaluating confined aquifer parameters, namely the transmissivity T, the storage coefficient S and the hydrological boundaries, from pump-test data. The method requires no curve matching, no initial guess of the parameters, etc. Early drawdown data, for which the argument u of the well function is > 0.01, have often been considered unimportant in evaluating aquifer parameters; this paper shows that these early drawdown data, especially in the neighborhood of u = 0.43, can yield accurate values of the aquifer parameters. Using the present method, the confined aquifer parameters of the Darlaman Sub-Basin of the Kabul Basin are estimated at only one point: the transmissivity of the Darlaman confined aquifer is estimated as about 94.16 m²/day and the storage coefficient as 0.00258. The reliability of these values is judged by calculating the Standard Error of Estimate (SEE) from the variation between observed and computed drawdowns for early as well as late drawdown data. Since the confined aquifer of the Kabul Basin holds fossil water, no leakage between the confined and unconfined aquifers and no recharge boundary have been reported [JICA]. The hydrological impervious boundary of the aquifer is not determined, because the observation and pumped wells are located far from the boundary and no well lies near it.
Keywords: Aquifer parameters, Transmissivity, Storage coefficient, Confined aquifer, early pump drawdown data

I. Introduction
Groundwater is a precious and the most widely distributed resource of the earth; unlike any other mineral resource, it receives annual replenishment from atmospheric precipitation. The total volume of fresh groundwater stored on earth is believed to be in the region of 8 to 10 million km³, more than two thousand times the current annual withdrawal of surface water and groundwater combined. This is a huge volume, but where are these fresh-water buffers located? Since our purpose is to evaluate the parameters of aquifers, we start from the formations that store water (referred to as aquifers). A geological formation that will yield significant quantities of water has been defined as an aquifer: a formation containing sufficient saturated permeable material to yield significant quantities of water to wells and springs. Aquifers may be classified as unconfined or confined, depending on the presence or absence of a water table, while a leaky aquifer represents a combination of the two types. An unconfined aquifer is one in which the water table varies in undulating form and in slope, depending on areas of recharge and discharge, pumpage from wells, and permeability. Confined aquifers, also known as artesian, pressure or deep aquifers, occur where groundwater is confined under pressure greater than atmospheric by overlying, relatively impermeable strata. For mathematical calculations of the storage and flow of groundwater, aquifers are frequently assumed to be ideal, i.e., homogeneous and isotropic: a homogeneous aquifer possesses hydrologic properties that are everywhere identical, and an isotropic aquifer's properties are independent of direction.
The principal objective of groundwater studies is to determine how much groundwater can safely be withdrawn perennially from the aquifers in the area under study. This determination involves the transmissivity and storage coefficient, the lateral extent of the aquifer and its hydraulic boundaries, leakage if any, and the effect of proposed developments on recharge and discharge conditions. The storage coefficient, or storativity, is defined as the volume of water that an aquifer releases from or takes into storage per unit surface area of aquifer per unit change in the component of head normal to that surface; it is a dimensionless quantity involving a volume of water per volume of aquifer. In most confined aquifers, values fall in the range 0.00005 < S < 0.005, indicating that large pressure changes over extensive areas are required to produce substantial water yields.


For practical work in groundwater hydrology, where water is the prevailing fluid, the hydraulic conductivity K is employed. A medium has unit hydraulic conductivity if it transmits, in unit time, a unit volume of groundwater at the prevailing kinematic viscosity (dynamic viscosity divided by fluid density) through a cross-section of unit area, measured at right angles to the direction of flow, under a unit hydraulic gradient. The term transmissivity T is widely employed in groundwater hydraulics; it may be defined as the rate at which water of prevailing kinematic viscosity is transmitted through a unit width of aquifer under a unit hydraulic gradient. The most reliable way of estimating the aquifer transmissivity and storage coefficient is by pumping tests of wells, based on observations of water levels near pumping wells. When a well penetrating an extensive confined aquifer is pumped at a constant rate, the influence of the discharge extends outward with time. The rate of decline of head, multiplied by the storage coefficient and summed over the area of influence, equals the discharge. Because the water must come from a reduction of storage within the aquifer, the head will continue to decline as long as the aquifer is effectively infinite; therefore unsteady, or transient, flow exists. The rate of decline, however, decreases continuously as the area of influence expands. The applicable partial differential equation in polar coordinates is

∂²h/∂r² + (1/r)(∂h/∂r) = (S/T)(∂h/∂t)        (1)

where h = head (m), r = radial distance from the pumped well (m), S = storage coefficient (dimensionless), T = transmissivity (m²/min) and t = time since the beginning of pumping (min). Theis obtained a solution of Eq. 1 based on the analogy between groundwater flow and heat conduction. By assuming that the well is replaced by a mathematical sink of constant strength, and imposing the boundary conditions h = h0 for t = 0 and h → h0 as r → ∞ for t ≥ 0, the solution is obtained as

s = (Q/4πT) ∫_u^∞ (e^(-x)/x) dx = (Q/4πT) W(u)
  = (Q/4πT) [-0.5772 - ln u + u - u²/(2·2!) + u³/(3·3!) - u⁴/(4·4!) + ...]        (2)

where s is the drawdown and Q is the constant well discharge, with the argument u given by

u = r²S/(4Tt)        (3)
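A minimal R sketch that evaluates W(u) from the series in the second part of Equation 2 and the corresponding drawdown; the parameter values are illustrative only, not results from the paper:

    # Well function W(u) from the convergent series of Eq. 2
    well.function <- function(u, nterms = 30) {
      n <- 1:nterms
      -0.5772 - log(u) + sum((-1)^(n + 1) * u^n / (n * factorial(n)))
    }
    Q <- 0.378           # constant discharge, m^3/min (illustrative)
    Tr <- 94.16 / 1440   # transmissivity, m^2/min (94.16 m^2/day)
    S <- 0.00258         # storage coefficient
    r <- 26.9; t <- 60   # radial distance (m) and time (min)
    u <- r^2 * S / (4 * Tr * t)                 # Eq. 3
    s <- Q / (4 * pi * Tr) * well.function(u)   # drawdown (m) from Eq. 2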

Equation 2 is known as the non-equilibrium, or Theis, equation. The integral is a function of the lower limit u and is known as an exponential integral; expanded as a convergent series, as shown in the second part of Equation 2, it is termed the well function W(u). The non-equilibrium equation permits determination of the aquifer parameters S and T by means of pumping tests of wells. It is widely applied in practice and is preferred over the equilibrium equation because (1) a value for S can be determined, (2) only one observation well is required, (3) a shorter period of pumping is generally necessary, and (4) no assumption of steady-state flow conditions is required.

II. Study Area (Darlaman Sub-Basin of the Kabul Basin)
Afghanistan, with an area of 647,500 km², is a land-locked country in south-central Asia bordered by Pakistan, Iran, Turkmenistan, Uzbekistan, Tajikistan and China. The climate is arid to semi-arid, with cold winters and hot summers. Its terrain is dominated by the Hindu Kush Mountains and associated mountain ranges extending across the north-central part of the country in a northeast-to-southwest arc. The Kabul Basin is an 80-kilometer-long valley, formed by the Paghman Mountains to the west and the Kohe Safi Mountains to the east, that contains Kabul City and the surrounding communities. Sub-basins of the Kabul Basin, formed by inter-basin ridges (mountain ranges) and river drainages, include Central Kabul (419 km²), Paghman and Upper Kabul (348 km²), Logar (190 km²), Deh Sabz (464 km²), Shomali (785 km²) and Panjsher (unknown), with a total area of about 2206 km² excluding the Panjsher sub-basin. The boundaries of the six areas generally coincide with drainage basins and encompass the major rivers flowing through the Kabul Basin. The study area covers the part of the Kabul Basin containing Kabul City, with a population of 4.5 million in 2010. The Kabul Basin is located between longitudes 68°59'30.9" and 69°22'27.4" E and latitudes 34°24'18.0" and 34°36'33.1" N (from 500500 to 534200 E and from 3807750 to 3830300 N in UTM zone 41N), with an area of around 480 km². The basin is enclosed by low but quite steep mountain ranges and is divided into two sub-basins by a similarly low but steep NW-SE mountain range (called the


“Barrier mountain range” hereafter), as shown in Figure 1. The western sub-basin is called the Darlaman sub-basin; on the eastern side lie the North Kabul, Pol-e-Charkhy and Logar sub-basins. In this paper our focus is on the Darlaman Sub-Basin of the Kabul Basin. The water-table surface generally mirrors topography, and groundwater generally flows in the directions of surface-water discharge. Steep gradients exist along mountain-front recharge areas and in the upper reaches of the Kabul, Logar and Paghman valleys; gradients decrease across the area beneath central Kabul, in the agricultural and village zones, and in the sparsely populated Deh Sabz area.

[Map panel showing the North Kabul, Pole-charkhi, Logar and Darlaman sub-basins, annotated with hydraulic conductivity values of 0.2 m/d and 0.08 m/d.]

Fig. 1: Darlaman sub-basin and other sub-basins in the study area (Central Kabul Basin)

Average annual precipitation (1957-77 and 2003-06) in the Kabul Basin is about 330 mm/yr. Depths to water in the unconfined aquifers of the Kabul Basin range from less than 5 m to more than 60 m below land surface. Chemical and isotopic analysis of surface-water and groundwater samples indicates that shallow groundwater (less than 100 meters below land surface) is typically 20 to 30 years old, whereas groundwater in the deeper or confined aquifers is likely thousands of years old. This so-called fossil water is outside the natural water cycle; owing to the limited, or absent, recharge to the aquifer, fossil water is not usually developed as a water source. However, other water sources in the Kabul Basin are quite limited for the ever-increasing population, so there is a high risk of a water-shortage crisis whenever a drought or technical problem occurs. The deep aquifer is therefore analyzed mainly to examine its development potential for emergency purposes, that is, to study how much water can be discharged, and for how long, based on groundwater simulation. Although the Neogene, or confined, aquifer is very deep, at more than around 300 m below ground surface, the water heads of the aquifer were very shallow, from less than 3 m to around 12 m at maximum [JICA].

III. Data Collection
The data collected from Kabul City comprise pump-test data from more than 200 wells, but some of these wells are drilled in unconfined aquifers and some do not penetrate the full thickness of the confined aquifer. Finally, therefore, we chose the one well that penetrates the full thickness of the confined aquifer in the Darlaman Sub-Basin of the Kabul Basin. The name of the well, its location and its depth are given in Table 1.

Table 1: Name of the well, with its depth, in the deep aquifer
Name of Well | Depth (m) below GL | Location
TW-4 | 850 | Darlaman

IV. Analysis of the Pump Test Data
The Sushil K. Singh method described above has been applied to one set of pump-test data collected from the Darlaman Sub-Basin of the Kabul Basin. For this data set, the values of drawdown/time (s/t) are plotted against time (t), and the peak (s/t)* and its time t* are located by drawing a smooth best-fit curve through the plotted points using the SigmaPlot software. From the peak values, α, the transmissivity (T) and the storage coefficient (S) are calculated one by one using Equations 2 and 3. The values of these parameters are given at the top of the data-set table.


It is worth mentioning that s* and t* can be obtained with the same accuracy using even fewer data points than given in the table. The pump-test data of well TW-4, located in the Darlaman sub-basin, are given here in full, together with the graph and procedure. Table 2 shows the pump-test data collected in an observation well located at a radial distance of 26.5 m from the pumped well; the constant discharge is Q = 0.378 m³/min. The first column of Table 2 gives the time since pumping began, the second column the observed drawdowns, and the third column the drawdown/time values calculated in Excel.

Table 2: Pump test data of well TW-4
Constant discharge Q (m³/min) = 0.378; observation well distance (m) = 26.9
Peak details of the curve: t* (min) = 16.41; (s/t)* (m/min) = 0.01815; s* = t* × (s/t)* (m) = 0.2978415; a = t*/r² (min/m²) = 0.015536095
Estimated parameters: Transmissivity (m²/day) = 94.15534; Storage coefficient = 0.002579

t (min) | s (m)  | s/t (m/min) | Computed drawdown s' (m)
0   | 0.000 | -       | 0.000
1   | 0.011 | 0.01100 | 0.010
2   | 0.025 | 0.01250 | 0.026
3   | 0.040 | 0.01333 | 0.039
4   | 0.055 | 0.01375 | 0.056
5   | 0.070 | 0.01400 | 0.069
10  | 0.160 | 0.01600 | 0.159
20  | 0.385 | 0.01925 | 0.384
22  | 0.398 | 0.01809 | 0.399
24  | 0.410 | 0.01708 | 0.409
26  | 0.430 | 0.01654 | 0.429
28  | 0.445 | 0.01589 | 0.446
30  | 0.462 | 0.01540 | 0.461
32  | 0.477 | 0.01490 | 0.477
34  | 0.487 | 0.01432 | 0.487
36  | 0.498 | 0.01382 | 0.498
38  | 0.508 | 0.01336 | 0.508
40  | 0.518 | 0.01294 | 0.517
45  | 0.538 | 0.01195 | 0.538
50  | 0.548 | 0.01095 | 0.547
55  | 0.568 | 0.01032 | 0.567
110 | 0.688 | 0.00625 | 0.687
120 | 0.698 | 0.00581 | 0.699
130 | 0.718 | 0.00552 | 0.717
140 | 0.738 | 0.00527 | 0.737
150 | 0.748 | 0.00498 | 0.749

The fourth column gives the drawdowns computed using the estimated parameters (T and S). About 47 drawdowns, i.e., 150 min of the pump-test duration, were chosen from the actual data sheet of this pump test (not all are shown here). The s/t values from Table 2 are plotted against t, and a smooth best-fit curve is drawn through them, as shown in Figure 2.
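The peak can also be located programmatically; the sketch below uses the early data of Table 2 with a loess smoother as a stand-in for SigmaPlot's fitted curve:

    t <- c(1, 2, 3, 4, 5, 10, 20, 22, 24, 26, 28, 30)   # early times from Table 2 (min)
    s <- c(0.011, 0.025, 0.040, 0.055, 0.070, 0.160,
           0.385, 0.398, 0.410, 0.430, 0.445, 0.462)    # observed drawdowns (m)
    st <- s / t                                         # drawdown/time
    fit <- loess(st ~ t)                                # smooth best-fit curve
    grid <- seq(min(t), max(t), length.out = 500)
    pred <- predict(fit, newdata = grid)
    t.star <- grid[which.max(pred)]                     # location of the peak, t*
    st.star <- max(pred)                                # peak value, (s/t)*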

Fig. 2: Variation of s/t with t (data of well TW-4)

Figure 2 shows the variation of drawdown/time with time, with a smooth curve drawn through the points. As is evident from the plot, the curve can be developed with the same accuracy from a smaller number of early drawdowns; this many drawdowns were chosen only to show the reliability of the method for the estimation of late drawdowns.


Here, about 18 drawdown readings, i.e., about 18 min of the pump test duration, would also have sufficed for the peak to be developed accurately. The plot was drawn in the SigmaPlot software from Table 2. The peak of the curve is located at t* = 16.41 min and (s/t)* = 0.01815 m/min. Using these values, α is calculated, and then T and S are obtained from Equations 2 and 3 respectively, giving T = 94.155 m²/day and S = 0.002579. As mentioned in the description of the study area in the earlier part, the geological investigations of the Kabul Basin aquifers show that there is no leakage in the aquifer and no recharge boundary to affect the T and S values; this well is also situated far enough from the impervious boundaries of the aquifer not to be affected by their presence. The SEE value is calculated as 0.0007761729, which indicates the good reliability of this method.

V. Results
Table 3 summarises the results of the pump test data analysis carried out above.

Table 3: Aquifer parameters estimated by the present method, with SEE values
Name of well   Location   Transmissivity (m²/day)   Storage coefficient   Considered drawdowns   Actually required drawdowns   SEE
TW-4           Darlaman   94.16                     0.00258               47                     18                            0.00078
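Since Equations 2 and 3 themselves appear in the earlier part of the paper, the computation behind Table 3 can be illustrated with a short script. The sketch below assumes the standard form of Singh's (2000) peak relations, namely that at the peak W(u*) = e^(−u*), giving u* ≈ 0.4348 (the u ≈ 0.44 cited in the discussion), with T = Q·W(u*)/(4π·s*) and S = 4·T·u*·t*/r²; variable names are ours.

```python
import math

# Peak-point estimation of confined aquifer parameters (after Singh, 2000).
# Assumed forms of Equations 2 and 3: at the peak of the s/t vs t curve,
# W(u*) = exp(-u*)  =>  u* ~ 0.4348, and
#   T = Q * W(u*) / (4 * pi * s*),   S = 4 * T * u* * t* / r^2
U_STAR = 0.4348                 # dimensionless root of W(u) = exp(-u)
W_U_STAR = math.exp(-U_STAR)    # ~ 0.6474

def aquifer_parameters(Q_m3_min, r_m, t_star_min, st_star_m_min):
    """Return (T in m^2/day, S) from the peak of the s/t vs t curve."""
    Q = Q_m3_min * 1440.0                   # discharge in m^3/day
    t_star_day = t_star_min / 1440.0        # peak time in days
    s_star = t_star_min * st_star_m_min     # s* = t* x (s/t)*, in metres
    T = Q * W_U_STAR / (4.0 * math.pi * s_star)
    S = 4.0 * T * U_STAR * t_star_day / r_m ** 2
    return T, S

# Well TW-4: Q = 0.378 m^3/min, t* = 16.41 min, (s/t)* = 0.01815 m/min;
# r = 26.9 m as in the Table 2 header (the body text quotes 26.5 m).
T, S = aquifer_parameters(0.378, 26.9, 16.41, 0.01815)
print(f"T = {T:.2f} m^2/day, S = {S:.5f}")  # ~94.2 m^2/day and ~0.00258
```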

VI. Discussion of the Results
For the data set in Table 3, the transmissivity estimated using the present method is 94.16 m²/day and the storage coefficient is 0.00258. The accuracy of the estimated parameters depends on the accuracy with which the peak of the s/t vs. t curve is placed, and on the data containing points with u ≥ 0.44: at least one (time, drawdown) point with a u value greater than or equal to 0.44 is needed to locate the peak accurately. In this study the best-fit curve was drawn, and the peak determined, by the SigmaPlot software, so there is no doubt about the placement of the peak, and in these data sets several points had u values greater than or equal to 0.44. Estimation of confined aquifer parameters by the present method is very economical, needing less time and fewer resources than the other methods in the literature. As shown in columns 5 and 6 of Table 3, the pump test duration required and considered for the estimation of the aquifer parameters is much shorter than for other existing methods; since a shorter pump test is needed, the method also requires a smaller budget and fewer resources. As indicated in the sixth column of Table 3, the number of drawdown readings required to estimate the parameters with the same accuracy is smaller than the number actually considered. The method is therefore also applicable to short pump tests, whose records would be considered insufficient by other methods. According to our analysis of the data, the minimum number of drawdown readings needed is 18, i.e., 18 min of pumping. Although the curves were drawn in SigmaPlot to determine the peak as accurately as possible, drawing the curve manually gives nearly the same results, so little subjectivity is introduced: every plot can be drawn by hand at the site and accurate values obtained through simple calculations with a pocket calculator. This demonstrates the field applicability of the method. The reliability of the method in estimating confined aquifer parameters is judged by the Standard Error of Estimate (SEE). The low value shown in the last column of Table 3 indicates the reliability of the present method; no other available method achieves such low SEE values. According to the geological investigations of the Kabul Basin aquifers, the aquifers contain fossil water more than 1000 years old; there is no leakage from an unconfined aquifer and no recharge boundary exists. An impervious boundary could not be located, because all the wells lie far from the boundaries and no observation well is close enough to a boundary to facilitate its determination.

VII. Conclusions
The main conclusions drawn from the study are:
• A simple method was chosen from the existing methods in the literature for estimating the parameters of the confined aquifers of the Darlaman sub-basin of the Kabul Basin, Afghanistan.
• This method was chosen to estimate the parameters, locate the aquifer boundaries and their effect on the estimates, and detect leakage if any, with high accuracy and a short pump test, compatible with the time and resource constraints.
• The method was applied to one set of data collected from the Kabul Basin aquifers.
• The transmissivity estimated by this method, on an average basis, is 94.16 m²/day.


• The storage coefficient estimated by this method in the Darlaman sub-basin, on an average basis, is 0.00258.
• Since the best-fit curve and its peak were obtained with the SigmaPlot software, and the data sets contained several points with u values greater than or equal to 0.44, the estimated parameters are accurate.
• Estimation of confined aquifer parameters by the present method is very economical and needs less time and fewer resources than the other existing methods; for this study the minimum and maximum required pump test durations both proved to be 18 min, since a single well was analysed.
• The reliability of the method in estimating confined aquifer parameters, judged by the Standard Error of Estimate (SEE), is high: the SEE value is 7.8×10⁻⁴, lower than that of other existing methods.
• The present method has high site compatibility, because the curve fitting can also be done manually with negligible error and the parameters can be obtained by simple calculation with a calculator.

VIII. References
[1] Ajayi, Owolabi and T. O. Obilade (1989). "Numerical Estimation of Aquifer Parameters Using Two Observation Wells." Journal of Hydraulic Engineering, Vol. 115, ASCE, 982-988.
[2] "Availability of Water in the Kabul Basin, Afghanistan [2010]", USGS, pp. 1-4.
[3] Bayless, Kristi and Avdhesh K. Tyagi (2005). "Aquifer Parameters in Confined Porous Media." School of Civil and Environmental Engineering, Oklahoma State University. Published by ASCE.
[4] Çimen, Mesut (2009). "Effective Procedure for Determination of Aquifer Parameters from Late Time-Drawdown Data." Journal of Hydrologic Engineering, Vol. 14, ASCE, 446-452.
[5] Çimen, Mesut (2008). "Confined Aquifer Parameters Evaluation by Slope-Matching Method." Journal of Hydrologic Engineering, Vol. 13, ASCE, 141-145.
[6] M. Amin Akbari, Mohammad Tahir, David W. Litke, and Michael P. Chornack, "Ground-Water Levels in the Kabul Basin, Afghanistan, 2004–2007", USGS, pp. 1-25.
[7] Matanga, G. B. (1995). "Characteristics in Evaluating Stream Functions in Groundwater Flow." Journal of Hydrologic Engineering, Vol. 1, ASCE, 49-53.
[8] Mohammad Hassan Saffi (2011). "Groundwater natural resources and quality concern in Kabul Basin, Afghanistan." DACAAR, 1-14.
[9] Ojha, C. S. P. (2003). "Aquifer Parameters Estimation Using Artesian Well Test Data." Journal of Hydrologic Engineering, Vol. 9, ASCE, 64-67.
[10] Papadopulos, Istavros S. and Cooper, Hilton H. (1967). "Drawdown in a Well of Large Diameter." Water Resources Division, U.S. Geological Survey, Washington, D.C., Vol. 3, 241-244.
[11] Raghunath, H. M. (2007). Groundwater, 3rd edition, New Age Publishers, New Delhi.
[12] Robert E. Broshears, M. Amin Akbari, Michael P. Chornack, David K. Mueller, and Barbara C. Ruddy (2005). "Inventory of Ground-Water Resources in the Kabul Basin, Afghanistan", USGS, pp. 9-25.
[13] Sen, Zekai (1996). "Volumetric Leaky-Aquifer Theory and Type Straight Lines." Journal of Hydraulic Engineering, Vol. 122, ASCE, 272-280.
[14] Singh, Sushil K. (2009). "Simple Method for Quick Estimation of Leaky-Aquifer Parameters." Journal of Irrigation and Drainage Engineering, Vol. 136, ASCE, 149-153.
[15] Singh, Sushil K. (2003). "Storage Coefficient and Transmissivity from Residual Drawdowns." Journal of Hydraulic Engineering, Vol. 129, ASCE, 637-644.
[16] Singh, Sushil K. (2002). "Aquifer Boundaries and Parameter Identification Simplified." Journal of Hydraulic Engineering, Vol. 128, ASCE, 774-780.
[17] Singh, Sushil K. (2001). "Identifying Impervious Boundary and Aquifer Parameters From Pump-Test Data." Journal of Hydraulic Engineering, Vol. 127, ASCE, 280-285.
[18] Singh, Sushil K. (2000). "Simple Method for Confined Aquifer Parameter Estimation." Journal of Irrigation and Drainage Engineering, Vol. 126(6), ASCE, 336-338.
[19] "The Study on Groundwater Resources Potential in Kabul Basin." Japan International Cooperation Agency (JICA), all reports (16).
[20] Todd, David K., and Larry W. Mays (2005). Groundwater Hydrology, 3rd ed., Wiley, New York.
[21] Ward, Nicholas Dudley and Fox, Colin (2011). "Identification of Aquifer Parameters from Pumping Test Data with Regard for Uncertainty." Journal of Hydrologic Engineering, Vol. 17, ASCE, 769-781.
[22] "Water for Kabul Consultants 2011", "Extension of the Kabul Water Supply System", pp. 1-161.
[23] Wikramaratna, R. S. (1985). "A New Type Curve Method for the Analysis of Pumping Tests in Large-Diameter Wells." Water Resources Research, Vol. 21, American Geophysical Union, 261-264.



American International Journal of Research in Science, Technology, Engineering & Mathematics

Available online at http://www.iasir.net

ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629 AIJRSTEM is a refereed, indexed, peer-reviewed, multidisciplinary and open access journal published by International Association of Scientific Innovation and Research (IASIR), USA (An Association Unifying the Sciences, Engineering, and Applied Research)

MULTI-AGENT MODEL OF HYBRID ENERGY SYSTEM IMPLEMENTATION
Mohammad Asif Iqbal1, Shivraj Sharma2
Department of Electrical Engineering, Poornima College of Engineering, Jaipur, Rajasthan, INDIA
Abstract: The development and adoption of reliable sources of renewable energy has become a major challenge in most parts of India. Sustainable development of the energy sector is a key factor in maintaining economic competitiveness and progress. The feasibility of a stand-alone hybrid wind–photovoltaic (PV) system incorporating a diesel generator (DG) was studied for Barmer (Rajasthan, India). Barmer has an average annual solar and wind energy availability of 1784 and 932 kWh per m², respectively. The combination of solar panels, wind turbines and a DG ensures continuous electricity regardless of weather conditions. In this paper we introduce advanced agent technology into the wind–solar–DG hybrid power generation system and, based on a multi-agent model, establish the decision-making model of the wind–PV–DG hybrid power generation system. According to its functions, the system is divided into agent modules, and each agent module is further divided into detailed agents. The paper shows the decision-making process arising from the flexible collaboration and communication of these agents. The scheme can be applied to the Barmer region and can enhance power system intelligence.
Index terms: wind–solar energy hybrid power generation, optimization control, decision-making model, plant load factor (PLF)

I. INTRODUCTION
Wind–solar hybrid power generation is a novel and promising power system. However, because of the random nature and complexity of the environment, a wind–solar hybrid power generation system is a complicated system. In order to reduce these complications, a diesel generator is added to the system. A multi-agent system (MAS) is an important branch of distributed artificial intelligence: a MAS decomposes a complicated system into several collaborating agents. A wind–solar energy hybrid power generation multi-agent system (WSHGMAS) is very useful for the Barmer region because of its rich solar, wind and oil resources. The system is based on decision making, whereby it chooses one or more sources to supply the load demand. This paper presents an economic analysis, the design process and the climate considerations of a mini-grid hybrid power system with a reverse osmosis desalination plant, for providing electricity and clean water supplies to remote areas. The design steps are presented for supplying electricity and clean water in remote areas by utilizing renewable energy sources (wind and photovoltaic), with a diesel generator as an alternative solution for continuity. The economic analysis considers the annual cost needed, the fuel consumption, the initial capital cost, the total net cost and the cost of electricity generated per kWh.
II. ABOUT BARMER
Barmer is the second largest district of Rajasthan. Recently, a large oil field was discovered and made operational in Barmer district. The total area of the district is 28,387 square kilometres (10,960 sq mi). The district is located between 70°05′ and 72°52′ E longitude and 24°58′ and 26°32′ N latitude. In 2009, the Barmer district came into the news because of its large oil basin, India's biggest oil discovery in 23 years. The British company Cairn Energy started large-scale production in 2010; Mangala, Aishwariya and Bhagyam are the major oil fields in the district. Cairn works in partnership with the Oil and Natural Gas Corporation (ONGC). In March 2010, Cairn increased the oil potential of this field to 6.5 billion barrels, from an earlier estimate of 4 billion barrels. In summer the temperature soars to between 46 °C and 51 °C; in winter it drops to around 0 °C (32 °F).
III. MULTI-AGENT CONTROL MODEL
Multi-agent technology is a new technology representing a further development of artificial intelligence. A MAS is composed of many interacting agents that together accomplish a complicated task on the basis of communication and cooperation with one another, so as to optimize the system.


An agent is a software entity with self-adaptation and intelligence that accomplishes tasks on behalf of users or other programs by way of proactive service. That is to say, an agent is an encapsulated module with independent functions: it contains its own data and the algorithms that operate on these data, can accept and process messages from other agents, and can send messages to other agents. It is thus an entity with independent problem-solving ability that can adapt to a changing environment. A multi-agent system is a loosely coupled agent network whose autonomous agents are distributed both physically and logically. Agents that interact with one another through a common protocol can solve problems beyond a single agent's ability. The goal of a MAS is to decompose a large, complicated system (software or hardware) into small, interacting, easily managed subsystems. A MAS, characterised by "divide and rule" and mutual cooperation, is well suited to solving complicated problems.

Figure 1: Multi-agent system model in MACMWSHPS

IV. RUNNING MODES OF WSHGMAS
The four running modes of the WSHGMAS are as follows:
• According to the climate conditions, when the wind energy resource is abundant and the solar energy resource is lacking, the system starts the wind power generator agent, calculates the load capacity and supplies power directly to the load. If energy is superfluous, the system starts the charging/discharging management agent and the surplus is charged into the storage battery; when necessary, the storage battery supplies energy to the load.
• When the solar energy resource is abundant and the wind energy resource is lacking, the system starts the solar power generator agent under the control of the control agent module, calculates the load capacity and supplies power directly to the load. Surplus energy is charged into the storage battery under the control of the charging/discharging management agent and, when necessary, the storage battery supplies energy to the load.
• When both wind and solar energy are abundant, both are started and the system enters the hybrid power generation state, supplying energy to the load or to the storage battery.
• When the weather is cloudy and windless, the storage battery alone provides power to the load under the control of the discharging agent.
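The four modes above amount to a simple dispatch rule for the control agent. The sketch below is a minimal illustration of that logic; the thresholds, function names and example inputs are our own assumptions, not values from the paper.

```python
# Minimal sketch of the WSHGMAS running-mode logic listed above.
# Thresholds, names and the example call are illustrative assumptions.

WIND_MIN_MS = 4.0      # hypothetical wind speed for "wind abundant"
SOLAR_MIN_WM2 = 200.0  # hypothetical irradiance for "solar abundant"

def control_agent(wind_ms, irr_wm2, gen_kw, load_kw):
    """Decide which agents the control module starts up."""
    active = []
    if wind_ms >= WIND_MIN_MS:
        active.append("wind power generator agent")
    if irr_wm2 >= SOLAR_MIN_WM2:
        active.append("solar power generator agent")
    if not active:
        # mode 4: cloudy and windless -> the battery alone supplies the load
        return ["discharging agent (battery supplies load)"]
    if gen_kw > load_kw:
        # superfluous energy is charged into the storage battery
        active.append("charging agent (store surplus in battery)")
    elif gen_kw < load_kw:
        active.append("discharging agent (battery covers the deficit)")
    return active

print(control_agent(wind_ms=6.2, irr_wm2=80.0, gen_kw=5.5, load_kw=4.0))
# -> the wind agent runs and the surplus is charged into the battery
```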


V. COMPONENTS OF THE HYBRID MODEL
The model includes a 7.5 kW wind turbine, 5.6 kW of PV, a 20 kW diesel generator and a 48 V battery bank. All of these components are connected to the load, and also to a dump load used to control the system.
Battery: Two strings of EXIDE XL3000 batteries are used at Barmer; each battery is 2 V with a capacity of 3000 Ah, giving a total bus voltage of 48 V. These batteries are replaced every 10 years.
Diesel generator: The main purpose of the diesel generator is to serve as a backup energy source when the renewable resources underperform. The sizing results show that the energy production from the generator is around 2% of the total power demand. The generator runs at 1800 rpm and delivers AC power to the system through the inverter. It has a capacity of 20 kW.
Photovoltaic array: Each PV panel provides 280 W at 24 V, so two PV modules are connected in series to meet the 48 V bus voltage. The 5.6 kW rated PV capacity used in this system is connected as 10 strings of two modules each, twenty modules in total.
Wind turbine: Each turbine has a rated capacity of 7.5 kW and provides 48 V DC. A hub height of 30 m was selected. The HOMER software generated the relationship (power curve) between wind speed and generated power, which is shown in Figure 4. Wind data at a height of 10 m are used.
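As a quick arithmetic check of the array description above (a sketch of ours, using only the module figures quoted in the text):

```python
# Consistency check of the PV array sizing: 280 W, 24 V modules,
# two in series per string to reach the 48 V bus, ten strings.
module_w, module_v = 280, 24
modules_per_string, strings = 2, 10
bus_v = module_v * modules_per_string           # 48 V
rated_kw = module_w * modules_per_string * strings / 1000.0
print(bus_v, "V bus,", rated_kw, "kW rated")    # 48 V bus, 5.6 kW rated
```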

Figure 2: Hybrid Model

VI. GRAPHICAL RESULTS

Figure 3: Load pattern of a microwave repeater


Figure 4: Wind turbine power curve

Figure 5: Average wind speed over a year

Figure 6: Monthly solar radiation

Figure 7: Average wind speed over a year


Figure 8: Plant load factor data for 6 months

VII. CONCLUSION
This paper presented a proposed hybrid power system for a remote site in Barmer, together with graphical results. It is shown that a hybrid energy system for that site can achieve a renewable energy fraction of 98%. Dynamic modelling of the proposed hybrid system in MATLAB/SIMULINK is also presented; the transient response indicates a stable result. Hybrid power plants are green and clean, unlike thermal or nuclear power plants. Earlier studies have shown that a hybrid power plant of 10 kW capacity can, over its lifetime, prevent the release of a considerable quantity of environmental pollutants: CO2 107.2 tonnes, H2O 17.66 tonnes, SO2 0.58 tonnes, O2 17.385 tonnes and N2 348.35 tonnes [13]. This is certainly an achievement for environmental conservation, and research in this field and the installation of many such hybrid projects therefore need to be promoted on a much larger scale.

VIII. REFERENCES
[1] Chunyi Shi, Wei Zhang, The Introduction of Multi-Agent Systems, Publishing House of Electronics Industry, Beijing, 2003.
[2] Hongchun Shu, Lan Tang, Jun Dong, "A Survey on Application of Multi-agent System in Power System", Power System Technology, Vol. 29, No. 6, pp. 27-31.
[3] Chuang Fu, Luqing Ye, Yongqian Liu, Yuanchu Cheng, "Intelligent Control-Maintenance-Management System (ICMMS) based on Multi-agent System", Control and Decision, Vol. 18, No. 3, pp. 371-374, May 2003.
[4] Rachid Belfkira, Lu Zhang, Georges Barakat, "Optimal sizing study of hybrid wind/PV/diesel power generation unit", Solar Energy (Elsevier), 2011, pp. 100–110.
[5] D. B. Nelson, M. H. Nehrir, and C. Wang, "Unit Sizing of Stand-Alone Hybrid Wind/PV/Fuel Cell Power Generation Systems", Solar Energy (Elsevier), 2005, p. 89.
[6] Picklesimer, D.; Rowley, P.; Parish, D.; Carroll, S.; Bojja, H.; Whitley, J. N., "Techno-economic optimization of sustainable power for telecommunications facilities using a systems approach", International Symposium on Sustainable Systems and Technology (ISSST), IEEE, pp. 1-6, May 17-19, 2010.
[7] J. K. Kaldellis, "Optimum hybrid photovoltaic-based solution for remote telecommunication stations", Renewable Energy, Vol. 35, pp. 2307-2315, October 2010.
[8] Smith, S. S.; Iqbal, M. T., "Design and Control of a Hybrid Energy System for a Remote Telecommunication Facility", presented at the IEEE NECEC 17 conference, St. John's, NF, 2007.
[9] X. S. Han, H. B. Gooi, and D. S. Kirschen, "Dynamic economic dispatch: feasible and optimal solutions", IEEE Trans. Power Systems, Vol. 16, pp. 22-28, Feb. 2001.
[10] S. S. Rao, Engineering Optimization: Theory and Practice, 3rd Edition, John Wiley & Sons, 1996.
[11] Rai, G. D., Non Conventional Energy Sources, 4th Edition, Khanna Publishers, India, 2008, pp. 37-42.



American International Journal of Research in Science, Technology, Engineering & Mathematics

Available online at http://www.iasir.net

ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629 AIJRSTEM is a refereed, indexed, peer-reviewed, multidisciplinary and open access journal published by International Association of Scientific Innovation and Research (IASIR), USA (An Association Unifying the Sciences, Engineering, and Applied Research)

FAILURE ANALYSIS OF INTERNAL COMBUSTION ENGINE VALVES BY USING ANSYS
Goli Udaya Kumar1, Venkata Ramesh Mamilla2
1 M.Tech. Student, 2 Associate Professor, Department of Mechanical Engineering, QIS College of Engineering & Technology, Ongole – 523272, Andhra Pradesh, INDIA

Abstract: Intake and exhaust valves are very important engine components used to control the flow of intake and exhaust gases in internal combustion engines. They seal the working space inside the cylinder against the manifolds, and are opened and closed by means of the valve train mechanism. These valves are loaded by spring forces and subjected to thermal loading due to the high temperature and pressure inside the cylinder. The present study is focused on the different failure modes of internal combustion engine valves: failures due to fatigue, high temperature effects, and impact loads that depend on load and time. For the study of fatigue life, a combined S-N (maximum stress vs. number of cycles) curve is prepared. Such a curve helps in comparing fatigue failure for different materials at different high temperatures, and may also assist researchers in developing valve materials with a prolonged life. For achieving the above-said goals, coupled-field, fatigue and transient analyses are performed on the valves to determine their structural and thermal behaviour under working conditions.
Keywords: Failure, Internal Combustion Engine Valves, High Temperature, Fatigue, Wear.
I. INTRODUCTION
An internal combustion four-stroke engine has two valves per cylinder, known as the inlet and exhaust valves. The inlet valve admits the air or air-fuel mixture into the chamber; the exhaust valve forces out the exhaust gases. Many things can make a valve fail. The usual causes are thermal and mechanical overstress, longitudinal cyclic stress, creep conditions, forging defects and so on, and these lead to many problems: valve breakage, valve face wear, stem and guide wear, necking of the valve stem, and other valve problems. A valve is a component used to open or close a passage. To admit the air-fuel mixture into the engine cylinder and to force the exhaust gases out at the correct timing, some control system is necessary, and this is provided by the valves. In motor vehicle engines, two valves are used for each cylinder: an intake valve and an exhaust valve. The inlet valve is located at the junction of the cylinder and the intake port, and the exhaust valve at the junction of the exhaust port and the cylinder. Inlet valves are generally larger than exhaust valves, because the speed of the incoming air-fuel mixture is lower than the velocity of the exhaust gases, which leave under pressure; because of that pressure, the density of the exhaust gases is also comparatively high. Moreover, a smaller exhaust valve is preferred because of the shorter heat-flow path and the consequently reduced thermal loading. Generally the inlet and exhaust valves are about 45% and 38% of the cylinder bore, respectively.
II. PURPOSE OF VALVES
The purpose of the valves in the cylinder of the engine is to admit the air-fuel mixture and to force out the exhaust gases. The inlet valve, also known as the intake valve, admits the charge into the cylinder, and the exhaust valve sends the exhaust gases out of the cylinder. In a four-stroke engine, the inlet and exhaust valves each operate once in two revolutions of the crankshaft. This is achieved by a camshaft, which turns at half the speed of the crankshaft. The firing order of the cylinders establishes the sequence in which the valves open and close.
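The sizing rule of thumb quoted in the introduction (inlet ≈ 45% and exhaust ≈ 38% of the cylinder bore) can be illustrated with a small sketch; the 80 mm bore is a hypothetical example of ours, not a value from the paper.

```python
# Rule-of-thumb valve head diameters from cylinder bore
# (45% for inlet, 38% for exhaust, as stated in the introduction).
def valve_diameters(bore_mm: float) -> tuple[float, float]:
    return 0.45 * bore_mm, 0.38 * bore_mm

inlet, exhaust = valve_diameters(80.0)   # hypothetical 80 mm bore
print(f"inlet ~ {inlet:.1f} mm, exhaust ~ {exhaust:.1f} mm")
# inlet ~ 36.0 mm, exhaust ~ 30.4 mm
```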
The main components of the mechanism are the valves, rocker arm, valve spring, push rod, cam and camshaft. The fuel is admitted to the engine through the inlet valve and the burnt gases escape through the exhaust valve. The cam on the rotating camshaft pushes the cam follower and push rod upwards, thereby transmitting the cam action to the rocker arm. When one end of the rocker arm is pushed up by the push rod, the other end moves downwards; this pushes down the valve stem, causing the valve to move down and open the port. When the cam follower moves over the circular portion of the cam, the pushing action of the rocker arm on the valve is released and the valve returns to its seat and closes it under the action of the valve spring.
III. VALVE MATERIALS
The materials used for the inlet and exhaust valves are generally different because of the different operating conditions to which they are subjected. The material for the exhaust valve must have the following mechanical properties, since it operates under more severe conditions:


• Sufficient strength and hardness to resist tensile forces and wear
• Adequate fatigue strength
• High creep strength
• Resistance to corrosion
• Resistance to oxidation at the high operating temperatures
• Small coefficient of thermal expansion to avoid excessive thermal stresses
• High thermal conductivity for good heat dissipation

Figure 1: Valve dimensions

IV. FAILURE ANALYSIS
Failure analysis is the systematic examination of failed devices to determine the root cause of failure, and the use of this information to improve product reliability. Materials engineers and failure analysts generally provide expert opinions regarding materials for future analysis. The analyst's approach is to examine or test materials to evaluate the cause of product failure. The failure analyst must also have a procedure, or more precisely a method, of evaluation when a failure occurs. A logical, well-planned method of evaluation enables the analyst to prevent future failures by suggesting corrective measures or by selecting the appropriate material for the application.

V. STRUCTURAL, THERMAL AND TRANSIENT STRUCTURAL ANALYSIS

Figure 2: The stress range shown with the help of a colour bar; this value is obtained at a static condition.


Figure 3: The flux range shown with the help of a colour bar; this value is obtained at a static condition.


Figure 4: The stress range shown with the help of a colour bar; this value is obtained with the variation of time (5000 cycles, open and closed positions).

Figure 5: Each loading history value refers to 5000 × 1e7 cycles

VI. ANALYSIS WITH FIN BODY SEGMENT (ALUMINUM AS FIN MATERIAL)

Figure 5: The stress range shown with the help of a colour bar; this value is obtained at a static condition.

Figure 6: The flux range shown with the help of a colour bar; this value is obtained at a static condition.

Figure 7: The stress range shown with the help of a colour bar; this value is obtained with the variation of time (5000 cycles, open and closed positions).

Figure 8


VII. RESULTS AND DISCUSSION

STATIC STRUCTURAL
Case                 Deformation    Equivalent stress
Valve                0.0085947      23.363
Valve with Al fin    0.0024984      18.289
Valve with Mg fin    0.0003903      11.804
The above table compares the stress and deformation under static conditions for the three cases.

STEADY-STATE THERMAL
Case                 Temperature    Total heat flux    Thermal error
Valve                1000           59.298             6.9169e6
Valve with Al fin    1000           50.032             8.0943e6
Valve with Mg fin    1000           35.99              6.9235e6
The above table compares the temperature, heat flux and thermal error under static conditions for the three cases.

TRANSIENT STRUCTURAL
Case                 Deformation    Equivalent elastic strain    Equivalent stress    Structural error
Valve                0.60219        0.015715                     2921.5               93.67
Valve with Al fin    0.15358        0.017501                     3499.8               87.442
Valve with Mg fin    0.91751        0.014527                     2902.2               101.84
The above table compares the stress and deformation at 5000 cycles for the three cases.

TRANSIENT THERMAL
Case                 Temperature    Total heat flux    Directional heat flux
Valve                600            18.494             15.473
Valve with Al fin    600            12.758             10.29
Valve with Mg fin    600            12.821             10.29
The above table compares the temperature and heat flux at 5000 cycles for the three cases.

COUPLED FIELD
Case                 Total deformation    Equivalent elastic strain    Equivalent stress
Valve                0.21878              0.0037337                    734.52
Valve with Al fin    0.0603               0.01575                      3149.8
Valve with Mg fin    0.066731             0.025387                     2263
The above table compares the stress and deformation at 5000 cycles, using both thermal and structural loads, for the three cases.

FATIGUE
Case                 Life (maximum)    Safety factor
Valve                1e10              15
Valve with Al fin    1e10              15
Valve with Mg fin    1e12              15
The above table compares the life and safety factors for the three cases.

Static and transient analyses, for both structural and thermal conditions, together with coupled-field and fatigue analyses, were completed for the bare valve and for the valve including fin and seat segments. The coupled-field analysis gives different values because it combines static and thermal loads.

VIII. CONCLUSION
Static analysis was done on the valve, and on the valve with seat and fin segments, varying two fin materials. Steady-state thermal analysis was done on the same configurations.


Transient structural analysis was done on the valve, and on the valve with seat and fin segments, varying two fin materials, at 5000 cycles. Transient thermal analysis was done on the same configurations at 5000 cycles. Coupled-field analysis (combined static and thermal analysis) and fatigue analysis were also done on the valve, and on the valve with seat and fin segments, varying two fin materials. As per the analytical results, the valve with the Mg alloy fin is the right choice for maximum life.

IX. REFERENCES
1. Naresh Kr. Raghuwanshi, Ajay Pandey, R. K. Mandloi, International Journal of Innovative Research in Science, Engineering and Technology, Vol. 1, Issue 2, December 2012.
2. Roger Lewis and Rob S. Dwyer-Joyce, Combating Automotive Engine Valve Recession, University of Sheffield, Dept. of Mechanical Engineering, Sheffield, United Kingdom; Engine valve recession, Tribology & Lubrication Technology, 9/11/03.
3. Nurten Vardar and Ahmet Ekerim, Investigation of Exhaust Valve Failure in Heavy-duty Diesel Engine, Gazi University Journal of Science, GU J Sci 23(4): 493-499 (2010).
4. P. Gustof and A. Hornik, Modelling of the heat loads of the engine valves and the accuracy of calculations, Faculty of Transport, Silesian University of Technology, Katowice, Poland; Journal of Achievements in Materials and Manufacturing Engineering, 2007.
5. Subodh Kumar Sharma, P. K. Saini and N. K. Samria, Modelling and analysis of radial thermal stresses and temperature field in diesel engine valves with and without air cavity.
6. Patel Sujal V. and Pawar Shrikant G., Failure Analysis of Exhaust Valve Spring of C.I. Engine, Department of Automobile Engineering, RIT Sakharale 415414, Sangli, Maharashtra, India; International Journal of Engineering Research & Technology (IJERT), Vol. 2, Issue 3, March 2013, ISSN: 2278-0181.
7. Radek Tichánek and Miroslav Španiel, Engine Cylinder Head Thermal and Structural Stress Analysis, Czech Technical University in Prague, Czech Republic.
8. Semin, Rosli Abu Bakar and Abdul Rahim Ismail, Computational Visualization and Simulation of Diesel Engines Valve Lift Performance Using CFD, Automotive Focus Group, Faculty of Mechanical Engineering, University Malaysia Pahang; American Journal of Applied Sciences 5(5): 532-539, 2008, ISSN 1546-9239.
9. M. I. Karamangil, A. Avci and H. Bilal, Investigation of the effect of different carbon film thickness on the exhaust valve, Heat Mass Transfer 44 (2008) 587–598.
10. H. J. C. Voorwald, R. C. Coisse and M. O. H. Cioffi, Fatigue strength of X45CrSi93 stainless steel applied as internal combustion engine valves, Procedia Engineering 10 (2011) 1256–1261.
11. V. Kazymyrovych, Very high cycle fatigue of engineering materials, Karlstad University Studies 2009:22, ISSN 1403-8099, ISBN 978-91-7063-246-4.
12. Norman E. Dowling, Mechanical Behavior of Materials: Engineering Methods for Deformation, Fracture and Fatigue, Prentice-Hall.
13. D. J. Benac and R. A. Page, Integrating design, maintenance, and failure analysis to increase structural valve integrity, ASM International 3 (2001) 31-43.
14. Y. B. Liu, Y. D. Li, S. X. Li, Z. G. Yang, S. M. Chen, W. J. Hui and Y. Q. Weng, Prediction of the S–N curves of high-strength steels in the very high cycle fatigue regime, International Journal of Fatigue 32 (2010) 1351–1357.
15. Oh Geon Kwon and Moon Sik Han, Failure analysis of the exhaust valve stem from a Waukesha P9390 GSI gas engine, Engineering Failure Analysis 11 (2004) 439-447.
16. Z. W. Yu and X. L. Xu, Failure analysis and metallurgical investigation of diesel engine exhaust valves, Engineering Failure Analysis 13 (2006) 673–682.
17. Alan V. Levy, Solid Particle Erosion and Erosion-Corrosion of Materials, ASM International, 1995.
18. C. G. Scott, A. T. Riga and H. Hong, The erosion-corrosion of nickel-base diesel engine exhaust valves, Wear 181-183 (1995) 485-494.
19. Zhao Yun-cai and Yan Hang-zhi, Experimental study on wear failure course of gas-valve/valve-seat in engine, J. Cent. South Univ. Technol. 12 (2005) 243-246.
20. P. Forsberg, P. Hollman and S. Jacobson, Wear mechanism study of exhaust valve system in modern heavy duty combustion engines, Wear 271 (2011) 2477–2484.
21. Keyoung Jin Chun, Jae Hak Kim and Jae Soo Hong, A study of exhaust valve and seat insert wear depending on cycle numbers, Wear 263 (2007) 1147–1157.



American International Journal of Research in Science, Technology, Engineering & Mathematics

Available online at http://www.iasir.net

ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629 AIJRSTEM is a refereed, indexed, peer-reviewed, multidisciplinary and open access journal published by International Association of Scientific Innovation and Research (IASIR), USA (An Association Unifying the Sciences, Engineering, and Applied Research)

Direct smelting of gold concentrates, a safer alternative to mercury amalgamation in small-scale gold mining operations
*Abbey, C E, +Nartey, R S, #Al-Hassan, S, and *Amankwah, R K
University of Mines and Technology, *Mineral Engineering Department, +Geological Engineering Department, #Mining Engineering Department, P.O. Box 237, Tarkwa, Ghana
Abstract: Mercury is used in small-scale mining to amalgamate gold particles, facilitating their separation from heavy sands. This method of extraction finds application in over 50 developing countries. However, due to careless handling, mercury may be lost to the environment, mainly through inefficient amalgam distillation techniques and spillage. The negative environmental and health-related effects of mercury in mining communities in many countries have prompted research into the development of safer alternatives. This paper reports on a study of a safer alternative method of gold recovery, direct smelting. The process generates a final product in one step, compared with the three steps of amalgamation, retorting/heating and smelting currently practised. Direct smelting has the potential to replace amalgamation and retorting because it is more effective, easier, quicker and cheaper.
Key words: Gold, small-scale mining, mercury, direct smelting
I. Introduction
Mercury is used in small-scale mining to amalgamate gold particles, facilitating their separation from heavy sands. It may be added during comminution or after gravity concentration, when the gold is concentrated along with sands of a characteristic black colour, generally known as black sands. Amalgamation involves the addition of mercury to the gold concentrate and vigorous mixing to induce the mercury and gold particles to make contact, which results in a whitish amalgam. This is subsequently separated from the black sands and, after pressing out excess mercury, the amalgam is often heated in the open air to obtain a sponge gold. This method of extraction finds application in over 50 developing countries (Appleton et al., 1999; Drasch et al., 2001; Babut et al., 2003; Serfor-Armah et al., 2004; Veiga et al., 2006; Hilson et al., 2007; Tschakert and Singha, 2007). However, due to careless handling, mercury may be lost to the environment, mainly through the disposal of amalgam residues, inefficient amalgam distillation techniques and spillage. It is estimated that about 1000 tonnes of mercury are released into the general environment per annum. About 200–250 tonnes of mercury are released in China, and 100–150 tonnes in Indonesia; other countries, including Bolivia, Brazil, Colombia, Peru, the Philippines, Venezuela, Ghana and Zimbabwe, release up to 30 tonnes each (Veiga et al., 2006). According to the United Nations Industrial Development Organisation (UNIDO), there are about 30 million miners involved in this industry worldwide, and virtually all of them utilize mercury in gold recovery (Anon, 2009a). The negative environmental and health-related effects of mercury in mining communities in many countries have prompted research into the development of safer alternatives. Some safer methods that have been prescribed include winnowing, the coal gold agglomeration process, smelting, and leaching processes such as IGoli, Haber and cyanidation (Hilson and Monhemius, 2006; Amankwah et al., 2009). In coal agglomeration the final float concentrate has to be processed and smelted.
On the other hand, the IGoli, Haber and cyanidation processes utilise leaching reagents to dissolve gold, and the solutions have to be processed further for gold recovery. By smelting the concentrates directly, a process could be developed that generates a final product in one step. Smelting is a technique already known to the miners, as they smelt sponge gold after amalgamation and retorting/heating. The one-step smelting technique was tested both under laboratory conditions and in the field, under the Ghana Mercury Abatement Project sponsored by the European Union.
II. The amalgamation/heating/smelting process
Amalgamation is a process whereby gold particles are incorporated into mercury to effect separation from the associated sands. Amalgamation normally begins after gravity concentration, though in some cases mercury may be added during gravity concentration. Mercury is added to the gold concentrate and agitated mechanically to effect contact with the gold particles.


The amalgam is then recovered, as illustrated in Figure 1. The use of mercury is simple, as it collects most of the gold particles and glues them together, as shown in Figure 2. On heating the amalgam, the mercury is vapourised and the gold produced has a network of pores, which is why it is referred to as sponge gold. The sponge gold is then smelted to obtain the gold bullion, or what small-scale miners refer to as 'refined gold'. Unfortunately, these small-scale miners do not use personal protective wear in their operations (Figure 1), and this is a source of concern. The use of bare hands in handling mercury could introduce mercury into the operator's body, as the skin can absorb it through sweat pores. Mercury can also enter the body by inhalation, ingestion and injection.


Figure 1. The amalgamation process showing (a) the clean gold concentrate (b) vigorous mixing after addition of mercury (c) separation of amalgam from sands (d) ball of amalgam

Figure 2. Gold particles glued together by mercury (Styles et al., 2006)

The impact of mercury on the public health of small-scale mining communities has been reported by several investigators (Appleton et al., 1999; Drasch et al., 2001; Babut et al., 2003; Serfor-Armah et al., 2004; Hilson et al., 2007; Tschakert and Singha, 2007). In a study at an Artisanal and Small-scale Mining (ASM) community in Ghana (Babut et al., 2001; 2003), it was reported that 20% of sampled persons had tremors and 65% had sleep disorders. Examination of mercury in biological samples indicated that between 86% and 91% of the people tested had varying levels of mercury intoxication, and that the metal was having an impact on the public health of the community. Because of these problems, retorts have been introduced to improve the efficiency of amalgam distillation and diminish the escape of mercury fumes. The two major retorts available for use in small-scale operations around the world are the steel retort and the Thermex glass retort (Babut, 2001). Unfortunately, the miners do not patronize the Thermex glass retort because of its fragile nature and long heating times, especially for large pieces of amalgam. The steel retort is also not accepted, because of its opaque nature, which prevents the miners from observing the process directly, and the darkening of the gold after retorting.
III. The Direct Smelting Technique
The direct smelting technique was developed as a method with the potential to replace amalgamation and heating (Amankwah et al., 2009). The method begins with the gold concentrate that is prepared for amalgamation, as presented in Figure 1a. The technique has been tried on concentrates obtained from all the main types of gold ores, namely refractory sulphidic gold ores, free-milling ores and alluvial material. Refractory sulphidic concentrates, however, require pretreatment because of their mineralogy. Depending on the source of the material concentrated and the type of comminution process used, the concentrate may also contain abraded steel components that have to be removed in a pretreatment stage.
A. Pretreatment of concentrates
A.1 Oxidation of sulphides
Sulphide concentrates have to be oxidized before or during the smelting process. Though the addition of potassium nitrate during smelting may lead to oxidation, it was realised in the study that the addition leads to excessive boiling,


with the potential for molten material to spill over from the crucible. Pre-smelting oxidation of the sulphides is therefore more acceptable. During oxidation, the concentrate is spread in a roasting pan and heated over an open fire for up to 20 minutes while stirring. Microscopic investigation has shown that, with an ample supply of air, 20 minutes of roasting is enough to convert most of the pyrite to hematite. The sequence of oxidation observed is presented in Figure 3. In the course of oxidation, some oxides of sulphur are released; the furnace developed is equipped with an extractor that directs the gases into a lime bath where they are neutralised.


Figure 3. Photomicrographs showing the conversion of pyrite to hematite during roasting: (a) pyrite particle in the concentrate; (b) partially oxidized pyrite with a boundary of hematite (reddish) after 5 min of roasting; (c) pyrite at an advanced stage of conversion to hematite after roasting for 10 min; (d) hematite produced after 20 min. (Scale bar: 0.15 mm.)

A.2 Removal of abraded steel
Some gold ores are very hard and thus generally abrade and break sections of the mild steel grinding surfaces of the comminution equipment used. Since steel is heavy, the particles introduced into the ore eventually end up in the concentrate. Steel is magnetic, so magnetic separation is the method of choice for removing the pieces. The concentrate is pulped, and a magnet, enclosed in a transparent polythene bag, is brought into contact with the pulp. The magnetic material is picked up and later washed into another bowl after demagnetization. This is repeated until all the magnetic material has been removed. Because of the ferromagnetic properties of the abraded steel, it is necessary to use a very low intensity magnet (as low as 5.0×10⁻³ tesla) to prevent the trapping of free gold particles between steel pieces. Magnets of a strength similar to that of magnetic alphabet sets (the kind usually stuck on fridges) were found to be suitable.
B. Smelting furnaces
Two furnaces have been constructed for the field application of this process: a gas-fired and a charcoal-fired furnace. The furnaces have been christened sika bukyia by small-scale miners in Ghana. The charcoal-fired furnace has a steel shell and a heating chamber lined with refractory bricks, as shown in Figure 4. The external dimensions of the furnace are 50 cm × 35 cm, and the combustion chamber has dimensions 35 cm × 20 cm. It can accommodate two smelting crucibles at a time. The unit stands at a height of about 80 cm, and a removable hood of height 120 cm is equipped with an exhaust fan which propels gases out of the combustion chamber and away from the operator.


Figure 4. The charcoal-fired furnace (Amankwah et al., 2009), showing the removable chimney, exhaust-fan pipe, combustion chamber, refractory bricks, stabilizer bar and steel shell

The use of charcoal, although regarded as both cheap and easily available to small-scale miners, has environmental implications, and so a gas-fired furnace was developed to ameliorate this.


The gas-fired furnace shown in Figure 5 stands at a height of 30 cm, has an internal diameter of 16 cm, and is equipped with an air inlet valve and a hood. It can take one crucible at a time. A cylindrical cross-section was chosen, with the fuel inlet tangential to the cylinder, to allow the flame to circulate in the chamber for a uniform distribution of heat. Slots/grooves were cut in the fuel injection pipe to aid oxygen injection. Ceramic fibre was used for insulation.


Figure 5. Pictorial illustration of the gas-fired furnace, showing the hood, handle, gas cylinder hose, ceramic fibre insulation, casting, steel casing and furnace chamber

C. Fuels utilised for the smelting operation
Gold melts at 1063 °C, so a furnace for smelting gold concentrates should be able to maintain a temperature above this value. The temperature profile achieved with each fuel is presented in Figure 6. The profile for LPG shows a rapid rise in the temperature of the combustion chamber, from room temperature to about 300 °C within the first 3 min. The temperature increased further to about 1300 °C after 10 min, and the temperature attained after 30 min was 1600 °C. The profile for charcoal shows a slow initial rise in temperature, reaching about 50 °C in the first 3 min, followed by a sudden rise to above 1200 °C after 10 min, beyond which the profile approached a plateau.


Figure 6. Temperature profile (temperature, °C, against processing time, minutes) for LP gas and charcoal

It was observed that about 0.7 kg of LPG was used for a full direct smelting cycle, whereas between 10 kg and 12 kg of charcoal was needed. This is due to the higher calorific value of LPG compared with that of charcoal (Anon, 2009b). In a related study, palm kernel shells were used as fuel in place of charcoal. Though the temperature attained was high enough for smelting gold, the bed on which the crucibles were mounted was not stable, and the flames generated were high and difficult to control (Amankwah et al., 2009).
D. Fluxing and smelting
Smelting is basically a melting process: the charge for smelting is heated until it becomes molten. In preparing material for smelting, components known as fluxes are added in order to reduce the melting temperature.


On melting, the flux serves as a collector of the impurities. In addition, the flux protects the furnace liners or crucibles from direct attack by the molten material, and inhibits volatilization. The common fluxes used in smelting gold concentrates include sodium tetraborate or borax (Na2B4O7), silica sand (SiO2), and potassium nitrate (nitre, KNO3). Silica melts at very high temperatures and is able to dissolve most of the base metal oxides to form a viscous slag. Borax is added to reduce the melting point of the charge and the viscosity of the resulting slag. Nitre is an oxidizing agent, and oxidizes many of the base metals and metal sulphides present so that they can be slagged. Sodium carbonate reduces slag viscosity and improves slag clarity. The major reactions of borax, soda ash and nitre are represented by Equations 1 to 4. Silica is a flux, but it may not need to be added directly in this process, as the gold concentrates generally retain enough silica. Borax melts to form a colourless transparent glass consisting of a mixture of sodium metaborate and boric anhydride (Equation 1). The boric anhydride then reacts with a metal oxide (e.g. zinc oxide) to form the metal borate (Equation 2).

Na2B4O7 → Na2B2O4 + B2O3                              (1)
ZnO + B2O3 → ZnB2O4                                   (2)
Na2CO3 + Na2SiO3 → Na4SiO4 + CO2                      (3)
10 KNO3 + 4 FeS2 → 4 FeO + 5 K2SO4 + 3 SO2 + 5 N2     (4)
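As a quick check that the reconstructed Equation 4 balances, each element can be counted on both sides; this is a verification sketch of ours, not part of the paper.

```python
from collections import Counter

# Element counts for 10 KNO3 + 4 FeS2 -> 4 FeO + 5 K2SO4 + 3 SO2 + 5 N2
left = Counter({"K": 10, "N": 10, "O": 30, "Fe": 4, "S": 8})
right = Counter({"Fe": 4, "O": 4 + 20 + 6, "K": 10, "S": 5 + 3, "N": 10})
print(left == right)  # True: Equation 4 is balanced
```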

The percentage of each chemical in the smelting charge depends on the type of concentrate to be smelted. Free-milling concentrates and alluvials use the same composition. Refractory sulphidic concentrates need to be pretreated by roasting; after the pretreatment they respond just like the other two types of concentrate. Both the laboratory investigations and the field trials indicate that one part of concentrate can be mixed with two parts of borax and two parts of soda ash for a good smelting operation (Figure 7).


Figure 7. The combination of gold concentrate (black sands), borax and soda ash for the direct smelting process

After mixing, the charge is poured into crucibles and placed in the already lit furnace. The smelting process using either of the fuels can be completed within 30 min. On removing the crucible with tongs, the molten material is poured into moulds and allowed to cool before the gold is separated from the slag (Figure 8). The crucibles used could take a maximum of 60 g of gold concentrate, and the minimum weight of gold recovered by this method was 0.2 g.
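The 1 : 2 : 2 proportion above translates into a simple charge calculation; the function below is an illustrative sketch (names are ours), capped at the 60 g crucible capacity reported in the text.

```python
# Flux charge for direct smelting: 1 part concentrate, 2 parts borax,
# 2 parts soda ash (Figure 7); crucibles take at most 60 g of concentrate.
def smelting_charge(concentrate_g: float) -> dict:
    if concentrate_g > 60:
        raise ValueError("crucible takes at most 60 g of concentrate")
    return {"black sands (g)": concentrate_g,
            "borax (g)": 2 * concentrate_g,
            "soda ash (g)": 2 * concentrate_g}

print(smelting_charge(60))
# {'black sands (g)': 60, 'borax (g)': 120, 'soda ash (g)': 120}
```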


Figure 8. The direct smelting process: (a) removing crucibles from the furnace; (b) pouring of the molten charge; (c) molten material cooling down; (d) separation of slag from gold; (e) smelted gold

Direct smelting has the advantage of recovering occluded or coated gold particles that report to the concentrate but cannot be picked up by mercury. Sulphides in the concentrate decompose and release the contained gold, and the very fine particles that would escape during the squeezing of excess mercury from the amalgam are also retrieved. Both the laboratory study and the field trials showed that direct smelting is a viable option. By using the right combination of borax, soda ash and silica, gold-bearing black sands were smelted with relatively high recoveries. Earlier work by Amankwah et al. (2009) with several small-scale miners in Ghana shows that gold recovery via amalgamation, heating and smelting gave an average recovery of about 88%, while the figures for direct smelting are above 95%. In addition, direct smelting takes a shorter time to execute. The two furnaces (sika bukyia) constructed for direct smelting, which may be powered by either liquefied petroleum gas or charcoal, work well, but the miners prefer the gas furnace.


For a method to replace mercury amalgamation and become widely accepted, it has to be cheap, transparent, fast and safe, and the materials utilized should be readily available. Direct smelting meets these requirements and thus has the potential to replace gold recovery by amalgamation and heating. After the fieldwork, the next steps would be to subsidize the technology heavily and disseminate it to the small-scale miners to try. Full adoption of the technology will depend on massive educational outreach.
Acknowledgements
The authors are grateful to the European Union for sponsoring the project. Contributions from Mike Styles (British Geological Survey), Kevin D'Souza (Wardell Armstrong LLP, UK) and Wilson Mutagwaba (MTL Consulting, Tanzania), who were key players in the team, are gratefully acknowledged.
References
Amankwah, R K, Styles, M T, Al-Hassan, S and Nartey, R S (2010). The application of direct smelting of gold concentrates as an alternative to mercury amalgamation in small-scale gold mining operations. International Journal of Environment and Pollution, 41(3/4), 304-315.
Anon (1999). Assistance in Assessing and Reducing Mercury Pollution Emanating from Artisanal Gold Mining in Ghana – Phase I. www.natural-resources.org/minerals/CD/docs/unido/sub2igoatt6part2.pdf
Anon (2009a). United Nations Industrial Development Organization, Global Mercury Project. http://www.globalmercuryproject.org/. Accessed 5 Aug 2009.
Anon (2009b). www.home.att.net/~cat6a/fuels-VII.htm; www.engineeringtoolbox.com/fuels-Higher-calorific-values-d 169.html; www.kayelaby.npl.co.uk/chemistry/3 11/3 11 4 .html
Appleton, J.D., Williams, T.M., Breward, N., Apostol, A., Miguel, J. and Miranda, C. (1999). Mercury contamination associated with artisanal gold mining on the island of Mindanao, the Philippines. The Science of the Total Environment, Vol. 228, Nos. 2–3, pp. 95–109.
Babut, M., Sekyi, R., Rambaud, A., Pottin-Gautier, M., Tellier, S., Bannerman, W. and Beinhoff, C. (2001). Assessment of environmental impacts due to mercury used in artisanal gold mining in Ghana. United Nations Industrial Development Organization (UNIDO), unpublished study.
Babut, M., Sekyi, R., Rambaud, A., Potin-Gautier, M., Tellier, S., Bannerman, W. and Beinhoff, C. (2003). Improving the environmental management of small-scale gold mining in Ghana: a case study of Dumasi. Journal of Cleaner Production, Vol. 11, No. 2, pp. 215–221.
Drasch, G., Böse-O'Reilly, S., Beinhoff, C., Roider, G. and Maydl, S. (2001). The Mt. Diwata study on the Philippines 1999 – assessing mercury intoxication of the population by small-scale gold mining. The Science of the Total Environment, Vol. 267, Nos. 1–3, pp. 151–168.
Hilson, G., Hilson, C.J. and Pardie, S. (2007). Improving awareness of mercury pollution in small-scale gold mining communities: challenges and ways forward in rural Ghana. Environmental Research, Vol. 103, No. 2, pp. 275–287.
Hilson, G. and Monhemius, A.J. (2006). Alternatives to cyanide in the gold mining industry: what prospects for the future? Journal of Cleaner Production, 14(12-13): 1158-1167.
Serfor-Armah, Y., Nyarko, B.J.B., Adotey, D.K., Dampare, S.B. and Adomako, D. (2004). The impact of small-scale mining activities on the levels of mercury in the environment: the case of Prestea and its environs. Journal of Radioanalytical and Nuclear Chemistry, Vol. 262, No. 3, pp. 685–690.
Styles, M T, D'Souza, K P C, Al-Hassan, S, Amankwah, R, Nartey, R S and Mutagwaba, W (2006). Ghana Mining Sector Support Programme Project ACP GH 027, Mercury Abatement Phase 1 Report, 143 pp.
Tschakert, P. and Singha, K. (2007). Contaminated identities: mercury and marginalization in Ghana's artisanal mining sector. Geoforum, Vol. 38, No. 6, pp. 1304–1321.
Veiga, M M, Maxson, P A and Hylander, L D (2006). Origin and consumption of mercury in small-scale gold mining. Journal of Cleaner Production, 14(3-4): 436-447.



American International Journal of Research in Science, Technology, Engineering & Mathematics

Available online at http://www.iasir.net

ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629 AIJRSTEM is a refereed, indexed, peer-reviewed, multidisciplinary and open access journal published by International Association of Scientific Innovation and Research (IASIR), USA (An Association Unifying the Sciences, Engineering, and Applied Research)

Review: Apriori Algorithms and Association Rule Generation and Mining
Minal Ingle¹, Nitin Suryavanshi²
¹Research Student, ²Assistant Professor, Department of Computer Engineering, SSBT's College of Engineering and Technology, Bambhori, Jalgaon, North Maharashtra University, INDIA

Abstract: The prototypical application of Association Rule Mining is market-basket analysis. The Apriori algorithm efficiently generates and mines association rules from large databases or data repositories, producing interesting frequent or infrequent candidate itemsets with respect to a support count. However, Apriori may need to produce a vast number of candidate sets, requires several scans over the database to generate them, acquires considerable memory for the candidate generation process, and, because of the multiple scans, imposes a heavy I/O load. The approach proposed here to overcome these difficulties is to improve the Apriori algorithm by running it in reverse and by improving the pruning strategy, which reduces the scans required to generate candidate itemsets and attaches a valence, or weightage, to strong association rules. In this way the memory and time required to generate candidate itemsets are reduced, and the Apriori algorithm becomes more effective and efficient. Keywords: Association Rules, Apriori algorithm, frequent itemsets

I. Introduction

A. Association Rules and Association Rule Mining
Association Rule Mining (ARM) is one of the most important and well-researched techniques of data mining, first introduced by Agrawal et al. in 1993 [1]. ARM aims to extract interesting correlations, frequent patterns, associations, or causal structures among sets of items in databases or other data repositories. Association rules are if/then statements that help discover relationships among seemingly unrelated data in a data repository. An association rule consists of two parts: an antecedent, which represents the "if" part, and a consequent, which represents the "then" part. An antecedent is an item found in the data; a consequent is an item found in combination with the antecedent. Association rules use two criteria, support and confidence, to identify relationships, and rules are generated by analyzing data for frequent if/then patterns. Association rules generally need to satisfy a user-specified minimum support and a user-specified minimum confidence at the same time.
B. Association Rule Mining
Association rule mining is usually split into two separate steps:
1. First, minimum support is applied to find all frequent itemsets in a database.
2. Second, these frequent itemsets and the minimum confidence constraint are used to form rules.
While the second step is straightforward, the first step requires more attention. The support S of an association rule is the percentage of records that contain X ∪ Y relative to the total number of records in the database. The confidence C of an association rule is the percentage of transactions that contain X ∪ Y relative to the number of records that contain X. Confidence is a measure of the strength of the association rule.
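As a minimal illustration of these two measures (the helper functions and the toy baskets below are ours, not the paper's), support and confidence can be computed directly over a list of transactions:

```python
def support(transactions, itemset):
    """Fraction of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(transactions, x, y):
    """Strength of the rule X -> Y: support(X union Y) / support(X)."""
    return support(transactions, x | y) / support(transactions, x)

# Toy market-basket data: each transaction is a set of items.
baskets = [{"milk", "bread"}, {"milk", "bread", "butter"}, {"bread"}, {"milk"}]
print(support(baskets, {"milk", "bread"}))       # 0.5
print(confidence(baskets, {"milk"}, {"bread"}))  # 0.666...
```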

Figure 1.1: Association Rule




Association Rule Mining aims to find all association rules that satisfy user-defined minimum support and confidence in a given database. ARM is normally decomposed into two sub-problems. The first is to locate those itemsets whose occurrence exceeds a user-defined threshold in the database; these itemsets are known as frequent itemsets. The second sub-problem is to produce association rules from those large itemsets under the constraint of minimal confidence.
C. Association Rule Generation Steps
Normally, association rule generation follows these steps:
1. The set of candidate k-itemsets is generated by 1-extensions of the large (k-1)-itemsets generated in the previous iteration.
2. Supports for the candidate k-itemsets are obtained by a scan over the database.
3. Itemsets that do not reach the minimum support are discarded, and the remaining itemsets are called large k-itemsets.
D. Model of Association Rule Generation
Association rule generation involves several processes, which are easily understood through the following model. The conceptual model of Association Rule Generation is shown in Fig. 1.2.

Figure 1.2: Conceptual model of Association Rule Generation

Association rule generation consists of selecting a database from the large repository, preprocessing the selected data, mining the candidate frequent itemsets from the preprocessed data, and pruning the frequent itemsets according to a given threshold. In this way rules are generated, and association rules are then mined according to the given support and confidence. ARM is mainly used for mining frequent and infrequent itemsets from large databases and is based on two principles, support and confidence. Association rules are the if/then sentences that show the relationships among itemsets. One of the most important ARM algorithms is the Apriori algorithm. Section II surveys papers on Apriori algorithms with their merits and demerits, and Section III gives a short description of the improved Apriori algorithm.

II. Literature Survey

To understand the tradeoffs in today's ARM algorithms such as Apriori, it is helpful to briefly examine their history. The Apriori algorithm needs pruning techniques that reduce the number of scans over the database.
A. Related Work
To understand the Apriori algorithm, one must know its variations.
1) Apriori Algorithm
Rakesh Agrawal et al. proposed the Apriori algorithm in 1994. Apriori is one of the most important advances in the history of association rule mining. It eased the problem of association rule mining only partially, because generating many




candidate itemsets still requires considerable space and wasted effort. Apriori is more capable during candidate generation because it uses a distinct candidate generation process and an efficient pruning strategy. It exploits the downward-closure property and uses Join and Prune steps to mine frequent itemsets; its merit is that any subset of a frequent itemset is also a frequent itemset. Still, it has the drawback of requiring more time and memory for the candidate generation process (though less than the original problem), and it needs multiple scans over the database to generate the candidate sets [2].
2) 3DApriori Algorithm
With the rapid increase in log data, there was a need to handle such logging data. SHAO Xiao-dong et al. proposed the 3DApriori algorithm in 2009. The authors sought the association rules behind logging-data transactions; 3DApriori interprets the logging data. Attribute-data discretization and spatial-predicate extraction are the main concepts of 3DApriori for association rule generation. It increases the efficiency of ARM for the logging-data transformation, and its time complexity is mainly caused by the space predicates [4].
3) Apriori Mend Algorithm
Association rules are used to discover interesting rules from large collections of data, stating an association between items or sets of items. D. Magdalene Delighta Angeline et al. proposed the Apriori Mend algorithm in 2012. Many database systems are so large that the efficiency of the basic algorithm suffers. To improve mining efficiency and reduce the computation scans, the authors proposed Apriori Mend, which generates itemsets using a hash function and is based on the concept of closed frequent itemsets. Apriori Mend is found to be more effective than the traditional Apriori algorithm in terms of efficiency, but it has the disadvantage of increased execution time [6].
4) Parallel Apriori Algorithm
Exploring frequent patterns in transactional databases is a core association rule mining problem, and Apriori is one of the classic algorithms for this mining process; developing fast and efficient algorithms that can handle large volumes of transactional data is therefore a challenging task. Ning Li et al. proposed a Parallel Apriori Algorithm based on MapReduce in 2012. MapReduce is a framework for processing huge transactional databases: Map and Reduce functions are used to generate the association rules. The approach scales well and efficiently processes large datasets on commodity hardware, but it requires more computational power and memory to find association rules [7].
5) Apriori Algorithm for Multidimensional Data
Feri Sulianta et al. used the Apriori algorithm for multidimensional data in 2013. Multidimensional data reduction is performed as a pre-processing step, and validation levels (data training after reduction, data training without reduction, data testing) are implemented to verify the reliability of the association rules. The work explores multidimensional data-handling methods to build association rules that target products more effectively, but it requires the data reduction step [8].
6) Apriori-TFP Algorithm
Z. Yang et al. proposed the Apriori-TFP algorithm, which works by extracting the Total from the Partial. In the ARM process, raw data are first pre-processed and stored in a partial support tree (P-tree); then association rule generation is performed. ARM processing time is reduced thanks to the efficient pre-processing, but the algorithm takes a number of scans for dense data [3].
7) GPApriori Algorithm
Fan Zhang et al. proposed GPApriori in 2011. GPApriori performs a parallelized version of the support-counting step on the GPU (Graphical Processing Unit), with the support-counting procedure optimized for GPU execution. It achieves a speed-up on modern GPUs compared with CPU-based Apriori implementations, but the GPU increases the complexity of the ARM implementation [5].

Apriori has many variations. The classic algorithm enumerates all of the frequent itemsets; when it encounters dense data, where a huge number of lengthy patterns appear, its performance declines dramatically. Every variant inherits Apriori's drawbacks: the number of scans over the database and the memory space required for the candidate generation process. Reducing the database scans and the memory footprint is essential for Apriori to work efficiently and accurately.
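To make the Join and Prune steps discussed above concrete, here is a minimal level-wise Apriori sketch (our own illustrative code, not taken from any of the surveyed papers); note the one database scan per level, which is exactly the cost the improved algorithm of Section III targets:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic level-wise Apriori over a list of transactions (sets of items)."""
    n = len(transactions)
    def sup(s):
        return sum(s <= t for t in transactions) / n
    # Level 1: frequent single items.
    items = {frozenset([i]) for t in transactions for i in t}
    Lk = {s for s in items if sup(s) >= min_support}
    frequent = {}
    k = 2
    while Lk:
        frequent.update((s, sup(s)) for s in Lk)
        # Join step: merge pairs of frequent (k-1)-itemsets into k-item candidates.
        candidates = {a | b for a in Lk for b in Lk if len(a | b) == k}
        # Prune step: downward closure -- drop candidates with an infrequent (k-1)-subset.
        candidates = {c for c in candidates
                      if all(frozenset(s) in Lk for s in combinations(c, k - 1))}
        # Scan step: one pass over the database per level to count support.
        Lk = {c for c in candidates if sup(c) >= min_support}
        k += 1
    return frequent
```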




III. Proposed System

The Apriori algorithm was first introduced by Agrawal et al. in 1994 [2]. It uses a breadth-first search strategy to count the support of itemsets and uses a candidate generation function. Apriori relies on the downward-closure (Apriori) property, which states that if an itemset is frequent, then all of its subsets must be frequent; this gives the algorithm the advantage that any subset of a frequent itemset is also a frequent itemset. However, Apriori still has the drawback of requiring more time, space, and memory for the candidate generation process, and when it encounters dense data with a large number of long patterns, its performance declines dramatically. With the intention of generating more valuable rules, this paper proposes the concept of an improved Apriori algorithm for association rules.
A. Improved Apriori Algorithm
The Apriori algorithm must be improved so that it provides efficiency in association rule generation and mining. The paper proposes an Improved Apriori algorithm that works exactly in reverse of the traditional one: it generates the large candidate k-itemsets first and then, decreasing k one step at a time, generates the smaller candidate itemsets. The improved algorithm uses a pruning strategy to reduce the space required to store the candidate itemsets; the strategy is based on reducing the number of infrequent candidate itemsets generated, so that only frequent itemsets are extracted and less memory is required. The improved algorithm also assigns a valence, or weightage, to strong association rules, so that highly strong association rules are generated and identified. Processing in this reverse fashion, with the improved pruning strategy, reduces the scans required to generate candidate itemsets and thus the memory and time needed, making the Apriori algorithm more effective and efficient. A speculative sketch of this reverse strategy follows.
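The paper gives no pseudocode for the reverse procedure, so the following is only a speculative sketch of the idea under our own reading: start from the largest candidates and shrink infrequent ones one item at a time, accepting frequent ones by downward closure.

```python
from itertools import combinations

def reverse_apriori(transactions, min_support):
    """Top-down (reverse) candidate generation: an assumed reading of the
    paper's proposal, not the authors' published algorithm."""
    n = len(transactions)
    sup = lambda s: sum(s <= t for t in transactions) / n
    frequent = {}
    level = {frozenset(t) for t in transactions}   # largest candidates first
    while level:
        next_level = set()
        for c in level:
            if c in frequent:
                continue
            if sup(c) >= min_support:
                # Downward closure: every subset of c is frequent too,
                # so c is accepted without enumerating its subsets.
                frequent[c] = sup(c)
            elif len(c) > 1:
                # Shrink an infrequent candidate by one item per step.
                next_level |= {frozenset(s) for s in combinations(c, len(c) - 1)}
        level = next_level
    return frequent
```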
IV. Conclusion
In this paper, the authors have studied the various variations of the Apriori algorithm. Apriori has the merit that any subset of a frequent itemset is also a frequent itemset; however, it still has the drawback of requiring more time, space, and memory for the candidate generation process. This review focuses on how to address the efficiency problems of Apriori and outlines an improved Apriori association rule generation algorithm that may improve its efficiency. The Apriori algorithm is used in various applications such as market-basket analysis, market and risk management, inventory control, and telecommunication networks.

References
[1] R. Agrawal, T. Imielinski, and A. Swami, "Mining association rules between sets of items in large databases," Proc. ACM SIGMOD Record, vol. 22, no. 2, ACM, 1993, pp. 207-216.
[2] R. Agrawal and R. Srikant, "Fast algorithms for mining association rules," Proc. 20th Int. Conf. Very Large Data Bases (VLDB), vol. 1215, 1994, pp. 487-499.
[3] Z. Yang, W. Tang, A. Shintemirov, and Q. Wu, "Association rule mining-based dissolved gas analysis for fault diagnosis of power transformers," IEEE Trans. Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 39, no. 6, 2009, pp. 597-610.
[4] X. dong Shao, "The application of improved 3DApriori three-dimensional association rules algorithm in reservoir data mining," Proc. CIS (1), IEEE Computer Society, 2009, pp. 64-68. [Online. Available: http://dblp.uni-trier.de/db/conf/cis/cis20091.html#Shao09]
[5] F. Zhang, Y. Zhang, and J. D. Bakos, "GPApriori: GPU-accelerated frequent itemset mining," Proc. CLUSTER, IEEE, 2011, pp. 590-594. [Online. Available: http://dblp.unitrier.de/db/conf/cluster/cluster2011.html#ZhangZB11a]
[6] I. Samuel Peter James and D. Magdalene Delighta Angeline, "Association rule generation using Apriori Mend algorithm for students' placement," International Journal of Emerging Sciences, vol. 2, no. 1, 2012, pp. 78-86.
[7] N. Li, L. Zeng, Q. He, and Z. Shi, "Parallel implementation of Apriori algorithm based on MapReduce," Proc. 13th ACIS Int. Conf. Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2012, pp. 236-241.
[8] F. Sulianta, T. H. Liong, and I. Atastina, "Mining food industry's multidimensional data to produce association rules using Apriori algorithm as a basis of business strategy," Proceedings of 2013 International Conference, 2013, pp. 176-181.

Acknowledgments
The authors would like to express their gratitude to Prof. Dr. Girish K. Patnaik, Head of the Computer Science and Engineering Department, SSBT's College of Engineering and Technology, Jalgaon, Maharashtra, India, for his guidance and support in this work. The authors are also thankful to the Principal, SSBT's College of Engineering and Technology, Jalgaon, Maharashtra, India, for being a constant source of inspiration.



American International Journal of Research in Science, Technology, Engineering & Mathematics

Available online at http://www.iasir.net

ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629 AIJRSTEM is a refereed, indexed, peer-reviewed, multidisciplinary and open access journal published by International Association of Scientific Innovation and Research (IASIR), USA (An Association Unifying the Sciences, Engineering, and Applied Research)

AN OPTIMAL APPROACH FOR THE TASKS ALLOCATION BASED ON THE FUSION OF EC AND ITCC IN THE DISTRIBUTED COMPUTING SYSTEMS
Abhilasha Sharma, DIT University, Dehradun, Uttarakhand, India
Surbhi Gupta, AMITY University, Noida, Uttar Pradesh, India
Abstract: Distributed computing systems (DCS) are of current interest due to advances in microprocessor technology and computer networks. A DCS consists of multiple computing nodes that communicate with each other by a message-passing mechanism. Advances in communication and information technologies have led to the development of distributed systems, in which task allocation is an essential phase. We consider the problem of m tasks and n processors (where m >> n). In this paper a heuristic model is presented that performs static allocation of a set of m tasks of a program to a set of n processors with the constraints of minimizing the Inter Task Communication Cost (ITCC) and maximizing the overall throughput of the system, in such a way that the load allocated to all processors is balanced. Both the Execution Cost (EC) and the ITCC have been taken into consideration in designing the algorithm.
Keywords: Distributed Computing Systems (DCS), Task Allocation, Execution Cost, Inter Task Communication Cost.
I. Introduction
Distributed computing systems have become popular due to advances in computer hardware and interconnection network technology. A distributed processing system is a computer system in which multiple processors are connected through high-bandwidth communication links. These links provide a medium for each processor to access data and programs on remote processors. The performance of any distributed processing system may suffer if the distribution of resources is not carefully implemented. To make the best use of the resources in a DCS, it is essential to maximize the overall throughput by allocating the tasks to processors in such a way that the allocated load on all processors is balanced. Partitioning the application software into a number of small groups of modules among dissimilar processors is an important parameter in determining the efficient utilization of available resources; it also enhances the computation speed. The task partitioning and task allocation activities influence distributed software properties such as IPC [1, 2]. Many approaches have been reported for solving the task assignment problem in DCS. The problem of finding an optimal dynamic assignment of a modular program for a two-processor system is analyzed in [3]. One measure of the usefulness of a general-purpose distributed computing system is its ability to provide a level of performance commensurate with the degree of multiplicity of resources present in the system; a taxonomy of approaches to the resource management problem is reported in [4]. The taxonomy, while presented and discussed in terms of distributed scheduling, is also applicable to most types of resource management. A model for allocating information files is reported in [5]; the model considers storage costs, transmission costs, file lengths, and request rates, as well as the updating rates of files, the maximum allowable expected access times to files at each computer, and the storage capacity of each computer. The model is formulated as a nonlinear integer zero-one programming problem, which may be reduced to a linear zero-one programming problem.
Known solutions to a large number of important (and difficult) computational problems, called NP-complete problems, depend on enumeration techniques that examine all feasible alternatives. The design of enumeration schemes in a distributed environment is reported in [6]. A task allocation model that allocates application tasks among processors in distributed computing systems satisfying 1) minimum inter-processor communication cost, 2) balanced utilization of each processor, and 3) all engineering application requirements has been reported by Perng-Yi Richard Ma et al. [7]. The problem of task allocation in heterogeneous distributed systems with the goal of maximizing system reliability has been addressed in [8], using a model based on the well-known simulated annealing (SA) technique. Yadav et al. have reported an algorithm for reliability evaluation of distributed systems based on failure-data analysis [9]. An efficient algorithm for optimal task allocation through optimizing a reliability index in a heterogeneous distributed processing




system has been discussed in [10]. J. B. Sinclair [11] considers the problem of finding an optimal assignment of the modules of a program to processors in a distributed system: a module incurs an execution cost that may differ for each processor assignment, and modules that are not assigned to the same processor but that communicate with one another incur a communication cost. An optimal solution to the problem of allocating communicating periodic tasks to heterogeneous processing nodes (PNs) in a distributed real-time system has been reported in [12]. A matrix reduction technique has been used by Sagar et al. [13]; according to the criteria given therein, a task is selected randomly to start with and then assigned to a processor. A fast algorithm for task allocation in distributed processing systems has been reported by Kumar et al. [14, 16]; in this method the authors cluster heavily communicating tasks and allocate them to the same processor. An efficient algorithm for allocating tasks to processors in a distributed system has also been reported by Kumar et al. [15]. Distributed computing systems offer the potential for improved performance and resource sharing. To make the best use of the computational power available in a multiprocessing system, it is essential to assign tasks dynamically to the processor whose characteristics are most appropriate for their execution [17, 18]. A methodology that augments the Maximally Linked Module concept with stochastic techniques and with constructs that take into account the limited and uneven distribution of hardware resources often associated with heterogeneous systems has been discussed by Elsade et al., who applied pure simulated annealing and a randomized algorithm to randomly generated systems and to synthetic structures derived from real-world problems [19]. Communication technology has also opened many avenues for processing, sharing, and transferring data; Yadav et al. [20] reported task scheduling in computer communication networks. An Artificial Neural Network (ANN) based task scheduling model has been discussed by Yadav et al. [21], using a feedback neural network architecture. Singh et al. reported an ANN-based model for load distribution in a fully connected client-server network [22]. An analysis of load distribution in distributed computing systems through systematic task allocation, and an exhaustive performance analysis of distributed systems based on cost assignments, have been reported by Kumar et al. [23, 24]. The main objective of this paper is to minimize the total program execution cost. A heuristic model is presented which performs static allocation. In designing the algorithm it is assumed that a program is a collection of m tasks to be executed on a set of n processors with different processing capabilities. A task may be a portion of executable code or a data file. The number of tasks to be allocated is greater than the number of processors (m > n), as is normally the case in a real-life distributed processing environment. It is assumed that the EC of a task on each processor is known; if a task is not executable on a processor due to the absence of some resource, the EC of that task on that processor is taken to be infinite (∞).
We assume that once a task has completed its execution on a processor, the processor stores the task's output data in its local memory; if the data is needed by another task computed on the same processor, it reads the data from local memory. The overhead incurred by this is negligible, so for all practical purposes we consider it zero. Using this fact, the algorithm tries to assign heavily communicating tasks to the same processor: whenever a group of tasks is assigned to the same processor, the ITC cost between them is zero. The proposed method is based upon matrix reduction techniques. The performance of the algorithm has been tested on randomly generated data; the algorithm is found suitable for an arbitrary number of processors with random program structures and workable in all cases.

II. Task Assignment Problems
The specific problem being addressed is as follows. Given application software that consists of m communicating tasks, T = {t1, t2, ..., tm}, and a heterogeneous distributed computing system with n processors, P = {p1, p2, ..., pn}, where it is assumed that m >> n, assign (allocate) each of the m tasks to one of the n processors in such a manner that the ITCC is minimized and the processing load is balanced. The processing costs of these tasks on all processors and the ITCC between tasks are given in the form of an Execution Cost Matrix [ECM(,)] of order m x n and an Inter Task Communication Cost Matrix [ITCC(,)] of order m, respectively, and will be used throughout this text.

Definitions:
Execution Cost: The execution cost $ec_{ij}$ ($1 \le i \le m$, $1 \le j \le n$) of each task $t_i$ depends on the processor $p_j$ to which it is assigned and the work to be performed by each task on that processor. The execution costs of the tasks on all processors are given in the form of the Execution Cost Matrix (ECM) of order m x n. The execution cost of a given assignment on each processor is calculated by equation (1):

$$PEC(j) = \sum_{i=1}^{m} ec_{ij}\, x_{ij}, \qquad j = 1, 2, \dots, n \qquad (1)$$

where $x_{ij} = 1$ if task $t_i$ is assigned to processor $p_j$, and $x_{ij} = 0$ otherwise.




Inter Task Communication Cost: The inter task communication cost $cc_{ik}$ of the interacting tasks $t_i$ and $t_k$ is the minimum cost required for the exchange of data units between the processors during execution:

$$ITCC(j) = \sum_{i,k} cc_{ik}\, x_{ij}, \qquad j = 1, 2, \dots, n \qquad (2)$$

where $x_{ij}$ is defined as in equation (1).

Response Time (RT) of the System: The response time of the system is a function of the amount of computation to be performed by each processor and the communication load; it is defined by considering the processor with the heaviest aggregate computation and communication load. The response time of the system for a given assignment is:

$$RT(A_{alloc}) = \max_{1 \le j \le n} \{ PEC(A_{alloc})_j + IPC(A_{alloc})_j \} \qquad (3)$$
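A compact sketch of equations (1)-(3) follows (the function and variable names are ours; the cost matrices follow the definitions above):

```python
def response_time(ecm, itcc, assign):
    """ecm[i][j]: execution cost of task i on processor j (equation (1));
    itcc[i][k]: communication cost of tasks i and k (equation (2));
    assign[i]: index of the processor task i is allocated to."""
    m, n = len(ecm), len(ecm[0])
    pec = [sum(ecm[i][j] for i in range(m) if assign[i] == j) for j in range(n)]
    ipc = [0] * n
    for i in range(m):
        for k in range(i + 1, m):
            # Communication is paid only across processor boundaries.
            if assign[i] != assign[k]:
                ipc[assign[i]] += itcc[i][k]
                ipc[assign[k]] += itcc[i][k]
    return max(pec[j] + ipc[j] for j in range(n))   # equation (3)
```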

Average load: For each task node there are n values representing the execution cost required for the corresponding module to be processed on each of the n processors; these values are in the matrix ECM. Each edge is labelled with a value representing the communication cost needed to exchange data when the modules reside on different processors. The workload $W_j$ is the summation of the module execution costs on processor $P_j$, and the average load is:

$$L_{avg}(P_j) = \frac{W_j}{n}, \quad \text{where } W_j = \sum_{1 \le i \le m} et_{i,j}, \qquad j = 1, 2, \dots, n \qquad (4)$$

$$T_{load} = \sum_{j=1}^{n} L_{avg}(P_j) \qquad (5)$$

The average load on a processor $L_{avg}(P_j)$ depends upon the different tasks on each processor. The system is considered balanced if the load on each processor equals the processor average load within a given (small percentage) tolerance.

Assumptions
To keep the algorithm reasonable in size, several assumptions have been made in its design. A program is assumed to be a collection of m tasks to be executed on a set of n processors with different processing capabilities. A task may be a portion of executable code or a data file. The number of tasks to be allocated is greater than the number of processors (m >> n), as is normally the case in real life. It is assumed that the execution time of a task on each processor is known; if a task is not executable on a processor due to the absence of some resource, its execution time on that processor is taken to be infinite (∞). We assume that once a task has completed its execution on a processor, the processor stores the task's output data in its local memory, and any other task computed on the same processor reads the data from local memory. Using this fact, the algorithm tries to assign heavily communicating tasks to the same processor: whenever a group or cluster of tasks is assigned to the same processor, the data transfer between them is zero. Completion of a program, from the computational point of view, means that all related tasks have been executed.

III. The Proposed Method
Initially the average load on the processors $P_j$ is obtained using equation (4), and the total load is calculated using equation (5). The load on each processor should equal the average load within a reasonable tolerance; in the present study a tolerance factor of 20% of the average load has been used. Determine the n Minimally Linked Tasks (MLT) using equation (6) and store the result in a two-dimensional array MLT(,), whose first column holds the task number and whose second column holds the sum of the ITCC of task $t_i$ with all tasks $t_k$:

$$MLT(i) = \sum_{k=1}^{m} cc_{ik} \qquad (6)$$

Rearrange MLT(,) in ascending order using the second column as the sort key. Augment ECM(,) by introducing MLT( ) and sort ECM(,) in increasing order with MLT as the sorting key. To determine the initial allocation, apply the Yadav et al. algorithm [24] and store it in an array $T_{ass}(j)$ (j = 1, 2, ..., n); the processor positions are stored in another linear array $A_{alloc}(j)$. The value of $T_{TASK}(j)$ is computed by adding the values of $A_{alloc}(j)$ whenever a task $t_i$ is assigned to processor $p_j$. The remaining (m-n) tasks are stored in a linear array $T_{non\_ass}( )$.
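A short sketch of equation (6) and the sorting step (our illustrative helper, not the paper's code; with the example data of Section IV it orders the tasks t3, t4, t1, t2, matching the sorted augmented ECM shown there):

```python
def minimally_linked_order(itcc):
    """Return (task index, MLT value) pairs sorted ascending by MLT."""
    m = len(itcc)
    mlt = [sum(itcc[i][k] for k in range(m)) for i in range(m)]  # equation (6)
    return sorted(((i, mlt[i]) for i in range(m)), key=lambda p: p[1])
```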




T  Tass ()  Tnon_ ass () All the tasks stored in Tnon_ass( ) fused with those assigned tasks stored in T ass( ) on the bases of minimum average of EC and ITCC. The Fused Execution Cost (FEC) of a task ta∊ Tnon_ass( ) with some other task ti∊ Tass( ) on processor pj is obtained as:

FEC ( j ) ai  ecaj  ecij , (1  a  m, 1  i  m, 1  j  n, i  a)

Let ccai be the ITCC between ta∊ Tnon_ass() and ti∊ Tass(). Fused Inter Task Communication Cost (FITCC) for t a with ti is computed as: FITCC( j ) ai 

cc

ai

i | tiTa ss()

Here, cai =0 if fused with ti or a = i and remaining cai value are added and Minimum Average Fused Cost (MAFC) is calculated as follows:

MAFC( j ) ai = min(FEC a1  FITCC a1 ), (FEC a2  FITCC a 2 ),...., ( FEC an  FITTCCan ) 

This process will be continued until all the tasks stored in T non_ass() are fused. After complete allocation is achieved PEC (j) and ITCC (j)by using equation (1) and (2) respectively. Finally, Calculate the Total Optimal Cost (TOC) by summing up the values of PEC (j) and ITCC (j), and store the result in a linear array Over TOC (j) where j = 1,2,…n. The maximum value of TOC (j) will be the optimal cost of the system: TOC (j) = Max {PEC (j)+ITCC (j)} The Mean Service Rate [MSR] of the processors in terms of T ass(j) to be computed as and store the results in MSR(j) (where j = 1,2,…,n).

MSR( j ) 

1 ( where j  12,...n) PEC ( j )

The overall throughout of the processors are calculated asand store the results of throughout in the linear arrays TRP (j), where j=1, 2…….,n T ( j) TRP ( j )  TASK ( where j  12,...n) PEC ( j ) IV. Results and Discussions Input: Data required by the Algorithm is given below: Number of processors available in the system (n) = 3 Number of tasks to be executed (m) = 4
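The fusion step can be sketched as follows. The names mirror the paper's; the FITCC computed here is the fused pair's total communication with the remaining assigned tasks, which is the reading that reproduces the worked numbers of the next section (9 + 18 = 27 for fusing t2 with t1 on p2):

```python
def fuse(task, assigned, alloc, ecm, itcc):
    """Pick the assigned task (and hence processor) whose fusion with `task`
    minimises FEC + FITCC; returns (MAFC, partner task, processor)."""
    best = None
    for t in assigned:
        j = alloc[t]
        fec = ecm[task][j] + ecm[t][j]
        others = [u for u in assigned if u != t]
        # ITCC inside the fused pair becomes zero; the pair still pays
        # communication with every other assigned task.
        fitcc = sum(itcc[task][u] + itcc[t][u] for u in others)
        if best is None or fec + fitcc < best[0]:
            best = (fec + fitcc, t, j)
    return best

# The worked example of Section IV (tasks t1..t4 are indices 0..3):
ecm = [[8, 4, 6], [6, 5, 4], [8, 6, 5], [4, 7, 3]]
itcc = [[0, 100, 3, 5], [100, 0, 6, 4], [3, 6, 0, 5], [5, 4, 5, 0]]
print(fuse(1, [0, 2, 3], {0: 1, 2: 2, 3: 0}, ecm, itcc))  # (27, 0, 1)
```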

IV. Results and Discussions
Input: The data required by the algorithm are given below. Number of processors available in the system (n) = 3; number of tasks to be executed (m) = 4.

ECM(,) =
        p1   p2   p3
  t1     8    4    6
  t2     6    5    4
  t3     8    6    5
  t4     4    7    3

ITCCM(,) =
        t1    t2    t3    t4
  t1     0   100     3     5
  t2   100     0     6     4
  t3     3     6     0     5
  t4     5     4     5     0

The actual average load to be assigned to the processors after introducing the 20% Tolerance Factor (TF) is calculated and stored in the linear array LAVG( ):

LAVG( ) =  p1: 10, p2: 09, p3: 07

Determine the Minimally Linked Tasks (MLT) and store the result in MLT(,) as follows:

MLT(,) =  t1: 108, t2: 110, t3: 14, t4: 14



Augment ECM(,) by introducing MLT( ), then sort ECM(,) in increasing order with MLT as the sorting key:

ECM(,) (augmented) =
        p1   p2   p3   MLT
  t1     8    4    6   108
  t2     6    5    4   110
  t3     8    6    5    14
  t4     4    7    3    14

ECM(,) (sorted by MLT) =
        p1   p2   p3   MLT
  t3     8    6    5    14
  t4     4    7    3    14
  t1     8    4    6   108
  t2     6    5    4   110

Apply the Yadav et al. algorithm [24] to determine the initial allocation. The initial allocation is stored in the linear array Tass( ), the processor positions are stored in Aalloc( ), and the remaining m-n tasks are stored in Tnon_ass( ). The results are as follows:

Tass(j)     = {t1, t3, t4}
Aalloc(j)   = {p2, p3, p1}
Tnon_ass(k) = {t2}

After the initial allocation, the task stored in Tnon_ass( ), i.e. t2, is selected for assignment. FEC(j) for t2 with each of the tasks stored in Tass( ) is:

Tasks    Processor    FEC(j)
t2+t1    p2            9
t2+t3    p3            9
t2+t4    p1           10

FITCC(j) for t2 with the other assigned tasks stored in Tass( ) is:

Processors

FITCC(j)

t2+t1

p2

18

t2+t3

p3

112

t2+t4

p1

116

MAFC(j), obtained by summing the values of FEC(j) and FITCC(j), is:

Tasks    Processor    FEC(j)    FITCC(j)    MAFC(j)
t2+t1    p2            9          18          27
t2+t3    p3            9         112         121
t2+t4    p1           10         116         126

Table 1: Final Results

Processor    PEC(j)    ITCC(j)    TOC(j)    MSR(j)    TRP(j)
p1            4         14         18       0.056     0.065
p2            9         18         27       0.074     0.037
p3            5         14         19       0.053     0.053

MAFC(2) = 27 is the minimum; therefore task t2 is fused with task t1 executing on processor p2. PEC(j), ITCC(j), TOC(j), MSR(j), and TRP(j) are then computed as given in Table 1. The maximum of TOC(j) is 27, i.e. the total busy cost of the system is 27, which corresponds to processor p2. The model deals with the problem of optimal task allocation and load balancing in DCS. The load-balancing mechanism is introduced into the algorithm by fusing the unallocated tasks on the basis of the minimum average impact of EC and ITCC. It can also be seen from the example presented here that, where an algorithm of higher complexity such as [19] is encountered, the present technique gains the upper hand by producing comparable optimal results at a lower complexity. The worst-case run-time




complexity of the algorithm suggested by [19] is O(n² + m² + m²n + m + n), while the run-time complexity of the algorithm presented in this paper is O(m²n + 2mn). Figure 1 compares the run-time complexity of the present method and [19] for different numbers of tasks and processors. The utility of our algorithm will ultimately depend on how it can be incorporated into the design and management activities of a distributed computing system.
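As a quick illustration (the sample sizes below are ours), the two worst-case expressions can be compared numerically:

```python
for m, n in [(10, 3), (50, 5), (200, 10)]:
    cost_19 = n**2 + m**2 + m**2 * n + m + n   # complexity quoted for [19]
    cost_new = m**2 * n + 2 * m * n            # complexity of the present method
    print(f"m={m:4d} n={n:3d}  [19]: {cost_19:10d}  present: {cost_new:10d}")
```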

Figure 1: Run-time complexity comparison

REFERENCES
[1] Shatz, S.M. and Wang, J.P., "Introduction to distributed software engineering," Computer, 20(10), pp. 23-31, 1987.
[2] D.F. Baca, "Allocating Tasks to Processors in a Distributed System," IEEE Trans. Software Engineering, Vol. 15, pp. 1427-1436, 1989.
[3] S.H. Bokhari, "Dual Processor Scheduling with Dynamic Re-Assignment," IEEE Trans. Software Engineering, Vol. SE-5, pp. 341-349, 1979.
[4] T.L. Casavant and J.G. Kuhl, "A Taxonomy of Scheduling in General Purpose Distributed Computing Systems," IEEE Trans. Software Engineering, Vol. 14, pp. 141-154, 1988.
[5] W.W. Chu, "Optimal File Allocation in a Multiple Computing System," IEEE Trans. Computers, Vol. C-18, pp. 885-889, 1969.
[6] O.I. El-Dessouki and W.H. Huen, "Distributed Enumeration on Network Computers," IEEE Trans. Computers, Vol. C-29, pp. 818-825, 1980.
[7] Ma, P.-Y. R., Lee, E. Y. S. and Tsuchiya, M., "A Task Allocation Model for Distributed Computing Systems," IEEE Trans. Computers, C-31(1), pp. 41-47, 1982.
[8] Gamal Attiya and Yskandar Hamam, "Task allocation for maximizing reliability of distributed systems: A simulated annealing approach," Journal of Parallel and Distributed Computing, Vol. 66, Issue 10, pp. 1259-1266, 2006.
[9] Yadav, P.K., Bhatia, K. and Gulati, Sagar, "Reliability Evaluation of Distributed System Based on Failure Data Analysis," International Journal of Computer Engineering, Vol. 2(1), pp. 113-118, 2010.
[10] Singh, M.P., Kumar, Harendra and Yadav, P.K., "An Efficient Algorithm for Optimal Tasks Allocation through Optimizing Reliability Index in Heterogeneous Distributed Processing System," 3rd International Conference on Quality, Reliability and Infocom Technology, Indian National Science Academy, New Delhi, India, December 2-5, 2006.
[11] J.B. Sinclair, "Efficient computation of optimal assignments for distributed tasks," J. Parallel Distrib. Comput., Vol. 4, pp. 342-362, 1987.
[12] Peng, D.-T., Shin, K.G. and Abdelzaher, T.F., "Assignment and Scheduling of Communicating Periodic Tasks in Distributed Real-Time Systems," IEEE Trans. Software Engineering, Vol. 13, No. 12, pp. 745-757, 1997.
[13] G. Sagar and A.K. Sarje, "Task allocation model for distributed systems," Int. J. Systems Science, Vol. 22, No. 9, pp. 1671-1678, 1991.
[14] Vinod Kumar, M.P. Singh and P.K. Yadav, "A fast algorithm for allocating tasks in distributed processing systems," Proceedings of the 30th Annual Convention of the Computer Society of India, Hyderabad, pp. 347-358, 1995.
[15] Vinod Kumar, M.P. Singh and P.K. Yadav, "An efficient algorithm for allocating tasks to processors in a distributed system," Proceedings of the 19th National Systems Conference (NSC-95), PSG College of Technology, Coimbatore, System Society of India, New Delhi, pp. 82-87, 1995.
[16] Vinod Kumar, P.K. Yadav and K. Bhatia, "Optimal Task Allocation in Distributed Computing Systems Owing to Inter Task Communication Effects," Proc. CSI-98: IT for the New Generation, pp. 369-378, 1998.
[17] Vinod Kumar, M.P. Singh and P.K. Yadav, "An efficient algorithm for multi-processor scheduling with dynamic reassignment," Proceedings of the 6th National Seminar on Theoretical Computer Science, Banasthali Vidyapith, pp. 105-118, 1996.
[18] Yadav, P.K., Singh, M.P. and Kumar, Harendra, "Scheduling Algorithm: Tasks Scheduling Algorithm for Multiple Processors with Dynamic Reassignment," International Journal of Computer Systems, Networks and Communications, Vol. 2008, pp. 1-9, 2008.
[19] Elsade, A.A. and Wells, B.E., "A Heuristic Model for Task Allocation in Heterogeneous Distributed Computing Systems," The International Journal of Computers and Their Applications, Vol. 6, No. 1, March 1999.
[20] Yadav, P.K., Singh, Jumindera and Singh, M.P., "An efficient method for task scheduling in computer communication network," International Journal of Intelligent Information Processing, Vol. 3(1), pp. 81-89, 2009.
[21] Yadav, P.K., Singh, M.P., Kumar, Avanish and Agarwal, Babita, "An Efficient Tasks Scheduling Model in Distributed Computing Systems Using ANN," International Journal of Computer Engineering, Vol. 1(2), pp. 57-62, 2009.
[22] Singh, M.P., Kumar, Avanish, Yadav, P.K. and Krishna, Garima, "Study of Load Distribution in Fully Connected Client-Server Networks Using Feedback Neural Network Architecture," Journal of Engineering and Technology Research, Vol. 2(4), pp. 58-72, 2010.
[23] Kumar, Avanish, Yadav, P.K. and Sharma, Abhilasha, "Analysis of Load Distribution in Distributed Computing Systems Through Systematic Task Allocation," International Journal of Mathematics, Computer Science and Information Technology, Vol. 3(1), pp. 101-114, 2010.
[24] Yadav, P.K., Kumar, Avanish and Singh, M.P., "An Algorithm for Solving the Unbalanced Assignment Problems," International Journal of Mathematical Sciences, Vol. 12(2), pp. 447-461, 2004.
Vol.6, no.1, March 1999. Yadav, P. K., Singh Jumindera and Singh, M. P “An efficient method for task scheduling in computer communication network” , International Journal of Intelligent Information Processing Vol. 3(1) pp 81-89, 2009 Yadav P.K. , Singh M.P. , Kumar Avanish and Agarwal Babita, “An Efficient Tasks Scheduling Model in Distributed Computing systems Using ANN”, International Journal of Computer Engineering, Vol 1(2) pp.57-62, 2009. Singh.M.P, Kumar Avanish, Yadav, P.K. and Krishna Garima, “ Study of Load Distribution in fully connected client server Network using feedback neural network architecture”, Journal of Engineering and Technology Research, Vol 2(4) pp. 58-72, 2010 Kumar Avanish, Yadav P.K., and Sharma Abhilasha "Analysis of Load Distribution in Distributed Computing systems Through Systematic Allocation Task" IJMCSIT "International, Journal of Mathematics, Computer Science and Information Technology", Vol 3 (1), ,pp.101-114, 2010






American International Journal of Research in Science, Technology, Engineering & Mathematics

Available online at http://www.iasir.net

ISSN (Print): 2328-3491, ISSN (Online): 2328-3580, ISSN (CD-ROM): 2328-3629 AIJRSTEM is a refereed, indexed, peer-reviewed, multidisciplinary and open access journal published by International Association of Scientific Innovation and Research (IASIR), USA (An Association Unifying the Sciences, Engineering, and Applied Research)

A SECURE HASHING SCHEME FOR IMAGE AUTHENTICATION USING ZERNIKE MOMENTS AND LOCAL FEATURES WITH HISTOGRAM FEATURES
Deepa. S¹, Nagajothi. A²
Department of Computer Science Engineering, Karpagam University, Coimbatore, INDIA
Abstract: A secure hashing scheme is developed for image authentication and for locating forged regions. The hash can be used to find similar, forged, and different images, and it can also identify the type of forgery and locate fake regions containing salient content. Global, local, and histogram features are used to form the hash sequence. The global features are based on Zernike moments representing the luminance and chrominance characteristics of the image; the local features contain position and texture information of salient regions in the image; the histogram features record the number of pixels at each intensity level. Secret keys are used in feature extraction and hash construction. The hash is very sensitive to malicious tampering. The hash of a test image is compared with that of the reference image, and the received image is judged as a fake when the hash distance is greater than a threshold T1 but less than a second threshold T2. By decomposing the hashes, the type of image forgery and the location of forged areas can be determined. The probability of collision between hashes of different images approaches zero.
Keywords: Forgery detection; Image hash; Saliency; Zernike moments; Gaussian filter; Geometric deformation.

I. Introduction

With the widespread use of image editing software, illegal copies of digital media products are increasingly easy to produce and distribute. Image hashing is a technique that extracts a short sequence from an image to represent its contents and can therefore be used for image authentication. If the image is maliciously modified, the hash must change significantly. Unlike cryptographic hash functions such as MD5 and SHA-1, which are extremely sensitive to every single bit of input, an image hash should be robust against normal image processing. Different images should have significantly different hash values, and the hash should be secure, so that an unauthorized party cannot break the key and forge the hash. Meeting all these requirements simultaneously, especially robustness and sensitivity to tampering, is a challenging task. In general, an ideal image hash should have the following desirable properties [3]:
• Perceptual robustness: The image hash function should be insensitive to common geometric deformations, image compression, and filtering operations, which alter the image but preserve its visual quality. Visually similar images without major differences should have hashes with a small distance.
• Uniqueness: The probability of two different images having the same hash value should tend to zero.
• Sensitivity: Perceptually significant changes to an image should lead to a totally different hash. This property is very useful in both image authentication and digital forensics.
• Secret key: Secret keys are used for hash construction.
In [6] the hash method achieves a satisfactory robustness performance. All geometric attacks respect the rule that some or all of the pixels are displaced by a random amount under the constraint of visual coherence. Based on this rule, the authors propose a robust hash using the invariance of the histogram shape under geometric attacks: the histogram shape is not only mathematically invariant to scaling, rotation, and translation, but also insensitive to more challenging geometric attacks. In [7], NMF is used to construct a secondary image, and a low-rank matrix approximation of the secondary image is obtained with NMF again. The matrix entries are concatenated to form an NMF-NMF vector, and the inner products of this vector with a set of weight vectors are calculated. Because the final hash comes from the




secondary image with NMF, their method cannot locate forged regions. FASHION [4], standing for Forensic Hash for Information assurance, is a framework designed to answer a much broader range of questions about the processing of multimedia data than the simple binary decision of robust image hashing; it also offers more efficient and exact forensic analysis than general multimedia forensic techniques. In the clustering-based approach [5], the first step extracts a feature vector from the image, and the second stage compresses this feature vector to a final hash value; in the feature extraction step, the two-dimensional image is mapped to a one-dimensional feature vector that must capture the perceptual qualities of the image. Most previous schemes are based on global, local, or histogram features alone. In this paper we combine global, local, and histogram features: global features are insensitive to changes of small areas in the image; local features can reflect regional modifications but usually generate longer hashes; and histogram features are sensitive to small-area tampering but also produce longer hashes.

II. Image Hashing Concepts
A. Detecting forged images
The rise of powerful graphical editors and sophisticated image-manipulation techniques makes it extremely easy to modify original images in such a way that alterations are impossible to catch by an untrained eye and can even escape the scrutiny of experienced editors of reputable news media. An image hashing algorithm analyzes an image at the pixel level to detect whether significant changes were made to the actual pixels, altering the content of the image rather than its appearance on the screen.
B. Visual saliency
A salient region in an image is one that attracts visual attention. Human visual perception allows us to focus on the most important parts when observing visual scenes. In [2] the log spectrum L(i) of an image is used to represent the common information of the image. Let A(i) denote the redundant information in L(i), defined as the convolution between L(i) and a low-pass kernel $h_i$:

$$A(i) = h_i * L(i) \qquad (1)$$

According to the Feature-Integration theory [13], observers focus on areas where objects with significant low-level features (such as color, intensity, and orientation) pop out.
C. Zernike moments
The Zernike moments are defined over the unit circle, so two steps are required to map a rectangular region of each image to a unit circle before the moments can be calculated. First, the 'center of fluorescence' (analogous to the center of mass) of each image is calculated and used to define the center of the pixel coordinate system. Second, the x and y coordinates are divided by 150 (corresponding to the size of an average cell at the magnification used in these experiments). Only pixels within the unit circle of the resulting normalized image f(x, y) are used for subsequent calculations. The Zernike moments $Z_{nl}$ of an image are then calculated using:

$$Z_{nl} = \frac{n+1}{\pi} \sum_{x}\sum_{y} f(x,y)\, V_{nl}^{*}(x,y), \qquad x^2 + y^2 \le 1 \qquad (2)$$

The luminance is obtained as Y = 0.2126 R + 0.7152 G + 0.0722 B.
D. Gaussian low-pass filters
The input images are filtered with a low-pass Gaussian filter. Low-pass filtering is used to enhance the robustness of the hash values to image-processing operations such as common compression and filtering.
The low-pass filtering operation can be represented as the convolution of the Gaussian function G(x, y) with an image I(x, y):

$$F(x, y, \sigma) = G(x, y) * I(x, y) \qquad (3)$$

where σ is the standard deviation of the Gaussian. The result of the Gaussian low-pass filter is stored in F(x, y, σ).
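A minimal sketch of these two pre-processing pieces (assuming NumPy and SciPy are available; the function names are ours): the luminance formula quoted in Section II-C and the Gaussian low-pass of equation (3).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def luminance(rgb):
    """Y = 0.2126 R + 0.7152 G + 0.0722 B for an H x W x 3 float array."""
    return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

def smooth(y, sigma=1.0):
    """Gaussian low-pass F = G * I, for robustness to compression/filtering."""
    return gaussian_filter(y, sigma=sigma)
```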




III. Image Hashing Algorithm Generation
The hash is constructed from Zernike moments representing the luminance/chrominance properties of the image, texture features in salient regions reflecting local properties, and histogram properties. As illustrated in Figure 1, the image hashing method consists of the following steps:
• Image size: The image is first rescaled to a fixed size and converted from RGB to the YCbCr representation.
• Luminance and chrominance extraction: The global features are based on Zernike moments representing the luminance (Y) and chrominance (C) characteristics of the image. Zernike moments of Y and |CbCr| are calculated:

Z' = [ZY ZC]    (4)

A secret key K1 is used for hashing.

Figure 1 Block diagram of the image hashing method

• Salient region extraction: Salient regions are detected from the luminance image Y. The local features include position (P) and texture information (T) of the salient regions in the image. Texture is an important feature for human visual perception; six texture features relate to it: coarseness, contrast, directionality, line-likeness, regularity, and roughness. A secret key K2 is used for image authentication:

S' = [P T]    (5)

• Histogram extraction: The histogram (Hm) is extracted from the preprocessed image by referring to the mean value of the image. A rotationally invariant Gaussian filtering is designed to improve the hash robustness. A secret key K3 is used to randomly generate a row vector X3 containing 48 random integers in [0, 255]. Figure 2 shows the histogram of an example image; the horizontal axis of the graph represents the tonal variations, while the vertical axis represents the number of pixels in each tone.
• Hash construction: The global Zernike vector (Z), the salient local vector (S), and the histogram vector (Hm) are concatenated to produce an intermediate hash H'. A secret key K4 is then used to generate the final hash sequence H:

H' = [Z S Hm] → H    (6)

A. Image Authentication
Image authentication is used to protect images from forgery attacks. The authentication process is performed as follows:
• Image extraction: The test image is transformed into global, local, and histogram vectors, and the intermediate hash of the test image is obtained without encryption:

H1' = [Global vector (Z) * Salient vector (S)]    (7)

• Decomposition of global, local, and histogram features: The intermediate hash and the secret keys K1, K2, and K3 are used to obtain the sequence of the trusted image, which is decomposed into its global, local, and histogram features.



Figure 2: Example of Histogram Color Image

• Salient region comparison: Check whether the salient regions in the test image match those in the trusted image.
• Hash distance computation: Find the distance between the hashes of an image pair and judge whether the images are similar, dissimilar, or forged. Figure 3 shows the distribution of hash distances.
• Localization of forged areas: If the test image is fake, locate the forged regions and determine the nature of the forgery. Four types of image forgery can be identified: removal, insertion, replacement of objects, and unusual color changes.

Figure 3: Distribution of Hash Distance
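A sketch of the two-threshold decision follows (the Euclidean distance and the default values are assumptions on our part; Section IV reports T1 = 7 and T2 = 50):

```python
import numpy as np

def judge(hash_ref, hash_test, t1=7.0, t2=50.0):
    """Classify a test image against a trusted reference by hash distance."""
    d = np.linalg.norm(np.asarray(hash_ref, float) - np.asarray(hash_test, float))
    if d <= t1:
        return d, "similar"
    if d <= t2:
        return d, "fake (tampered version of the reference)"
    return d, "different image"
```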

Let H0, H1, and H2 denote the hash components representing the global, local, and histogram features, let N0 and N1 be the numbers of salient regions in the trusted and test images, and let R denote the number of matched salient regions. Then:
• If N0 > N1 = R: object removal.
• If N1 > N0 = R: object insertion.
• If N1 = N0 = R and the hash distance difference δZC − δZY > ƮC: unusual color changes.
• If N1 = N0 = R and δZC − δZY < ƮC: replaced objects.
• If N0 > R and N1 > R: tampered salient region.
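Transcribed directly as code (the variable names are ours), the rules above classify the forgery type:

```python
def forgery_type(n0, n1, r, d_zc, d_zy, tau_c):
    """n0/n1: salient regions in trusted/test image; r: matched regions;
    d_zc - d_zy compared against tau_c separates color change vs. replacement."""
    if n0 > n1 == r:
        return "object removal"
    if n1 > n0 == r:
        return "object insertion"
    if n1 == n0 == r:
        return "unusual color change" if d_zc - d_zy > tau_c else "replaced object"
    if n0 > r and n1 > r:
        return "tampered salient region"
    return "undetermined"
```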




IV. Performance Evaluation
We have tested 350 image pairs, and the forged images are correctly detected; the success rate of forgery localization is 96%. When the hash distance is greater than the threshold T1 but less than T2, the received image is judged as a fake; the thresholds T1 = 7 and T2 = 50 are used to separate original and fake images. If two different images have similar hashes, a collision occurs. Figure 4 shows the Receiver Operating Characteristic (ROC) curves for six types of content-preserving processing: gamma correction, JPEG coding, Gaussian noise addition, rotation, scaling, and slight cropping. Figure 5 compares the ROC curves under JPEG coding; the method distinguishes the original from the forged image.
A. Forgery Detection
Today's powerful graphical editors and sophisticated image-manipulation techniques make it extremely easy to modify original images. The image hashing algorithm detects forged images efficiently. If an image is found to be fake, the hash distance is analyzed to detect whether significant changes were made to the actual pixels, altering the content of the image rather than its appearance on the screen. Previous schemes used either global or local features; the proposed method combines global, local, and histogram features, so it can identify small-area tampering, is very sensitive to small-area changes, and still produces a short hash length. Forgery locations are schematically illustrated in Figure 6. The hash distance for the first example is 29.77, indicating that the image is fake.
B. Forgery Region Findings
The forged images were produced using graphical editing tools and are correctly detected by the image hashing algorithm. Figure 7 shows an example of abnormal color changes; the rectangle in the image indicates the salient region, and the hash distance D = 13.7 indicates that the image is tampered.

Figure 4: ROC performance

Figure 5: Comparison of ROC with JPEG coding

Figure 6: Hash distance between the original and forged image: (a) original image; (b) forged image; (c) salient region and detection result

Figure 7: Example of abnormal color changes

V. Conclusion
In this paper, an image hashing method is developed for image authentication and for locating forged regions. Global features based on Zernike moments representing the luminance/chrominance of the image, salient local features, and histogram features obtained after Gaussian filtering are used together to find the forged region effectively. The method reduces collisions between hashes of different images, and the hash can be used to distinguish similar, forged and other images. It enhances the hash sensitivity to small-area tampering while producing a short hash and maintaining robustness.
VI. References

[1] S. Xiang, H. J. Kim, and J. Huang, "Histogram-based image hashing scheme robust against geometric deformations," in Proc. ACM Multimedia and Security Workshop, New York, 2007, pp. 121-128.
[2] Y. Zhao, S. Wang, X. Zhang, and H. Yao, "Robust hashing for image authentication using Zernike moments and local features," IEEE Trans. Inf. Forensics Security, vol. 8, no. 1, Jan. 2013.
[3] Z. Tang, S. Wang, X. Zhang, W. Wei, and S. Su, "Robust image hashing for tamper detection using non-negative matrix factorization," J. Ubiquitous Convergence Technol., vol. 2, no. 1, pp. 18-26, May 2008.
[4] W. Lu, A. L. Varna, and M. Wu, "Forensic hash for multimedia information," in Proc. SPIE, Media Forensics and Security II, San Jose, CA, Jan. 2010, vol. 7541.
[5] V. Monga, A. Banerjee, and B. L. Evans, "A clustering based approach to perceptual image hashing," IEEE Trans. Inf. Forensics Security, vol. 1, no. 1, pp. 68-79, Mar. 2006.
[6] A. Swaminathan, Y. Mao, and M. Wu, "Robust and secure image hashing," IEEE Trans. Inf. Forensics Security, vol. 1, no. 2, pp. 215-230, Jun. 2006.
[7] V. Monga and M. K. Mihcak, "Robust and secure image hashing via non-negative matrix factorizations," IEEE Trans. Inf. Forensics Security, vol. 2, no. 3, pp. 376-390, Sep. 2007.
[8] F. Ahmed, M. Y. Siyal, and V. U. Abbas, "A secure and robust hash based scheme for image authentication," Signal Process., vol. 90, no. 5, pp. 1456-1470, 2010.
[9] X. Lv and Z. J. Wang, "Perceptual image hashing based on shape contexts and local feature points," IEEE Trans. Inf. Forensics Security, vol. 7, no. 3, pp. 1081-1093, Jun. 2012.
[10] W. Lu, A. L. Varna, and M. Wu, "Forensic hash for multimedia information," in Proc. SPIE, Media Forensics and Security II, San Jose, CA, Jan. 2010, vol. 7541.
[11] S. Li, M. C. Lee, and C. M. Pun, "Complex Zernike moments features for shape-based image retrieval," IEEE Trans. Syst., Man, Cybern. A, Syst. Humans, vol. 39, no. 1, pp. 227-237, Jan. 2009.
[12] Z. Chen and S. K. Sun, "A Zernike moment phase based descriptor for local image representation and matching," IEEE Trans. Image Process., vol. 19, no. 1, pp. 205-219, Jan. 2010.
[13] X. Hou and L. Zhang, "Saliency detection: A spectral residual approach," in Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, Minneapolis, MN, 2007, pp. 1-8.
[14] R. Venkatesan, S. M. Koon, M. H. Jakubowski, and P. Moulin, "Robust image hashing," in Proc. IEEE Int. Conf. Image Processing, vol. 3, pp. 664-666, 2000.
[15] A. Olmos and F. A. A. Kingdom, McGill Calibrated Colour Image Database, 2004. Available: http://tabby.vision.mcgill.ca
[16] Standard test image, Wikipedia. Available: http://en.wikipedia.org/wiki/Standard_test_image

POWER BALANCING APPROACH FOR EFFICIENT ROUTE DISCOVERY BY SELECTING LINK STABILITY NEIGHBORS IN MOBILE ADHOC NETWORKS
Nandhini R.1 and Malathi K.2
Department of Computer Science and Engineering, Karpagam University, Coimbatore, India
Abstract - A fundamental problem arising in mobile ad hoc networks (MANETs) is the selection of an optimal path between any two nodes that remains valid for a long time, together with the broadcast storm problem caused by the network's highly dynamic nature. We focus on the availability and duration probability of a routing path that is subject to link failures caused by node mobility. In order to discover an efficient route along a suitable path, the neighbor information with the probability values of a node and the stable path values are considered, to reduce the latency and overhead of routing. To obtain the neighbor information, a rebroadcast delay (for finding the rebroadcast order) and a connectivity factor with additional coverage (for calculating the node density) are computed; these reduce the routing overhead by reducing retransmissions. To find the stability of a node we present an approach of routing based on stability and hop count, where the stability metric considers the residual lifetime of a link. The stability-based routing is not a separate routing protocol but an enhancement to a hop-count based routing protocol (e.g. Dynamic Source Routing and Ad hoc On-demand Distance Vector), so that the expected residual lifetime as well as the hop count of a route are taken into account.
Keywords - Mobile Adhoc Networks; rebroadcast order; stability metric; routing overhead
I. Introduction
A mobile ad hoc network (MANET) is a self-configuring, infrastructure-less network of mobile devices connected by wireless links. Due to node mobility in MANETs, frequent link breakages may lead to frequent path failures and route discoveries. The main problem in a mobile ad hoc network is finding an effective path that is stable for a long duration. Link breakage increases the overhead of routing protocols, which reduces the packet delivery ratio and also increases the end-to-end delay. Thus, reducing the routing overhead during route discovery while maintaining path stability is an essential problem. Conventional on-demand routing protocols such as Dynamic Source Routing and Ad hoc On-demand Distance Vector use flooding to discover a route. These protocols use less bandwidth for routing, because a connection is established only on the basis of the requirements of a particular node. In the flooding method, a source node sends a Route Request (RREQ) packet to all of its neighboring nodes, which leads to redundant RREQ packets. This problem is known as the broadcast storm problem [6], and it causes a considerable number of packet collisions and traffic problems. Methods such as area-based, distance-based, counter-based, probability-based and neighbor-knowledge schemes have been proposed to minimize the broadcast storm problem, but they reduce it only to a limited extent. We introduce an approach combining neighbor information with probability [1] and path stability metrics, which reduces the broadcast storm problem by decreasing redundant packets and improving routing performance. The rebroadcast delay is used to find the order of broadcasting, which eliminates duplicate RREQ packets. The rebroadcast order value is calculated based on the area between the sender and the particular node; if the distance is smaller, the packet will reach a larger number of neighbors.
If the distance is higher, the probability value will also be higher, which leads the node to discard the packet. The probability value decides whether to forward a packet to a neighbor node or to discard it. The connectivity value determines the number of neighbors that may receive the RREQ packet based on the distance coverage.
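As a rough illustration of this idea, the sketch below combines an uncovered-neighbor coverage ratio with a connectivity factor into a forwarding probability. The names and the specific formula are assumptions made for illustration only, not the paper's exact equations.

```python
# Minimal sketch (assumed formula) of the probabilistic rebroadcast
# decision: a node forwards an RREQ only with a probability derived
# from the fraction of its neighbors still uncovered, scaled by a
# connectivity factor.
import random

def rebroadcast_probability(uncovered, neighbors, connectivity_factor):
    """uncovered/neighbors are sets of neighbor addresses."""
    if not neighbors:
        return 0.0
    coverage_ratio = len(uncovered) / len(neighbors)
    return min(1.0, coverage_ratio * connectivity_factor)

def should_rebroadcast(uncovered, neighbors, connectivity_factor):
    p = rebroadcast_probability(uncovered, neighbors, connectivity_factor)
    return random.random() < p  # forward the RREQ with probability p
```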

To discover the most stable path in routing, a routing protocol called the Delay Aware and Route Stability Protocol (DARSP) is used. This routing protocol applies three metrics for neighbor coverage selection: the estimated total energy to transmit and process a data packet; the residual energy; and the path stability. Route maintenance and route discovery procedures are similar to those of the DSR protocol, but route selection is based on these three metrics. We develop a technique to make reactive protocols energy-aware in order to increase the operational lifetime of an ad hoc network. The quality of service support in mobile ad hoc networks depends not only on the available resources in the network but also on the mobility rate of those resources, because mobility may result in link failure, which in turn may result in a broken path. Furthermore, mobile ad hoc networks potentially have fewer resources than fixed networks. Quality of service routing is a routing mechanism under which paths are generated based on some knowledge of the quality of the network, and then selected according to the quality of service requirements of flows. Thus, it is evident that both link stability associated with node mobility and energy consumption should be considered in designing routing protocols, allowing the right tradeoff between route stability and minimum energy consumption to be achieved.
II. Algorithm
A. Ad hoc On-Demand Distance Vector (AODV) Routing Protocol
Ad hoc On-Demand Distance Vector (AODV) routing is a reactive routing protocol for mobile ad hoc networks which establishes a route to the destination only on demand. AODV is capable of both unicast and multicast routing, and it maintains routes as long as they are needed by the sources. AODV builds routes using a route request / route reply query cycle. When a source node desires a route to a destination for which it does not already have a route, it broadcasts a route request (RREQ) packet across the network. Nodes receiving this packet update their information for the source node and set up backwards pointers to the source node in their route tables. In addition to the source node's IP address, current sequence number, and broadcast ID, the RREQ also contains the most recent sequence number for the destination of which the source node is aware. A node receiving the RREQ may send a route reply (RREP) if it is either the destination or has a route to the destination with a corresponding sequence number greater than or equal to that contained in the RREQ; in that case, it unicasts an RREP back to the source. Otherwise, it rebroadcasts the RREQ. Nodes keep track of each RREQ's source IP address and broadcast ID; if they receive an RREQ which they have already processed, they discard it and do not forward it. Once the source node receives the RREP, it may begin to forward data packets to the destination. A route is considered active as long as data packets periodically travel from the source to the destination along that path. Once the source stops sending data packets, the links will time out and eventually be deleted from the intermediate nodes' routing tables. If a link break occurs while the route is active, the node upstream of the break propagates a route error (RERR) message to the source node to inform it of the now unreachable destination. After receiving the RERR, if the source node still desires the route, it can reinitiate route discovery.
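The RREQ handling just described can be summarized in a short sketch. This is a simplified illustration of standard AODV behavior (RFC 3561 [2]), not the authors' ns-2 source; the field and method names are illustrative.

```python
# Simplified, assumed sketch of AODV RREQ handling at an intermediate node.
from dataclasses import dataclass

@dataclass
class RREQ:
    src: str            # originator IP address
    src_seq: int        # originator sequence number
    broadcast_id: int   # (src, broadcast_id) uniquely identifies the request
    dst: str            # destination IP address
    dst_seq: int        # latest destination sequence number known to src

class Node:
    def __init__(self, addr):
        self.addr = addr
        self.seen = set()    # (src, broadcast_id) pairs already processed
        self.routes = {}     # dst -> (next_hop, dst_seq)

    def handle_rreq(self, rreq, prev_hop):
        # Discard duplicates of an already-processed request.
        if (rreq.src, rreq.broadcast_id) in self.seen:
            return "discard"
        self.seen.add((rreq.src, rreq.broadcast_id))
        # Set up the reverse route (backwards pointer) toward the source.
        self.routes[rreq.src] = (prev_hop, rreq.src_seq)
        # Reply if we are the destination or hold a fresh-enough route.
        if self.addr == rreq.dst:
            return "unicast RREP to source"
        route = self.routes.get(rreq.dst)
        if route is not None and route[1] >= rreq.dst_seq:
            return "unicast RREP to source"
        return "rebroadcast RREQ"
```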
B. Neighbor Coverage based Probabilistic Routing Protocol
This protocol uses the rebroadcast order and a probability with connectivity value to improve routing performance while reducing redundant transmissions. Rebroadcast order: The rebroadcast order is used to establish the transmission order from the source to the destination. When a node receives an RREQ packet from a previous node, it checks the neighbor list of the previous node for redundant RREQ packets. If the current node has more common neighbors with the previous node, its rebroadcast order value will be lower, so the packet will reach a larger number of neighbor nodes. If the current node has fewer common neighbors than the previous node, it has a higher rebroadcast order value and forwards the RREQ packet to the next neighbor node. The main purpose of the rebroadcast order is to deliver the route request information to a large number of neighbor nodes more quickly. After calculating the order, the node can set its own timer for further transmissions. Probability with connectivity value: The probability contains two parts: 1) the coverage range, which is the ratio of the number of neighbor nodes that should be covered in a single broadcast to the total number of neighbors;

2) the connectivity value, which reflects the relationship between the network connectivity factor and the number of neighbors of a given node. A node with a higher rebroadcast order receives the RREQ packet from a node with a lower rebroadcast order. The node with the higher order checks for duplication of the packet; if it finds a duplicate in the received packet, it discards the packet and updates its neighbor list. When the rebroadcast-order timer of the node expires, the node obtains the final Uncovered Neighbors (UCN) set. The nodes belonging to the final UCN set are the nodes that still need to receive and process the RREQ packet. If a node does not sense any duplicate RREQ packets from its neighborhood, its UCN set is unchanged from the initial UCN set. Finally, the calculated coverage range is combined with the connectivity value and the probability value is set to 1.
C. Delay Aware and Route Stability Protocol (DARSP)
This protocol finds the neighbor node and the route stability using three metrics for neighbor coverage selection: 1) the estimated total energy to transmit and process a data packet; 2) the residual energy; 3) the path stability. A delivery probability of each node is used to select a link-stable path during dynamic route discovery. Delivery probabilities are synthesized locally from context information. We define context as the set of attributes that describe the aspects of the system that can be used to drive the process of message delivery. An example of context information is the change rate of connectivity, i.e., the number of connections and disconnections that a host experienced over the last T seconds. The process of prediction and evaluation of the context information in DARSP can be summarized as follows (a sketch of the selection rule in step 6 is given after this list):
1. Each host calculates its delivery probabilities for a given set of hosts.
2. This process is based on the calculation of utilities for each attribute describing the context.
3. The delivery probabilities calculated under the current status are periodically sent to the route request neighbor as part of the update of routing information.
4. Each host maintains a logical forwarding table of tuples describing the next logical hop and its associated delivery probability for all known destinations.
5. Each host uses local prediction of delivery probabilities between updates of information.
6. Each host selects the best forwarding node from its list of neighbors on the basis of the highest stability value.
7. The steps continue until the destination is reached.
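A minimal sketch of the selection rule in step 6, under the assumption that the logical forwarding table maps each destination to (neighbor, delivery probability) pairs; the function name and table layout are illustrative, not DARSP's actual data structures.

```python
# Assumed sketch of DARSP step 6: choose the next hop as the neighbor
# with the highest delivery probability (stability value).
def select_next_hop(forwarding_table, destination):
    """forwarding_table: dst -> list of (neighbor, delivery_probability)."""
    candidates = forwarding_table.get(destination, [])
    if not candidates:
        return None  # no known route; fall back to route discovery
    best_neighbor, _ = max(candidates, key=lambda c: c[1])
    return best_neighbor
```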
III. Protocol Implementation and Performance Evaluation
A. Protocol Implementation
The protocol is implemented in the NS-2 simulator by modifying the source code of the AODV protocol. The protocol needs Hello packets to obtain the neighbor information, and also needs to carry the neighbor list in the RREQ packet. The following techniques are used to reduce the overhead of the Hello packets and of the neighbor list in the RREQ packet: broadcast packets such as RREQ and route error (RERR) can play the role of Hello packets, and, to reduce the overhead of the neighbor list in the RREQ packet, each node monitors the variation of its neighbor table and maintains a cache of the neighbor list from received RREQ packets. For sending or forwarding RREQ packets, the neighbor table of any node n has the following three cases (sketched in code below):
1) The num_neighbors variable is set to a positive integer if the node adds a new neighbor.
2) The num_neighbors variable is set to a negative integer if the node deletes a neighbor from the neighbor table.
3) If the neighbor table of node n does not vary, node n does not need to list its neighbors and sets num_neighbors to 0.
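The three cases can be expressed as a small encoding routine. The sketch below is an assumed illustration (the paper gives no code); in particular, the fallback branch for simultaneous additions and deletions is a guess made only to keep the function total.

```python
# Assumed sketch of the neighbor-list compression cases described above.
def encode_neighbor_update(old_neighbors, new_neighbors):
    """Encode the neighbor-table change carried in an RREQ.

    old_neighbors / new_neighbors are sets of neighbor addresses.
    Returns (num_neighbors, payload).
    """
    added = sorted(new_neighbors - old_neighbors)
    deleted = sorted(old_neighbors - new_neighbors)
    if added and not deleted:
        return len(added), added          # case 1: neighbors added
    if deleted and not added:
        return -len(deleted), deleted     # case 2: neighbors deleted
    if not added and not deleted:
        return 0, []                      # case 3: table unchanged
    # Mixed additions and deletions: fall back to the full neighbor list
    # (not specified in the text; an assumption for completeness).
    return len(new_neighbors), sorted(new_neighbors)
```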

B. Simulation Environment
In order to evaluate the performance of the proposed protocol, we compare it with other protocols using the NS-2 simulator. Broadcasting is a fundamental and effective data dissemination mechanism for many applications in MANETs. To compare the routing performance of the proposed protocol, we choose the Dynamic Probabilistic Route Discovery (DPR) protocol, a recent optimization scheme for reducing the RREQ overhead incurred in route discovery, and the conventional AODV protocol.

Fig. 1: Normalized routing overhead with varied number of nodes

Fig. 1 shows the normalized routing overhead for different network densities. The proposed protocol can significantly reduce the routing overhead incurred during route discovery, especially in dense networks, because the RREQ traffic is reduced. In addition, for fairness, the statistics of normalized routing overhead include the Hello traffic. Even so, the NCPR protocol still yields the best performance, so the improvement in normalized routing overhead is considerable. On average, the overhead is reduced by about 45.9% in the NCPR protocol compared with the conventional AODV protocol. Under the same network conditions, the overhead is reduced by about 30.8% when the proposed protocol is compared with the DPR protocol. When the network is dense, the proposed protocol reduces the overhead by about 74.9% and 49.1% compared with the AODV and DPR protocols, respectively.

Fig. 2: Energy consumption comparison

Fig. 2 compares the energy consumption of the Neighbor Coverage with Probability Routing protocol and the Ad hoc On-demand Distance Vector routing protocol. The simulation results show that the proposed protocol consumes less energy than the AODV protocol, indicating lower resource utilization.

Fig. 3: Packet delivery ratio with varied random packet loss rate

Fig. 3 shows the packet delivery ratio with increasing packet loss rate. As the packet loss rate increases, the packet drops of all three routing protocols increase, and therefore the packet delivery ratios of all three protocols decrease as the packet loss rate increases. Neither the DPR nor the proposed protocol exploits any robustness mechanism against packet loss, but both of them reduce redundant rebroadcasts and so reduce the packet drops caused by collisions. Therefore, both the DPR and the neighbor information protocol have a higher packet delivery ratio than the conventional AODV protocol. On average, the packet delivery ratio is improved by about 11.5 percent in the neighbor information protocol compared with the conventional AODV protocol; in the same situation, the neighbor information protocol improves the packet delivery ratio by about 1.1 percent compared with the DPR protocol.
IV. Conclusion
In this paper we proposed a probabilistic rebroadcast protocol based on neighbor information, together with a delay aware and route stability protocol, to reduce the routing overhead and maintain path stability in MANETs. The neighbor information includes the coverage range and the connectivity value. We proposed a new scheme to dynamically calculate the rebroadcast order, which determines the forwarding order of the packets. The Delay Aware and Route Stability Protocol is used to select the path that is stable for the longest duration. Simulation results show that the proposed protocol generates less rebroadcast traffic than flooding and other schemes. Because of fewer redundant rebroadcasts, the proposed protocol mitigates network collision and contention, so as to increase the packet delivery ratio and decrease the average end-to-end delay. The simulation results also show that the delay aware and route stability protocol improves the routing performance by choosing the most stable nodes while reducing energy consumption.
V. References

[1] X. M. Zhang, E. B. Wang, J. Xia, and D. K. Sung, "A Neighbor Coverage-Based Probabilistic Rebroadcast for Reducing Routing Overhead in Mobile Ad Hoc Networks," 2012.
[2] C. Perkins, E. Belding-Royer, and S. Das, "Ad hoc On-Demand Distance Vector (AODV) Routing," RFC 3561, 2003.
[3] D. Johnson, Y. Hu, and D. Maltz, "The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR) for IPv4," RFC 4728, 2007.
[4] H. AlAamri, M. Abolhasan, and T. Wysocki, "On Optimising Route Discovery in Absence of Previous Route Information in MANETs," Proc. IEEE VTC 2009-Spring, pp. 1-5, 2009.
[5] X. Wu, H. R. Sadjadpour, and J. J. Garcia-Luna-Aceves, "Routing Overhead as a Function of Node Mobility: Modeling Framework and Implications on Proactive Routing," Proc. IEEE MASS'07, pp. 1-9, 2007.
[6] S. Y. Ni, Y. C. Tseng, Y. S. Chen, and J. P. Sheu, "The Broadcast Storm Problem in a Mobile Ad Hoc Network," Proc. ACM/IEEE MobiCom'99, pp. 151-162, 1999.
[7] A. Mohammed, M. Ould-Khaoua, L. M. Mackenzie, C. Perkins, and J. D. Abdulai, "Probabilistic Counter-Based Route Discovery for Mobile Ad Hoc Networks," Proc. WCMC'09, pp. 1335-1339, 2009.
[8] B. Williams and T. Camp, "Comparison of Broadcasting Techniques for Mobile Ad Hoc Networks," Proc. ACM MobiHoc'02, pp. 194-205, 2002.
[9] J. Kim, Q. Zhang, and D. P. Agrawal, "Probabilistic Broadcasting Based on Coverage Area and Neighbor Confirmation in Mobile Ad Hoc Networks," Proc. IEEE GLOBECOM'04, 2004.
[10] J. D. Abdulai, M. Ould-Khaoua, and L. M. Mackenzie, "Improving Probabilistic Route Discovery in Mobile Ad Hoc Networks," Proc. IEEE Conf. Local Computer Networks, pp. 739-746, 2007.
[11] Z. Haas, J. Y. Halpern, and L. Li, "Gossip-Based Ad Hoc Routing," Proc. IEEE INFOCOM, vol. 21, pp. 1707-1716, 2002.
[12] W. Peng and X. Lu, "On the Reduction of Broadcast Redundancy in Mobile Ad Hoc Networks," Proc. ACM MobiHoc, pp. 129-130, 2000.
[13] J. D. Abdulai, M. Ould-Khaoua, L. M. Mackenzie, and A. Mohammed, "Neighbour Coverage: A Dynamic Probabilistic Route Discovery for Mobile Ad Hoc Networks," Proc. Int'l Symp. Performance Evaluation of Computer and Telecomm. Systems (SPECTS '08), pp. 165-172, 2008.
[14] J. Chen, Y. Z. Lee, H. Zhou, M. Gerla, and Y. Shu, "Robust Ad Hoc Routing for Lossy Wireless Environment," Proc. IEEE Conf. Military Comm. (MILCOM '06), pp. 1-7, 2006.
[15] A. Keshavarz-Haddad, V. Ribeiro, and R. Riedi, "DRB and DCCB: Efficient and Robust Dynamic Broadcast for Ad Hoc and Sensor Networks," Proc. IEEE SECON '07, pp. 253-262, 2007.
[16] F. Stann, J. Heidemann, R. Shroff, and M. Z. Murtaza, "RBP: Robust Broadcast Propagation in Wireless Networks," Proc. Int'l Conf. Embedded Networked Sensor Systems (SenSys '06), pp. 85-98, 2006.
[17] F. Xue and P. R. Kumar, "The Number of Neighbors Needed for Connectivity of Wireless Networks," Wireless Networks, vol. 10, no. 2, pp. 169-181, 2004.
[18] X. M. Zhang, E. B. Wang, J. J. Xia, and D. K. Sung, "An Estimated Distance-Based Routing Protocol for Mobile Ad Hoc Networks," IEEE Trans. Vehicular Technology, vol. 60, no. 7, pp. 3473-3484, Sept. 2011.

Studies on glass reinforced laminates based on amide oligomers – epoxy resin based thermosetting resin blends
Mahendrasinh M Raj
Institute of Science & Technology for Advanced Studies & Research (ISTAR), Vallabh Vidyanagar - 388120, Anand, Gujarat, INDIA
Abstract: In many applications thermosets are the materials of choice for long-term use because they are insoluble and infusible high-density networks. Amide oligomers were synthesized from urea, ethylene diamine and melamine with bisphenol A and benzene-1,4-diol. The amide oligomers were characterized by number average molecular weight (Mn), % nitrogen, % formaldehyde and IR spectral studies. Glass reinforced laminates of amino resin – epoxy resin were prepared and characterized for their synergistic thermal stability by thermogravimetric analysis (TGA), mechanical properties, chemical resistance and fibre content.
Key Words: laminates, oligomer, thermosets
I. Introduction
Over the past few decades composites, plastics and ceramics have been the dominant engineering materials. The areas of application of composite materials have grown rapidly and have even found new markets. Modern composite materials are used in many day-to-day products as well as in sophisticated applications. While composites have already proven their worth as weight-saving materials, the current challenge is to make them durable in tough conditions so they can replace other materials, and also to make them cost effective; this has resulted in the development of many new techniques currently used in industry. New polymer resin matrix materials and high performance fibres of glass, carbon and aramid, introduced in recent years, have resulted in a steady expansion in the uses and volume of composites, with a corresponding reduction in cost. High performance FRPs are found in many diverse applications such as composite armoring designed to resist the impact of explosions, wind mill blades, industrial shafts, fuel cylinders for natural gas vehicles, paper making rollers and even support beams of bridges. Existing structures that have to be retrofitted to make them seismically resistant, or to repair damage caused by seismic activity, are also strengthened with the help of composite materials. The development of advanced composite materials having superior mechanical properties has opened new horizons in the engineering field. Advantages such as corrosion resistance, electrical insulation, reduction in tooling and assembly costs, low thermal expansion, higher stiffness and strength, fatigue resistance, and greater stiffness at lower weight than metals have made polymer composites widely acceptable in structural applications. However, the disadvantages of composite materials cannot be ignored: their complex nature, designers' lack of experience, limited material databases and difficulty in manufacturing are barriers to the large-scale use of composites. Various experimental approaches have been developed to investigate these properties. Liang et al. [1] studied the interfacial properties and their impact on the tensile strength of unidirectional composites. Sreekala et al. [2] found that a significant decrease in flexural strength occurred at the highest EFB fibre volume fraction of 100%, owing to increased fibre-to-fibre interactions and dispersion problems that result in low mechanical properties of the composite. Yamamoto et al.
[3] reported that the structure and shape of silica particles have significant effects on mechanical properties such as fatigue resistance, tensile and fracture properties. Singha et al. [4] reported a study on the synthesis and mechanical properties of a new series of green composites involving Hibiscus sabdariffa fibre as a reinforcing material in a urea–formaldehyde (UF) resin based polymer matrix. Mahapatra et al. [5] described the development of multi-phase hybrid composites consisting of polyester reinforced with E-glass fibre and ceramic particulates. Aruniit et al. [6] studied how the filler percentage in a composite influences the mechanical properties of the material. Ibrahim [7] investigated the effects of reinforcing polymers with glass and graphite particles on their flexural properties. In thermosetting polymers, the liquid resin is converted into a hard rigid solid by chemical cross-linking, which leads to the formation of a tightly bound, three-dimensional network. The mechanical properties depend on the molecular units making up the network, the chain length between cross-links, and the cross-link density [8-10]. There are different types of amine curing agents: (1) aliphatic, like triethylenetetramine (TETA); (2)

cycloaliphatic; (3) aromatic, like diaminodiphenylmethane (DDM); (4) polyamine adducts; etc. Numerous grades of epoxy resins and curing agents are formulated for a wide variety of applications [11]. The objective of this work is to investigate the effect of an amide oligomer – epoxy blended composite reinforced with glass fibre.
II. Experimental
A. Materials
All the chemicals used were of laboratory grade. E-type woven glass fabric, 0.25 mm thick, was used for laminate fabrication. Commercial epoxy resin, DGEBA (diglycidyl ether of bisphenol A), was obtained from Synpol, Ahmedabad, Gujarat, India. It had an epoxy equivalent of 200, estimated by the standard method [12].
B. Synthesis of amide oligomers
Dimethylol urea and trimethylol urea were synthesized according to the standard method reported in the literature [13].
C. Synthesis of resin based on dimethylol urea + bisphenol A (UBA)
Dimethylol urea (120 g), bisphenol A (228 g) and THF (60 ml) as solvent were taken in a three-neck flask. The ingredients were mixed thoroughly at 60°C and concentrated HCl was added as a catalyst. The temperature was then raised to 80-85°C for 7 hours. After the preparation of the resin, the mixture was allowed to cool.
D. Synthesis of resin based on ethylene diamine + bisphenol A (EBA)
Ethylene diamine (120 g), toluene (200 ml) as solvent, formalin (162 g) and HCl (2 ml) as catalyst were taken in a three-neck flask. After a period of heating at 50-60°C, bisphenol A (228 g) was added and the mixture was stirred for 2-3 hours at 90°C to form the resin. In this process a Dean-Stark condenser was used, because toluene is removed from the system on heating and collected in a small beaker.
E. Synthesis of resin based on trimethylol melamine + bisphenol A (MBA)
Trimethylol melamine (TMM) (252 g), water (70 ml) as solvent and bisphenol A (114 g) were taken in a three-neck flask. After mixing these ingredients, aqueous KOH was added as a catalyst and the temperature was raised to 60-90°C; after 2-3 hours the resin was formed.
F. Synthesis of resin based on dimethylol urea + benzene-1,4-diol (UB)
Dimethylol urea (120 g), THF (tetrahydrofuran) (60 ml) as solvent and benzene-1,4-diol (228 g) were taken in a three-neck flask, concentrated HCl was added as a catalyst, and the mixture was refluxed at 60°C, raising the temperature to 80-95°C for 3-4 hours. After the preparation of the resin, the mixture was allowed to cool.
G. Synthesis of resin based on ethylene diamine + benzene-1,4-diol (EB)
Ethylene diamine (120 g), toluene (200 ml) as solvent and formalin (160 g) were mixed in a three-neck flask, and benzene-1,4-diol (110 g) and concentrated HCl (2 ml) as catalyst were then added. A Dean-Stark condenser was used, because toluene is removed from the system on heating and collected in a small beaker. The temperature was raised to 60-100°C for three hours.
H. Synthesis of resin based on trimethylol melamine + benzene-1,4-diol (MB)
Trimethylol melamine (252 g) and water (70 ml) as solvent were taken in a three-neck flask, benzene-1,4-diol (110 g) was added and the ingredients were mixed; aqueous KOH was added as a catalyst and the temperature was raised to 60-90°C for 3-4 hours, after which the resin was formed.
III. Characterization
FTIR spectra of all the amide oligomers were recorded using a Perkin Elmer Lambda-19 FTIR spectrometer, employing a thin layer of the sample on a KBr cell.
The number average molecular weights of the amide oligomers were measured using a Knauer (Germany, K-7000) vapour pressure osmometer with DMF as solvent, as per ASTM D 3592-77. The corresponding % nitrogen (by the Kjeldahl method) and % formaldehyde of the amide oligomers were determined by standard methods reported in the literature [12,14]. The viscosity of the amide oligomers was measured with a Brookfield RVF viscometer as per ASTM D 1824.
IV. Laminate Fabrication
Glass reinforced laminates were prepared employing the hand lay-up prepreg formation technique. The composites were fabricated using a resin to glass fabric ratio of 60:40. The resin was prepared and stirred for 5-10 minutes, and the resultant mixture was applied on a glass fabric (15 X 12) with a brush and allowed to dry. Ten such dried prepregs prepared in a similar way were stacked one over the other and pressed between two steel plates, using a Teflon sheet as mould releasing agent. Subsequently, these plates were pre-cured in a compression moulding machine at 65-70°C for 10 minutes. The curing temperature was then raised to 150°C and a pressure of 150-200 psi was applied for 15-20 minutes. After completion of curing, the plates were cooled to 50°C before the pressure was released. These composites were tested as per various standard methods [15]. Specimens of dimensions 130 mm X 12.7 mm were taken for measurement of flexural strength as per ASTM D 790, employing an Instron testing machine; the tests were carried out at a crosshead speed of 100 mm/min and a span length of 70 mm. The Izod impact strength of un-notched samples of dimensions 70 mm X 13 mm was measured as per ASTM D 256.

Rockwell hardness of the laminates was measured employing specimens of dimensions 25 mm X 25 mm as per ASTM D 785. All the mechanical tests were conducted at room temperature. Chemical resistance tests were carried out as per ASTM D 543; test specimens of dimensions 76.2 mm X 25.4 mm were tested in 10% NaOH, 10% H2SO4, dioxane, DMF and water. The thermal stability of all glass reinforced laminates was determined by thermogravimetric analysis (TGA) on a Dupont 950 thermogravimetric analyzer at a heating rate of 10°C/min in an air atmosphere; the weight loss at different temperatures was recorded in the range of 40-600°C.
V. Results & Discussion
The amide oligomers were obtained by polycondensation and characterized in several ways. The % nitrogen of all the amide oligomers was determined by the Kjeldahl method. The number average molecular weights (Mn) of all the amide oligomers were in the range of 3000-3500 g/mol, and their viscosities were in the range of 180-380 cP. The results are shown in Table 1. The FTIR spectra of the amide oligomers revealed the following. A strong and sharp band at 1510 cm-1, characteristic of the p-substituted derivative of bisphenol A, was observed. The other strong band around 1380 cm-1 is probably assigned to the methyl group of bisphenol A. Similarly, a strong C-O-C stretching band with its maximum between 1255-1245 cm-1 is due to the presence of aromatic-aliphatic ether. Two distinct vibrational frequencies at 2945 and 2880 cm-1 are characteristic of the aliphatic C-H stretching vibration of the bridged methylene groups of the amide oligomers. Absorption bands corresponding to the aromatic ring hydrogen out-of-plane stretching vibration of 1,4-substituted rings were characterized by a band at about 833 cm-1. The band in the range of 1625-1440 cm-1, due to C=C stretching, arises from monosubstituted benzene and its derivatives. The C-O stretch of the CH2-O-CH2 band in the range of 1150-1060 cm-1 is due to ethers, and the stretching band in the range of 1720-1700 cm-1 is due to the ketone C=O group. A characteristic strong band at 1720-1727 cm-1 in all amide oligomers is in good agreement with data reported in the literature for different classes of amide oligomers [16]. The mechanical properties of the prepared glass reinforced laminates based on amide oligomers – epoxy resin are presented in Table 3. The flexural and impact strength values of the glass reinforced laminates based on amide oligomers – epoxy showed synergistic effects. This behavior can be explained on the basis of permanent physical chain entanglements resulting from the presence of the aromatic moiety of bisphenol A and benzene-1,4-diol [17]. The Rockwell hardness of all glass reinforced laminate samples was found in the range of 85 to 100 on the R scale at 23°C and 50% RH. The thermal stability of all amide oligomer – epoxy resin based glass reinforced laminates was determined by TGA on a Dupont 950 thermogravimetric analyzer at a heating rate of 10°C/min in an air atmosphere. To assess the thermal stability, the percentage weight loss at different temperatures was calculated; the thermogravimetric data are shown in Table 2, and the thermograms of all amide oligomer – epoxy resin based glass reinforced laminates are shown in Figures 1 and 2. The thermograms of all the laminates lead to the following conclusions.
In all composites, about 4 to 6% weight loss was observed up to 200°C and about 15 to 25% weight loss up to 350°C. Above this temperature range a rapid weight loss was observed around 400°C; furthermore, the thermograms of all composites reveal that about 50% of the weight loss occurred in the temperature range of 400-450°C. The thermogravimetric studies indicate good interpenetration in the thermosetting epoxy resin composites, and these composites have very good thermal stability, which proves their suitability for high performance applications. There are different methods for the determination of the fiber content of a glass reinforced composite. In the acid dissolution method, a specimen of specific dimensions is dissolved in strong H2SO4, and the result obtained is in good agreement with the matrix : reinforcement ratio. The results of chemical resistance tests (ASTM D 543) with various reagents are presented in Table 4. Examination of these data indicated that all of the amide oligomer – epoxy resin based glass reinforced laminates were largely stable in 1,4-dioxane, DMF and distilled water. However, very slight surface loss and only small changes in weight and thickness were observed in standard sodium hydroxide solution.
VI. Conclusion
The synergistic effects observed in the mechanical properties exhibited by all amide oligomer – epoxy resin based glass reinforced laminates confirm that interpenetration has occurred between the amide oligomer and the epoxy resin. The laminates exhibited higher flexural strength, impact strength and chemical resistance than the individual network components. Thus, amide oligomer – epoxy resins reinforced with glass fibers can be utilized commercially as excellent polymer matrices for engineering applications.
VII. References

[1] J. Z. Liang, R. K. Y. Li, S. C. Tjong, "Morphology and tensile properties of glass bead filled low density polyethylene composites," Polymer Testing, 16, 529-548, 2001.
[2] M. S. Sreekala, B. Jayamol, M. G. Kumaran, S. Thomas, "The mechanical performance of hybrid phenol-formaldehyde based composites reinforced with glass and oil palm fibres," Composites Science and Technology, 62, 339-353, 2002.
[3] I. Yamamoto, T. Higashihara, T. Kobayashi, "Effect of silica-particle characteristics on impact/fatigue properties and evaluation of mechanical characteristics of silica-particle epoxy resins," Int. J. JSME, 46(2), 145-153, 2003.
[4] A. S. Singha, V. K. Thakur, "Mechanical properties of natural fibre reinforced polymer composites," Bull. Mater. Sci., 31(5), 791-799, 2008.
[5] S. S. Mahapatra, A. Patnaik, "Study on mechanical and erosion wear behaviour of hybrid composites using Taguchi experimental design," Materials and Design, 30, 2791-2801, 2009.
[6] A. Aruniit, J. Kers, K. Tall, "Influence of filler proportion on mechanical and physical properties of particulate composite," Agronomy Research Biosystem Engineering, 1, 23-29, 2011.
[7] A. A. Ibrahim, "Flexural properties of glass and graphite particles filled polymer composites," Journal of Pure and Applied Science, 24(1), 2011.
[8] D. Hull and T. W. Clyne, An Introduction to Composite Materials, 2nd ed., Cambridge University Press, New York, 1996.
[9] H. Lee and K. Neville, Handbook of Epoxy Resins, McGraw-Hill, New York, pp. 5.2-5.13, 1967.
[10] G. Odian, Principles of Polymerization, John Wiley and Sons, New York, p. 134, 1991.
[11] A. J. Kinloch, Adhesion and Adhesives: Science and Technology, 1st ed., Chapman and Hall, 1987.
[12] A. P. Jain and T. J. John, Polym. Syn. Appl., 253, 1997.
[13] A. I. Vogel, Textbook of Practical Organic Chemistry, Longman, London, 1989.
[14] Encyclopedia of Polymer Science and Engineering, John Wiley & Sons, New York, 6, 225, 2005.
[15] Annual Book of ASTM Standards, 2012.
[16] D. O. Hummel, Polymer Spectroscopy, Zechnersche Buchdruckerei, 6, 1971.
[17] S. C. Kim, D. Klempner, K. C. Frisch and H. L. Frisch, "Polyurethane-polystyrene interpenetrating polymer networks," Polym. Eng. Sci., 15, 339-342, 1975.

Table 1: Characterization of amide oligomers

Sr. No.   Resin system   % Nitrogen   % Free formaldehyde   Number average molecular weight (Mn)
1         UBA            3.717        13.5                  3420
2         EBA            14.868       15.41                 2980
3         MBA            1.386        17.17                 3527
4         UB             5.355        13.00                 3380
5         EB             6.993        15.16                 3000
6         MB             3.213        17.18                 3480

Table 2: Thermogravimetric analysis (TGA) of laminates based on amide oligomer – epoxy resin systems (% weight loss at different temperatures, °C)

Sr. No.  Resin system        Color         100    150    200    250    300     350     400     450     500     550     600
1        UBA – Epoxy resin   Light yellow  0.725  2.899  4.349  6.523  10.146  18.842  28.988  31.887  34.05   35.502  38.487
2        EBA – Epoxy resin   Yellow        1.5    2.99   5.23   11.2   29.11   44.63   50      51.5    53.74   55.92   58.27
3        MBA – Epoxy resin   Light yellow  0.73   1.46   2.9    5.74   9.43    15.22   18.85   23.92   31.16   37.69   41.31
4        UB – Epoxy resin    Brown         1.48   2.95   6.62   10.3   17.65   25.00   41.91   53.68   56.62   61.03   65.45
5        EB – Epoxy resin    Brown         1.48   2.95   6.62   10.3   17.65   25.0    41.91   53.68   56.62   61.03   65.45
6        MB – Epoxy resin    Brown         0.75   1.49   2.23   3.71   7.42    12.6    18.52   43.71   45.19   46.67   48.15
Figure 1: Thermogravimetric analyses of amide oligomer – epoxy resin glass reinforced laminates (Series 1: UBA; Series 2: EBA; Series 3: MBA)

Figure 2: Thermogravimetric analyses of amide oligomer – epoxy resin glass reinforced laminates (Series 1: UB; Series 2: EB; Series 3: MB)

Table 3: Mechanical properties of amide oligomer – epoxy resin glass reinforced laminates

Sr. No.  Resin system        Impact strength (J/cm)  Flexural strength (kgf/cm)  Rockwell hardness (R scale)
1        UBA – Epoxy resin   400                     430                         85
2        EBA – Epoxy resin   350                     500                         90
3        MBA – Epoxy resin   410                     550                         95
4        UB – Epoxy resin    450                     600                         100
5        EB – Epoxy resin    500                     640                         90
6        MB – Epoxy resin    480                     620                         90

Table 4: Chemical resistance & fibre content of glass reinforced laminates of amide oligomer – epoxy resin to standard reagents (% change in weight / % change in thickness; NC = no change)

Sr. No.  Resin system        10% NaOH      10% H2SO4     1,4-Dioxane  DMF      Water
1        UBA – Epoxy resin   1.84 / 1.02   0.84 / 0.60   NC / NC      NC / NC  NC / NC
2        EBA – Epoxy resin   1.85 / 0.72   0.80 / 0.65   NC / NC      NC / NC  NC / NC
3        MBA – Epoxy resin   1.75 / 1.05   0.85 / 0.55   NC / NC      NC / NC  NC / NC
4        UB – Epoxy resin    1.70 / 0.85   0.75 / 0.75   NC / NC      NC / NC  NC / NC
5        EB – Epoxy resin    1.80 / 0.99   0.84 / 0.90   NC / NC      NC / NC  NC / NC
6        MB – Epoxy resin    1.82 / 1.00   0.84 / 0.90   NC / NC      NC / NC  NC / NC

Relationship Between Daily Variation of CRI, Solar Activity and Interplanetary Conditions
R.K. Tiwari1, Ajay K. Pandey2 and Anuj Hundet3
1,2 Deptt. of Physics, Govt. New Science College, Rewa (M.P.), India
3 Deptt. of Higher Education, Bhopal (M.P.), India

Abstract: The solar wind data available during 1965-2012 have been utilized to study the large scale features of the interplanetary medium and their effect on geomagnetic activity. The amplitude of the daily variation of cosmic ray intensity (CRI) is correlated with the IMF B, the solar wind velocity V and the product V.B, which show a very high positive correlation for the ascending phase of the solar cycle. A similar analysis is carried out for the descending phase of the solar cycle for the same periods; the correlation is slightly lower in comparison with the values for the ascending phase. For both the ascending and descending phases of the solar cycle, for the years 1965-68, 1987-89, 1997-2000 and 2009-2012, CRI is strongly anticorrelated with the solar flux. The Ap values are also correlated with the amplitude of the first harmonic of the daily variation, the IMF B, the solar wind velocity V and the product V.B for the same period. The analysis shows that the correlation is positive and high among the parameters under consideration for both the ascending and descending phases of the solar cycles, while the correlation of the solar flux with Ap is strongly negative for most of the solar cycles. The same analysis performed between the first harmonic amplitude, the product V.B and the Ap values for the complete solar activity cycles (SC 20 - SC 24) shows that the product V.B and the Ap values have almost similar correlations with the first harmonic amplitude for solar cycles 20-24. We can conclude that V.B is a rather prominent and reliable indicator of solar activity variations. It is also observed that the even solar cycles (SC22, SC24) are more strongly correlated between the first harmonic amplitude and the V.B & Ap values than the odd solar cycles (SC21, SC23).
Keywords: Daily variation, Ap index, Interplanetary magnetic field, Solar wind velocity, V.B
I. Introduction
Cosmic ray ground observations are used to study the features of the variational primary energy spectrum of the diurnal and eleven-year variations of the neutron component of cosmic rays. When cosmic rays are subjected to heliospheric modulation, with their intensity and spectrum changing during the 11-year solar cycle, the diurnal variation amplitude is generally most sensitive to the interplanetary field direction and the solar wind velocity. Geomagnetic activity is mainly controlled by variations in the solar wind and interplanetary magnetic field (IMF) features. The direction of the field in the Sun's northern hemisphere is opposite to that of the field in the southern hemisphere and reverses at solar maximum. The strength of the IMF and its fluctuations have also been shown to be among the most important parameters affecting geomagnetic conditions. A southward-directed IMF allows sufficient energy transfer from the solar wind into the Earth's magnetosphere through magnetic reconnection. The solar wind continuously emanates from the Sun's outer corona and engulfs the entire heliosphere. The density and speed of this flow are highly variable and depend on the conditions which have caused its ejection. The strength and orientation of the IMF associated with the solar wind depend upon the interaction between the slow and fast solar wind originating from coronal holes, which leads to the creation of co-rotating interaction regions. The long-term variation of galactic cosmic ray intensity and its association with various solar, interplanetary and geophysical parameters have been studied by many authors [1]-[6].
Systematic and significant deviations in the amplitude/phase of the diurnal/semi-diurnal anisotropy from the average values are known to occur in association with strong geomagnetic activity [7]-[12]. The diurnal variation might be influenced by the polarity of the magnetic field, so that the largest diurnal variation is observed during days when the daily average magnetic field is directed outward from the Sun. On days of high geomagnetic disturbance, the daily variation exhibits abrupt changes indicative of the source being situated at a distance shorter than the range of the geomagnetic field. The relation between the diurnal variation and geomagnetic conditions has been studied by many authors on both long-term and short-term bases for geomagnetically quiet days [13],[14]. They found that Ap is the best-fitting parameter, on both short-term and long-term bases, for studying quiet geomagnetic conditions.

II. Data Analysis
For the study of the daily variation of cosmic ray intensity (CRI), we have used the pressure- and temperature-corrected hourly counts of the Kiel neutron monitor station for the period 1965 to 2012, which covers solar cycles 20-24. The data for the sunspot number (Rz), the geomagnetic activity parameter (Ap) and the solar wind velocity (V) for the period 1986-2010 were obtained from the OMNI data centre. A computer program was used to derive the amplitudes and phases of the first harmonic of the daily variation; the harmonic analysis technique was adopted to derive the various characteristics of the anisotropic (daily) variation of cosmic ray intensity. A sketch of this harmonic fit is given at the end of Section III.
III. Result and Discussion
It is well known that cosmic ray intensities are modulated by solar activity cycles. We have performed a correlative analysis between the first harmonic amplitude of the CR intensity variation and various other interplanetary parameters. The amplitude of the daily variation is correlated with the IMF B, the solar wind velocity V and the product V.B, which show a very high positive correlation for the ascending phase of the solar cycle. A similar analysis carried out for the descending phase of the solar cycle for the same periods gives correlations slightly lower than the values for the ascending phase. For both the ascending and descending phases of the solar cycle, for the years 1965-68, 1987-89, 1997-2000 and 2009-2012, CRI is strongly anticorrelated with the solar flux, as shown in Table 1. The Ap values are also correlated with the solar flux, the IMF B, the solar wind velocity V and the product V.B for the same period. The analysis shows that the correlation is positive and high among the parameters under consideration for both the ascending and descending phases of the solar cycles, while the correlation of the solar flux with Ap is strongly negative for most of the solar cycles, as shown in Table 2. The same analysis performed between the first harmonic amplitude, the product V.B and the Ap values for the complete solar activity cycles (SC 20-24) shows that the product V.B and the Ap values have almost similar correlations with the first harmonic amplitude for solar cycles 20-24 (as shown in Figs. 1 and 2). We can conclude that V.B is a rather prominent and reliable indicator of solar activity variations. It is also observed that the even solar cycles (SC22, SC24) are more strongly correlated between the first harmonic amplitude and the V.B & Ap values than the odd solar cycles (SC21, SC23). The solar wind speed V and the IMF parameters are responsible for the transport and anisotropy of energetic cosmic ray particles in interplanetary space. The high-velocity solar wind fluxes associated with coronal holes give rise to both isotropic and anisotropic variations in cosmic ray intensity. The IMF magnitude and its fluctuations are responsible for the depression of cosmic ray intensity during high-speed solar wind events [14]-[16]. A strong correlation between cosmic ray intensity and solar wind velocity is found, especially in the period of maximum solar activity. The relation of cosmic ray intensity to solar wind velocity depends on the physical conditions in interplanetary space varying with the solar activity, and also on disturbances coming from intergalactic regions. The present analysis will be extended to the phase of the first harmonic and to higher harmonics of CRI with the solar activity and geomagnetic activity parameters.
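The harmonic fit mentioned in Section II can be sketched as follows. This is not the authors' program: the function names are illustrative, and the sketch simply extracts the 24-hour (first harmonic) amplitude, in percent, and the local time of maximum from one day of hourly counts, plus a helper for the correlation coefficients of the kind reported in Tables 1 and 2.

```python
# Illustrative sketch (assumed, not the authors' program) of first-harmonic
# analysis of the daily variation and of the correlation analysis.
import numpy as np

def first_harmonic(hourly_counts):
    """Return (amplitude in %, time of maximum in hours) of the 24-h wave."""
    counts = np.asarray(hourly_counts, dtype=float)
    deviation = 100.0 * (counts - counts.mean()) / counts.mean()
    hours = np.arange(24)
    omega = 2.0 * np.pi / 24.0          # angular frequency of 1st harmonic
    a = (2.0 / 24.0) * np.sum(deviation * np.cos(omega * hours))
    b = (2.0 / 24.0) * np.sum(deviation * np.sin(omega * hours))
    amplitude = np.hypot(a, b)          # % amplitude of the diurnal wave
    phase = (np.arctan2(b, a) / omega) % 24.0   # hour of maximum
    return amplitude, phase

def correlate(amplitudes, parameter):
    """Pearson correlation coefficient between two yearly series."""
    return float(np.corrcoef(amplitudes, parameter)[0, 1])
```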
IV. Acknowledgement
The authors are thankful to the World Data Centers (NGDC and OMNIWeb), from which the data used in the present analysis were obtained.
V. References

[1] S. E. Forbush, "Cosmic-ray intensity variations during two solar cycles," J. Geophys. Res., 63, 651, 1958.
[2] U. R. Rao, A. G. Ananth and S. P. Agrawal, "Characteristics of quiet as well as enhanced diurnal anisotropy of cosmic rays," Planetary Space Sci., 20, 1799, 1972.
[3] M. A. Pomerantz and S. P. Duggal, "The sun and cosmic rays," Rev. Geophys. Space Phys., 12(3), 1974.
[4] H. Moraal, Space Sci. Rev., 19, 845, 1976.
[5] C. J. Hatton, "Solar flares and the cosmic ray intensity," Solar Phys., 66, 159, 1980.
[6] K. Nagashima and I. Morishita, "Long term modulation of cosmic rays and inferable electromagnetic state in solar modulating region," Planet. Space Sci., 28, 177, 1980.
[7] D. Venkatesan and Badruddin, "Cosmic ray intensity variation in 3-dimensional heliosphere," Space Sci. Rev., 52, 121, 1990.
[8] V. K. Balasubrahmanyan, "Solar activity and the 11-year modulation of cosmic rays," 7, 39, 1969.
[9] H. Mavromichalaki and B. Petropoulos, "Time-lag of cosmic-ray intensity," Astrophys. Space Sci., 106, 61-71, 1984.
[10] Ajay K. Pandey, R. K. Tiwari, Brijesh K. Mishra and Shailendra Singh, "Study on the quiet conditions in the interplanetary medium and its association with diurnal variation of cosmic ray intensity," AIJRSTEM, 4(2), Sep.-Nov. 2013, pp. 74-78.
[11] Brijesh K. Mishra, Pankaj K. Shrivastava, R. K. Tiwari and Ajay K. Pandey, "Study of the average characteristics of diurnal variation of cosmic ray intensity for the period of 1986-2010," IJETCAS 13-361, 2013, p. 330.
[12] Brijesh K. Mishra, Pankaj K. Shrivastava, R. K. Tiwari and Ajay K. Pandey, "Long term average observation of first two harmonics of cosmic ray daily variation," IJETCAS 13-558, 2013, p. 201.
[13] C. M. Tiwari, D. P. Tiwari, Ajay K. Pandey and Pankaj K. Shrivastava, "Average anisotropy characteristics of high energy cosmic ray particles and geomagnetic disturbance index Ap," J. Astrophys. Astr., 26, 429-434, 2005.


[14] Rekha Agarwal Mishra and Rajesh K. Mishra, "Effect of solar heliospheric parameters and geomagnetic activity on long-term cosmic ray anisotropy," Proc. 29th ICRC, Pune, 2005, pp. 101-104.
[15] I. Sabbah, A. A. Darwish and A. A. Bishara, "Characteristics of two-way cosmic ray diurnal anisotropy," Solar Phys., 181, 469-477, 1998.
[16] I. Sabbah, "Persistent north-south asymmetry of the daily interplanetary magnetic field spiral," J. Geophys. Res., 101, 2485, 1996.

Table 1: The correlation coefficients between the first harmonic of daily variation and different solar & geomagnetic activity parameters for the ascending and descending phases of the solar cycles

Ascending phase
Years     | Amp vs B | Amp vs V | Amp vs Solar flux | Amp vs V.B | Amp vs Ap
1965-68   | 0.89     | 0.77     | -0.87             | 0.97       | 0.86
1977-79   | 0.89     | 0.74     | -1.00             | 0.95       | 0.97
1987-89   | 0.99     | 0.83     | -0.74             | 0.97       | 0.90
1997-2000 | 0.91     | 0.97     | -0.73             | 0.96       | 0.89
2009-2012 | 0.96     | 0.93     | -0.63             | 0.98       | 0.95

Descending phase
Years     | Amp vs B | Amp vs V | Amp vs Solar flux | Amp vs V.B | Amp vs Ap
1969-76   | 0.49     | -0.02    | -0.58             | 0.22       | -0.04
1980-86   | 0.44     | 0.14     | -0.24             | 0.44       | 0.44
1990-96   | 0.77     | 0.44     | -0.50             | 0.83       | 0.83
2001-2008 | 0.67     | 0.06     | -0.28             | 0.56       | 0.50

Table 2: The correlation coefficients between Ap values and different solar & interplanetary parameters for the ascending and descending phases of the solar cycles

Ascending phase
Years     | Ap vs B | Ap vs V | Ap vs Solar flux | Ap vs V.B
1965-68   | 0.80    | 0.83    | -0.89            | 0.94
1977-79   | 0.76    | 0.87    | -0.97            | 0.85
1987-89   | 0.95    | 0.99    | -0.37            | 0.98
1997-2000 | 0.95    | 0.94    | -0.59            | 0.97
2009-2012 | 0.99    | 0.78    | -0.83            | 0.97

Descending phase
Years     | Ap vs B | Ap vs V | Ap vs Solar flux | Ap vs V.B
1969-76   | 0.56    | 0.88    | 0.67             | 0.95
1980-86   | 0.85    | 0.60    | -0.52            | 0.97
1990-96   | 0.85    | 0.56    | -0.34            | 0.96
2001-2008 | 0.85    | 0.80    | -0.86            | 0.97


Fig. 1: The correlation plot between the first-harmonic amplitude (%) and the geomagnetic activity parameter Ap for SC20-SC24


Fig. 2: The correlation plot between the first-harmonic amplitude (%) and the product V.B for SC20-SC24


CME Association of Geomagnetic Storms Occurring in January-December 2011
Ruchi Nigam1 and R. K. Tiwari2
1Jabalpur Engineering College, Jabalpur, India, 482011
2Govt. New Science College, Rewa, India, 486001

Abstract: The period of January-December 2011 was an active part of the 24th solar cycle, close to solar maximum, during which a significant number of powerful X-ray flares were recorded. In this solar cycle, the maximum sunspot number at 1-hour resolution is 136 (OMNI data). During this period the Sun became very active and numerous active regions appeared on the solar surface. The solar sources of 11 selected geomagnetic storms (disturbance storm time index Dst < -50 nT) that occurred in January-December 2011 have been identified. It is found that 73% of the geomagnetic storms are identified with CMEs, and a solar flare associated with a CME is the major cause of 64% of the geomagnetic storms. The results are in satisfactory agreement with previous investigations. In this study, however, although the minimum interplanetary magnetic field Bz for 7 out of the 11 geomagnetic storms (Dst < -50 nT) is less than -10 nT, in no case does it continue for a long duration (> 3 hours).
Keywords: Solar Wind; Coronal Mass Ejection (CME); Solar Flare; Geomagnetic Storm; Interplanetary Magnetic Field (IMF)
I. Introduction

The study of geomagnetic storms is of particular concern for mankind [1]. Substantial southward distortions of the interplanetary magnetic field and impulsive increases in the solar wind are well associated with the onset of geomagnetic storms [2]. Coronal mass ejections (CMEs) have been identified as a prime causal link between solar activity and large geomagnetic storms [3]. Ejected solar material is propelled outward into interplanetary space, where it interacts with the background solar wind and distorts the IMF. This distortion, together with the internal magnetic structure of the CME itself, can produce large Bz deflections at Earth, with amplitudes depending on the CME's physical configuration, the CME launch position, the background solar wind structure and the position of the Earth. Bothmer and Schwenn [4] investigated the solar wind input conditions for major geomagnetic storms during 1966-1990; they found 41 of the 43 storms caused by transient shock-associated CMEs and 1 caused by a corotating interaction region (CIR), and intense geomagnetic storms caused by fast CMEs were found to be independent of the phase of the solar cycle. Certain manifestations of solar activity can generate large-scale disturbances that propagate through the interplanetary medium. These interplanetary disturbances (IPD), often preceded by shocks, modify both the solar wind (SW) and the interplanetary magnetic field parameters. Situations with southward changes of the IMF and impulsive increases of solar wind dynamic pressure are "geo-effective" and well associated with the onset of geomagnetic storms [5].
II. Data and Method

An overview of the 11 geomagnetic storm events with Dst < -50 nT that occurred in the interval January 2011 to December 2011 is given in Table 1, where, for each storm date, an attempt is made to identify the solar and interplanetary sources of the geomagnetic storm. The CME data were taken from the LASCO CME Catalog (http://cdaw.gsfc.nasa.gov/CME_list/). The geoeffectiveness is determined by the magnetic disturbance produced at the Earth, which is measured by the Disturbance Storm Time (Dst) index; here a geomagnetic storm means that, at peak time, the Dst index is ≤ -50 nT. The Dst index was obtained from http://omniweb.gsfc.nasa.gov, and the interplanetary shock (IP shock) data are taken from SOHO/CELIAS/MTOF (www.umtof.umd.edu). A minimal illustration of this storm-selection criterion is sketched below.
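The selection rule amounts to scanning the hourly Dst series for intervals below -50 nT and recording the peak (most negative) value in each; a minimal Python sketch with toy data (not the actual 2011 series) might read:

import numpy as np

def find_storms(dst, threshold=-50.0):
    # Return (hour index of minimum, minimum Dst) for every interval in
    # which the hourly Dst index stays below the storm threshold.
    dst = np.asarray(dst, dtype=float)
    below = dst < threshold
    storms, i = [], 0
    while i < len(dst):
        if below[i]:
            j = i
            while j < len(dst) and below[j]:
                j += 1
            k = i + int(np.argmin(dst[i:j]))     # hour of peak depression
            storms.append((k, dst[k]))
            i = j
        else:
            i += 1
    return storms

dst = [-10, -20, -55, -80, -60, -30, -5]         # toy series, one storm
print(find_storms(dst))                          # [(3, -80.0)]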

III. Observational Results

The January 2011 to December 2011 period was characterized by several large active regions on the solar disk, among them AR 1289, 1302, 1164, 1226 and 1261 and the complex structure of AR 1280 to 1286. These regions produced nearly all the major solar flares and CME events during their passage across the solar disk. Coronal mass ejections (CMEs), as a product and manifestation of solar activity, significantly influence processes in the interplanetary medium during their outward propagation; they also affect planetary bodies (magnetospheres) in the heliosphere [6]. According to Gosling [7], a CME in the solar wind is generally a large, three-dimensional structure, usually driving a shock


wave in the solar wind when it propagates faster than the ambient plasma. From Table 1, the total number of geomagnetic storms is 11, out of which 7 storms were associated with fast CMEs and only 1 with a slow-moving CME. Solar flares are the prime cause of 8 geomagnetic storms, and a solar flare associated with a CME is the major cause of 7 of the 11 (64%) geomagnetic storms. In this period, 7 out of the 11 storms were linked with coronal holes and high-speed solar wind streams. The maximum interplanetary magnetic field for all the geomagnetic storms is found to be > 10 nT and the minimum IMF Bz < -6 nT; in terms of the planetary index Kp*10 the maximum is ≥ 43, and the maximum solar wind velocity is ≥ 411 km/sec for all the geomagnetic storms.
IV. Discussion

The study of the origin of the major geomagnetic storms associated with the solar active period of January 2011 to December 2011 gives the opportunity to obtain some new information about the origin of geomagnetic storms. The CMEs that produced the major geomagnetic storms were associated with X-ray flares. From Table 1, it is seen that most of the magnetic storms (8/11) were caused by a CME propagating in a solar wind disturbed by previous CMEs that preceded it; out of these 8, 7 were associated with fast full-halo or partial-halo CMEs and only one with a slow-moving CME. Studies of the intensification of geomagnetic storms show that the acceleration of energetic protons can be produced by interacting CMEs [8, 9] or by CMEs propagating in a disturbed solar wind [10]. The present results are consistent with the work of Cane and Richardson [11], which showed that the fastest CMEs and the strongest magnetic storms tend to occur after the maximum of the solar cycle. Coronal holes are regions of abnormally low density and temperature in the solar corona [12]. Neupert and Pizzo [13] showed that some coronal holes are associated with high-speed wind streams and that the series of coronal holes observed from 1972 through 1973 from OSO 7 correlated strongly with recurrent geomagnetic variations. In this study, high-speed streams (HSS) from coronal holes on the earthward side are responsible for 7 of the 11 geomagnetic storms. Interplanetary events with a southward IMF component Bz < -10 nT and a duration longer than 3 hours have a one-to-one causal relationship with intense (Dst < -100 nT) geomagnetic storms [14]. Geomagnetic storms are caused by a combination of the Bz of injected magnetic cloud structures with the Bz generated by interplanetary dynamic interaction [15]. Tang et al. [16] dealt with the interplanetary and solar causes of 10 geomagnetic storms (Dst < -100 nT) that occurred in the interval August 16, 1978 to December 28, 1979; these events were documented and discussed by Gonzalez and Tsurutani [14] using Dst data together with interplanetary magnetic field and plasma data collected by the ISEE 3 satellite while it was upstream of the Earth in its halo orbit about the L1 libration point. Gonzalez and Tsurutani [14] claimed that a common interplanetary feature of these storm events was long-duration (> 3 hours), large and negative (< -10 nT) interplanetary magnetic field (IMF) Bz events. In this study, however, although the minimum interplanetary magnetic field Bz for 7 out of the 11 geomagnetic storms (Dst < -50 nT) is less than -10 nT, in no case does it continue for a long duration (> 3 hours); a sketch of this duration test is given below.
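The Gonzalez-Tsurutani condition reduces to a run-length test on the hourly Bz series; a minimal sketch (Python, with invented sample values) is:

def longest_southward_run(bz_hourly, threshold=-10.0):
    # Length, in hours, of the longest continuous run of IMF Bz
    # below the southward threshold (here -10 nT).
    longest = run = 0
    for bz in bz_hourly:
        run = run + 1 if bz < threshold else 0
        longest = max(longest, run)
    return longest

bz = [-4.0, -12.0, -11.0, -8.0, -13.0, -6.0]     # toy hourly values
print(longest_southward_run(bz))                 # 2, so the 3-hour criterion fails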

V. References

[1] W. D. Gonzalez, J. A. Joselyn, Y. Kamide, H. W. Kroehl, G. Rostoker, B. T. Tsurutani and V. M. Vasyliunas, "What is a geomagnetic storm?", J. Geophys. Res., 99(A4), 5771-5792, 1994.
[2] D. Odstrcil and V. J. Pizzo, "Distortion of the interplanetary magnetic field by three-dimensional propagation of coronal mass ejections in a structured solar wind," J. Geophys. Res., 104(A12), 28225-28239, 1999.
[3] J. T. Gosling, D. J. McComas, J. L. Phillips and S. J. Bame, "Geomagnetic activity associated with Earth passage of interplanetary shock disturbances and coronal mass ejections," J. Geophys. Res., 96, 7831-7839, 1991.
[4] V. Bothmer and R. Schwenn, "The interplanetary and solar causes of major geomagnetic storms," J. Geomagn. Geoelectr., 47, 1127-1132, 1995.
[5] B. T. Tsurutani, W. D. Gonzalez, F. Tang, S. I. Akasofu and E. J. Smith, "Origin of interplanetary southward magnetic fields responsible for major magnetic storms near solar maximum (1978-1979)," J. Geophys. Res., 93, 8519, 1988.
[6] J. T. Gosling, "The solar flare myth," J. Geophys. Res., 98, 18937, 1993.
[7] J. T. Gosling, "Coronal mass ejections and magnetic flux ropes in interplanetary space," Washington DC American Geophysical Union Geophysical Monograph Series, 58, 343, 1990.
[8] W. D. Gonzalez, E. Echer, A. L. Clua-Gonzalez and B. T. Tsurutani, "Interplanetary origin of intense geomagnetic storms (Dst < -100 nT) during solar cycle 23," Geophys. Res. Lett., 34, 2007.
[9] N. Gopalswamy, A. Lara, S. Yashiro and R. A. Howard, "Coronal mass ejection and solar polarity reversal," ApJ, 598, L63, 2003.
[10] I. G. Richardson, G. R. Lawrence, D. K. Haggerty, T. A. Kucera and A. Szabo, "Are CME 'interactions' really important for accelerating major solar energetic particle events?", Geophys. Res. Lett., 30(12), 8014, 2003.
[11] H. V. Cane and I. G. Richardson, "Interplanetary coronal mass ejections in the near-Earth solar wind during 1996-2002," J. Geophys. Res., 108, 1156, 2003.
[12] J. B. Zirker, "Coronal holes and high-speed wind streams," Rev. Geophys. Space Phys., 15(3), 257-269, 1977.
[13] W. M. Neupert and V. Pizzo, "Solar coronal holes as sources of recurrent geomagnetic disturbances," J. Geophys. Res., 79, 3701, 1974.
[14] W. D. Gonzalez and B. T. Tsurutani, "Criteria of interplanetary parameters causing intense magnetic storms (Dst < -100 nT)," Planet. Space Sci., 35, 1101, 1987.
[15] B. T. Tsurutani, E. Echer and W. D. Gonzalez, "The solar and interplanetary causes of the recent minimum in geomagnetic activity (MGA23): a combination of midlatitude small coronal holes, low IMF Bz variances, low solar wind speeds and low solar magnetic fields," Ann. Geophys., 29, 839-849, 2011.
[16] F. Tang, B. T. Tsurutani, W. D. Gonzalez, E. J. Smith and S. I. Akasofu, "Solar sources of interplanetary southward Bz events responsible for major magnetic storms (1978-1979)," J. Geophys. Res., 94, 3535, 1989.


Table 1: The major geomagnetic storms and associated parameters that occurred in January-December 2011

S.No | Date       | Dst min (nT) | B max (nT)     | Bz min (nT)     | V max (km/s)  | Kp*10 max    | Ap max       | CME              | Interplanetary shock | Solar flare | Coronal hole
1    | 04-02-2011 | -56          | 21.2           | -15.6           | 647 (5 Feb.)  | 57           | 67           | No               | No shock             | No          | High-speed stream from coronal hole
2    | 01-03-2011 | -61          | 14.1           | -11.1           | 687 (2 Mar.)  | 53           | 56           | No               | No shock             | No          | Coronal hole
3    | 11-03-2011 | -80          | 14.1           | -8.9            | 561 (12 Mar.) | 57 (12 Mar.) | 67 (12 Mar.) | Halo CME         | IP shock             | X1.5 class  | Coronal hole
4    | 28-05-2011 | -80          | 13             | -10.7           | 752 (29 May)  | 63           | 94           | No               | No shock             | M1 class    | Coronal hole
5    | 05-07-2011 | -57          | 10.1 (4 July)  | -8.7 (4 July)   | 411 (4 July)  | 43           | 32           | CME              | No shock             | C class     | Coronal hole
6    | 06-08-2011 | -113         | 29.4 (5 Aug.)  | -18.7 (5 Aug.)  | 611 (5 Aug.)  | 77           | 179 (5 Aug.) | Halo CME         | IP shock             | M1 class    | Coronal hole
7    | 10-09-2011 | -64          | 19.3 (9 Sept.) | -14.2 (9 Sept.) | 560 (9 Sept.) | 57           | 67 (9 Sept.) | Partial halo CME | IP shock             | X1.8 class  | Coronal hole
8    | 17-09-2011 | -63          | 13.6           | -9              | 549           | 53           | 56           | Partial halo CME | IP shock             | No          | No
9    | 26-09-2011 | -103         | 34.2           | -16.4           | 704           | 80           | 94           | Halo CME         | IP shock             | X1.9 class  | No
10   | 25-10-2011 | -137         | 24             | -11.6 (24 Oct.) | 534 (24 Oct.) | 73           | 154          | CME              | IP shock             | M1.3 class  | No
11   | 01-11-2011 | -61          | 13             | -6.2 (31 Oct.)  | 436           | 47           | 39           | Slow-moving CME  | IP shock             | M1 class    | No


Service Innovation Discovery for Enterprise Business Intelligence Applications (SIDE-BIA)
1Prof. Dr. P. K. Srimani, F.N.A.Sc., 2Prof. Rajasekharaiah K. M.
1Former Chairman, Dept. of Computer Science & Maths, Bangalore University; Director, R & D, Bangalore University, Bangalore, INDIA
2Professor & HOD, Department of Computer Science and Engineering, Jnana Vikas Institute of Technology, Bangalore-Mysore Highway, Bidadi, Bangalore, Visvesvaraya Technological University (VTU), Belgaum, Karnataka, INDIA

Abstract: Enterprises launch many business services for their clients. The business process must be effective in meeting the clients' expectations for the enterprise to succeed in a competitive business area. The job of business architects in the enterprise is to design a service process flow and provide it to managers to implement. When the service is ready, it is open to customer access; customers use the service process flow and get their requirements satisfied. Being the end users of the service flow, customers have many queries about and suggestions for it. Feedback from the customers must be fed back to redesign the service flow or to identify new ways of handling the service. Not many systems are present in the market for this kind of service innovation requirement from the business analyst. In this paper, we explore this problem and propose a mechanism for service innovation discovery in the domain of service redesign for the enterprise.
Keywords: Service Innovation, launching, customer feedback, analyst, reasoning, prototyping, redesign, repository

I. INTRODUCTION
An enterprise launches new services to remain competitive in its business vertical. The success of an organization depends on the effectiveness of a service in meeting the customers' requirements. Service development moves through a set of planned stages: establishment of clear objectives, idea generation, concept development, service design, prototyping, service launch and customer feedback. Service blueprint techniques are used for service development in most enterprises. Once a service is in the market serving customers, much feedback on the service, together with competitive service information, arrives from multiple sources. Based on this information the service has to be redesigned, new services identified and the offering made effective. With information arriving from a wide variety of sources both inside and outside an organization, the data for service redesign can be very large, and it is difficult for an enterprise business analyst to go through all the information and redesign the services. A further challenge is filtering out irrelevant information while staying aligned with the organizational goals and resource constraints; prioritizing the information is also needed for service redesign. Business analysts also want to know the weak points of the current services and the areas for improvement. This necessitates automated approaches for information processing and for interfacing with the existing Business Intelligence (BI) applications in organizations. In this paper, we study the current approaches for service discovery from the vast information collected from different sources, identify the problems in current solutions, and propose an effective semi-automated mechanism for service discovery and redesign of services. We implement the solution and measure its effectiveness in terms of the quality of the redesign suggestions and the quality of the pain points and areas identified for service redesign. Section 2 gives the related work and the problems in the current work; Section 3 discusses the solution; Section 4 gives the performance analysis of the proposed solution; Section 5 details the conclusion and future work.
II. RELATED WORK
Ghose et al. [1] proposed the Rapid Business Process Discovery (R-BPD) tool, which can query heterogeneous information resources (e.g. corporate documentation, web content, code, etc.) and rapidly construct proto-models to be incrementally adjusted to correctness by an analyst. The R-BPD tool can potentially extract a large number of (sometimes small) process proto-models from an enterprise repository, some of which might in fact be alternative descriptions of the same process. But R-BPD cannot


identify the problems for service redesign, nor can it identify the areas of improvement in the current service flow. In [2], the authors proposed an approach for constructing business processes. Their solution has a three-layered language for representing business processes and uses model-checking techniques for verifying the constructed models against business requirements that are specified a priori. They presumed the correctness of the requirements provided and evaluated correctness within and across the interrelationships between the models constructed in this variety of languages. In [3], the authors proposed an approach for modeling business objectives using a domain ontology, detailing tool support for the task of correlating business objectives with service offerings. Their work focused mainly on service re-alignment, whereas the focus of our work is redesigning the services. In [4], the authors proposed the GoalBPM methodology for relating business process models (modeled using BPMN) to high-level stakeholder goals (modeled using KAOS). Changes can also be made to the goal model and tested against the current business process model to identify behaviors that are invalidated; invalid behavior may be explicitly defined, supporting further redesign to align the changed processes with organizational goals. But this approach does not use customer feedback on the process to verify the satisfaction level. In [5], workflow mining from event logs was discussed for service innovation discovery, but the approach supports neither handling inconsistencies in the event logs nor service refinement of already existing workflows. In [6], automated tools for auditing business processes for compliance requirements were discussed; compliance, however, is not the goal of our paper: we focus on improving the business process to increase customer satisfaction, remove wastage and move towards greater profitability. In [7], the authors proposed a mechanism for managing inconsistencies in business processes using semantic annotation, but the inconsistencies were mainly found by verifying the interaction between processes, and no external information was taken into account to verify them; this solution cannot improve the service process. In [8], models for the redesign of organizational work were proposed. This work closely resembles ours, but the data acquisition and processing techniques for service redesign were not considered. In [9], the ProcessSEER framework was proposed to mine process semantics; this service design toolkit asks many questions about service flow interaction and guides the business analyst in designing an effective service, but it does not support mining customer feedback information and using it for service redesign. In [10], the ServAlign framework for automated analysis of the alignment of service designs with a strategy model was proposed. Service designs are specified in terms of service post-conditions (in the same underlying formal language that one might use in the context of semantic effect annotation of process models) and QoS guarantees (typically in the form of linear inequalities involving QoS measures); the alignment machinery involves a novel combination of goal satisfaction analysis, plan conformance analysis and optimization. But the work does not consider aligning services with the valuable feedback information extracted from the information sources.
III. PROPOSED SOLUTION
The system architecture of the proposed system is given in Fig. 1.

Fig. 1: System architecture of the proposed system (pipeline: service description -> service model construction -> reasoning <- service feedback mining, producing the areas for improvement of the service)

The system has three important components:
1. Service feedback mining: this component crawls for service feedback information from various sources in and out of the organization, filters the relevant service information and provides it to the reasoning component.
2. Service model construction: the service description, expressed as an SADT flow in enterprises, is not suitable for reasoning; it has to be converted to a suitable form.


3. Reasoning: this component executes reasoning over the service model and the mined service feedback information. The result of the reasoning process is the set of areas of refinement in the service flow, so that the service can be redesigned.
A. Service Feedback Mining
Service feedback information comes from customers, critics and competitors, and is available on the web or in documents. This information must be used to extract the knowledge that will drive service redesign. For this, special crawlers must be used to mine the information from web blogs and other databases; Hadoop big-data techniques are used to process such volumes of service-related data. The information extracted from the sources is preprocessed to establish the relevance of concepts to the service (Figures 2 and 3).
B. Service Model Construction
The service must be modeled in order to apply reasoning to the service flow. We use the strategic rationale model, a graph-based representation method, in which the nodes are goals, tasks and resources, linked to each other through means-end and task-decomposition relationships. An example of a strategic rationale representation for a banking system service is shown in Figure 5.
C. Reasoning
The reasoning component learns decision trees from the strategic rationale model. The business analyst defines the goals for the service, and based on these goals the decision trees are constructed from the strategic rationale model. The items in the decision trees are evaluated against the mined data, from which we check the validity of the decisions. For example, say that for processing bank loans the business analyst has set the service flow to reject any application with a CRIL score below 700. But from the transaction reports collected from the sources, it is evident that applications whose overdraft history has been satisfactory for the last two years have CRIL scores of 640 to 680. The reasoning module checks whether that particular decision in the service flow is profitable for the organization and alerts the business analyst to re-examine the CRIL-score rule, because many applications have been rejected. These problem areas are identified and reported to the business analyst so that the service flow can be changed and the service redesigned (see Figures 2 and 3). The computations are performed using the Bex tool.
IV. RESULT
We conducted the performance analysis of our proposed solution with respect to three parameters:
1. Coverage: the percentage of service suggestions that match those of a business expert.
2. Accuracy: different service suggestions are given different scores by the business expert; the score for all the suggestions given by the Bex tool is measured and converted to a percentile.
3. F1 measure: measured in terms of precision and recall as F1 = 2 x Precision x Recall / (Precision + Recall), illustrated in the short sketch below.
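In code form, the F1 computation above is simply the harmonic mean of precision and recall (a minimal Python illustration with made-up values):

def f1_score(precision, recall):
    # F1 = 2PR / (P + R): dominated by the weaker of the two quantities
    return 2.0 * precision * recall / (precision + recall)

print(f1_score(0.8, 0.6))   # 0.6857...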

We conducted our performance analysis on the German credit application dataset taken from the UCI machine learning repository [http://archive.ics.uci.edu/ml/datasets/Statlog+(German+Credit+Data)]. Based on 20 attributes of the applicant, the decision to provide a loan or not is made. We formed the decision tree from the dataset and created the service model. Two goals were provided to test the service redesign capability:
1. Increased user base: more credit card applications must be accepted without major change in the service.
2. Decreased risk: riskier credit card applications must be rejected without affecting the service logic in a major way.
A toy sketch of the rule-validation step described in Section III.C is given below.
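The following sketch illustrates the Section III.C rule check; the CRIL-score rule, the mined records and the alert threshold are all hypothetical values invented for illustration:

# Current service-flow rule: reject any application scoring below 700
RULE_THRESHOLD = 700

# Mined records: (CRIL score, overdraft history satisfactory?, repaid?)
mined = [
    (660, True, True), (655, True, True), (648, True, True),
    (690, False, False), (710, True, True), (640, True, True),
]

# Applications the rule rejects although the mined evidence suggests they
# would have been profitable (satisfactory history and loan repaid)
flagged = [r for r in mined if r[0] < RULE_THRESHOLD and r[1] and r[2]]

if len(flagged) / len(mined) > 0.3:              # illustrative alert level
    print(f"Alert analyst: {len(flagged)} rejected-but-profitable cases; "
          f"re-examine the {RULE_THRESHOLD} score cut-off")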

Fig. 2: Coverage as a function of the number of mined data records for change


We varied the number of records mined for change and measured the coverage. Coverage is overall around 70 to 80%, and the quality score is around 90%, demonstrating the effectiveness of our method relative to the score of a human expert.

Fig. 3: Accuracy as a function of the number of mined data records for change
The F1 measure is taken for different dataset sizes for change, and from this we see that as the volume of data for change increases, the system's precision is good compared with its recall (see Fig. 4).

Fig. 4: F1 measure as a function of the number of mined data records for change
V. CONCLUSION
In this paper, we have proposed a mechanism for service redesign assistance by mining information from the web and other sources in the organization. Our mechanism is able to effectively identify the areas of improvement in the service flow while meeting the organizational goals and constraints.
REFERENCES

[1] A. K. Ghose, G. Koliadis and A. Cheung, "Rapid business process discovery (R-BPD)," in Proc. 2007 ER Conference (ER-2007), Springer LNCS, 2007.
[2] K. Xu, L. Lianchen and C. Wu, "A three-layered method for business process discovery and its application in manufacturing industry," Computers in Industry, 58:265-278, 2007.
[3] Lam-Son Lê, Aditya K. Ghose, Muralee Krishnan, Krishnajith M. Krishnankunju and Konstantin Hoesch-Klohe, "Correlating business objectives with services: an ontology-driven approach," 2011 IEEE International Conference on Services Computing.
[4] G. Koliadis and A. K. Ghose, "Relating business process models to goal-oriented requirements models in KAOS," in Proc. 2006 Pacific-Rim Knowledge Acquisition Workshop (PKAW-2006), 2006.
[5] W. M. P. van der Aalst, T. Weijters and L. Maruster, "Workflow mining: discovering process models from event logs," IEEE Trans. Knowl. Data Eng., 16(9):1128-1142, 2004.
[6] A. K. Ghose and G. Koliadis, "Auditing business process compliance," in Proc. International Conference on Service-Oriented Computing (ICSOC-07), 2007.
[7] G. Koliadis and A. K. Ghose, "Semantic verification of business processes in inter-operation," in Proc. 2007 IEEE Services Computing Conference, IEEE Computer Society Press, 2007.


[8] E. Yu, "Models for supporting the redesign of organizational work," in Proc. Conf. on Organizational Computing Systems (COOCS '95), pp. 225-236, Milpitas, CA, USA, August 13-16, 1995.
[8] P. S. Kaliappan, "Designing and verifying communication protocols using model driven architecture and Spin model checker," IEEE International Conference on Computer Science and Software Engineering (CSSE), Dec. 2008.
[9] K. Hinge, A. Ghose and G. Koliadis, "ProcessSEER: a tool for semantic effect annotation of business process models," in Enterprise Distributed Object Computing Conference (EDOC '09), IEEE International, pp. 54-63, 2009.
[10] A. K. Ghose, E. Morrison and L.-S. Le, "Correlating services with business objectives in the ServAlign framework," in Proc. CAISE-2013 Workshop on Business IT Alignment (BUSITAL-13), Springer LNBIP, 2013.

ACKNOWLEDGMENTS
One of the authors, Mr. Rajasekharaiah K. M., thanks Ms. Chhaya Dule, Asst. Prof., Jyothy Institute of Technology, Bangalore, for her valuable suggestions.
About the Author: Mr. Rajasekharaiah K. M. is presently working as Professor & HOD, Department of Computer Science & Engineering, Jnana Vikas Institute of Technology, Bangalore. He has done an M.Tech. in Computer Science & Engg., an M.Sc. in Information Technology, an M.Phil. in Computer Science and a PGDIT from reputed universities in India. He has 30+ years of total experience, including 16 years of industrial experience. He is a life fellow member of the Indian Society for Technical Education (ISTE), New Delhi. He is presently pursuing a doctoral degree in Computer Science & Engineering in the domain of data mining & warehousing, and has research publications in reputed national and international journals. His other areas of interest are DBMS, Software Engg., Software Architecture, Computer Networks, Programming Languages, Data Structures and Mobile Computing. He is also a resource scholar for other engineering colleges/universities.

Figure 5: Strategic rationale representation of the banking system service


An Experimental Study of Sn-Doped ZnO Using the Sol-Gel Approach
Anand1, Rajesh Malik2, Rajesh Khanna3
1Student, N.C. College of Engineering, Israna, Panipat, Haryana, India
2Associate Professor, N.C. College of Engineering, Israna, Panipat, Haryana, India
3Professor, Thapar University, Patiala, Punjab, India
Abstract: ZnO-based thin-film gas sensors are typically fabricated onto Si wafers, and their electrical response is affected by various material and atmospheric parameters. In this work, a study of Sn-doped ZnO is performed, with deposition onto an alkali-free glass substrate. The deposition approach used here is the sol-gel method. The work includes the study and experimentation of Sn-doped ZnO using the sol-gel approach under high temperature.
Keywords: ZnO, Sn-doped, sol-gel, deposition

I. Introduction
Thin films of transparent conducting oxides have found importance in different areas, including gas sensors, display panels, photovoltaic devices and detectors. Among these, zinc oxide (ZnO) provides some of the most effective and interesting characteristics across different applications. ZnO crystallizes in a hexagonal structure with a wide band gap of 3.37 eV at room temperature. The band gap of ZnO is affected by the material properties as well as the atmospheric conditions; the properties that affect it include the electrical conductivity, optical absorption and refractive index. ZnO also possesses a large exciton binding energy, which matters for optical devices such as LEDs and UV lasers, and it is considered one of the most promising candidates for such UV and photodetector devices [1][2]; the short calculation below shows that its absorption edge indeed lies in the near-UV. A structural characteristic of ZnO is its lack of a center of symmetry, which, combined with electromechanical coupling, results in piezoelectric and pyroelectric properties; ZnO has accordingly been analyzed for mechanical actuators and piezoelectric sensors [3]. The basic structure of ZnO is shown in Figure 1.
Figure 1: Structure of ZnO
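As a quick check on the UV claim (standard photon-energy arithmetic, not a computation from the paper): a 3.37 eV band gap corresponds to an absorption edge of about 368 nm, i.e. in the near-UV.

# Photon energy E = hc / wavelength, with hc approximately 1239.84 eV nm
E_g = 3.37                        # eV, ZnO band gap at room temperature
edge_nm = 1239.84 / E_g           # absorption-edge wavelength
print(f"absorption edge ~ {edge_nm:.0f} nm (near-UV)")   # ~368 nm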

To use ZnO thin films in different applications, the extraction of desired and reproducible properties is required. This involves deposition under different parameters so that different kinds of properties can be extracted, including the optical and electrical parametric processing properties. To process ZnO thin films, different approaches have been adopted, including electrodeposition, pulsed deposition, sol-gel, spray pyrolysis, etc. These approaches are all based on the oxidation of Zn in metallic films; they perform the parametric analysis and make the structural changes needed to attain the desired electrical and optical properties [4][5]. To provide effective experimentation, the exploration of different


properties of ZnO is required. ZnO is a relatively soft material in terms of hardness and penetration resistance [4][5]. Some of the properties that are experimentally useful are shown in Table 1.

Table 1: Mechanical properties of ZnO

Parameter                       | Experimental | Theoretical
Bulk Young's modulus, E (GPa)   | 111.2 ± 4.7a | -
Bulk hardness (GPa)             | 5.0 ± 0.1a   | -
Epitaxial Young's modulus (GPa) | 310 ± 40b    | -
Epitaxial hardness (GPa)        | 5.75 ± 0.8b  | -
Bulk modulus, B (GPa)           | 142.4c       | 156.8d
dB/dP                           | 3.6c         | 3.6d
e33 (C m-2)                     | 0.96e        | 1.19f
e31 (C m-2)                     | -0.62e       | -0.55f

The solution-based process for ZnO is quite cheap and simple and can be applied over large areas. Solution-based thin-film coating has been adopted as an alternative to vacuum deposition techniques, and the use of a solution process for oxide semiconductors can improve the output of microelectronic devices in different ways. One such approach, discussed in this paper, is the sol-gel method, one of the most effective and common solution processes adopted for polycrystalline oxides. ZnO-based films are used as active channel layers in thin-film transistors (TFTs); controlling the carrier density of the active layer is the central challenge for ZnO-based TFTs, since an active layer with too high a carrier density will conduct regardless of the gate voltage [6][7]. In this section, a basic introduction to thin films has been given along with an exploration of ZnO, its characteristics and the associated methodologies. Section II reviews the literature related to ZnO and the adopted methodologies; Section III describes the experimentation using the sol-gel approach; Section IV presents the conclusions derived from the work.
II. Existing Work
In 2009, Abdur Rashid presented a study and comparative analysis of the effects of zinc oxide barriers on electrical tree growth in solid insulation, studying the effect of zinc oxide on the reinforcement and suppression of tree initiation and tree growth. Zinc oxide was used as a filler in the polymer insulation so as to show a linear effect on the conductivity, and a clear polyester resin was used as a barrier. The work was performed at a high voltage of 28 kV in burst form, with the growth and nature of the trees monitored at regular intervals; the lifetime of the specimen filled with ZnO as a barrier was examined [1]. Another work, on the characterization of Sn-doped ZnO thin films, was reported by Krisana Chongsri in 2012, using a sol-gel spin-coating method based on zinc acetate dihydrate ((CH3COO)2Zn·2H2O), tin(IV) chloride pentahydrate (SnCl4·5H2O), 2-methoxyethanol (C3H8O2) and diethanolamine ((HOCH2CH2)2NH). Solutions with different Sn doping contents in ZnO were prepared and annealed at 550 °C for 4 h in air; the structural properties of the films were characterized by XRD and SEM. Increasing the Sn doping content in the ZnO films was found to enhance the transparency of the films, and the photoresponse of the device showed significant improvement with increasing Sn doping content [2]. In 2011, H. Abdullah reported on ZnO:Sn deposition by the sol-gel method, examining the structural and morphological effects of annealing. The sol was prepared from zinc acetate dihydrate with a tin stabilizer, and the surfaces were investigated by scanning electron microscopy. The films showed a hexagonal wurtzite structure, analyzed as a function of annealing temperature under SEM observation; the optical properties were analyzed by photoluminescence (PL) and UV-visible spectrometry, with the annealing temperature used to identify the formation of defects related to non-radiative recombination centers [3]. In 2010, Kuan Jen Chen studied the structural, optical and electrical properties of sol-gel Sn-doped ZnO thin films, with the surface morphology of the SZO films showing large crystallization. The tin dopant was examined with respect to the surface roughness of the film, and defects in the pore structure were analyzed; higher crystallization temperatures gave better crystallization and fewer defects. XPS analysis indicated the presence of O-Sn4+, and in the PL spectra luminescence of the Zn2SnO4 phase was observed [4]. In 2012, A. Ahmadi Daryakenari reported the preparation and ethanol-sensing properties of ZnO nanoparticles obtained via a novel sol-gel method, studying the chemical reaction between zinc acetate and methanol under ambient conditions. The properties of the product were characterized by X-ray diffraction, scanning electron microscopy and Fourier-transform infrared spectra, and gas-sensing tests were performed with the test gas [5]. Another work on the properties of zinc oxide was done by Tao Jiang in 2006, who reported a humidity sensor based on zinc-oxide-modified porous silicon, with the porous silicon prepared by an electrochemical etching process. The author


also used a sol-gel precursor analysis of zinc oxide applied to the porous silicon substrate and annealed at 450 °C, performing structural analysis of the silicon and zinc oxide at different humidity levels; the analysis showed increased sensitivity and a shorter response time relative to the humidity constraint [6]. Another study of zinc oxide properties under a humidity vector was presented by Tao Jiang in 2006, on the humidity-sensing behavior of a zinc-oxide-modified porous silicon composite structure. The porous silicon substrate was prepared by an electrochemical etching process, and a sol-gel approach was used to obtain a uniform zinc oxide film on the porous silicon substrate; modification of the porous silicon by sol-gel zinc oxide increased the sensitivity and shortened the response time to humidity. The concentration of zinc oxide in the precursors was determined after the sensing tests, and the composite structure was analyzed for its potential in developing a high-performance humidity sensor [7]. A work on the roughness effect on the third-order nonlinear optical susceptibility of rare-earth-doped films was presented by M. Addou in 2007, investigating Sn-ZnO by the third-harmonic generation method using nanosecond laser radiation; the dependence of the third-order susceptibility and the transmission characteristics on the thin-film roughness was measured, and the deposited films were analyzed by X-ray diffraction and atomic force microscopy [8]. The influence of nano zinc oxide treated by HMDS in humidity detection was analyzed by Jiayun Lu in 2009: nano zinc oxide on a quartz crystal was used for relative humidity detection, and the results showed good humidity sensitivity and reproducibility, with the sensitivity of HMDS-treated nano zinc oxide observed at low-value treatment [9]. In 2009, Jiraporn Klanwan presented a work on a gas-phase reaction for zinc oxide and carbon nanoparticle composites, examining the effect of nitrogen flow on the range of sizes and the morphology through scanning electron microscopy, and analyzing the controlled increase of gas flow for its morphological effect [10]. Another work, on the selective patterning of zinc oxide precursor solution by inkjet printing, was given by Yen Nan Liang in 2009, presenting a spatial analysis of inkjet printing of a ZnO sol-gel precursor and studying the different factors that affect the inkjet printing and the patterns, including the ink formulation and the process parameters [11].
III. Experimental Analysis: A Study
TFTs are defined with spin-coated thin films with Sn doping for different active channel layers; the transparent oxide semiconductor of Sn-doped ZnO thin films is prepared by a solution process rather than a vacuum deposition technique. The ionic radius of Sn4+ is smaller than that of Zn2+, so substitution of the ion is possible. In this work, sol-gel is used to prepare transparent oxide semiconductors of Sn-doped ZnO thin films with different Sn concentrations, and their properties are analyzed: microstructure, crystallinity and optical properties.
The structural representation under different sensing parameters is shown in Figure 2.
Figure 2: Top view of ZnO structure

As shown in the figure, the main influence on the ZnO response is the temperature; the structure therefore includes a heater and a temperature sensor as basic components. Sn-doped ZnO is discussed in this section. These thin films are prepared on glass substrates using the sol-gel method, from zinc acetate dihydrate and tin tetrachloride. Both are dissolved in 2-methoxyethanol, and monoethanolamine is then added to the solution to achieve stability. The metal-ion concentration of the Sn-doped ZnO solution was controlled at 0.35 M; an illustrative batch calculation is sketched below.
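For orientation only, the arithmetic below shows what a 0.35 M metal-ion batch could involve; the batch volume, the Sn fraction and the choice of SnCl4·5H2O as the tin salt are our assumptions, not values given in the paper:

V_l = 0.050                 # assumed batch volume: 50 mL of 2-methoxyethanol
c = 0.35                    # mol/L, total metal-ion concentration (from paper)
x_sn = 0.02                 # assumed Sn/(Zn+Sn) doping fraction

M_ZN_ACETATE = 219.51       # g/mol, Zn(CH3COO)2·2H2O
M_SNCL4_5H2O = 350.60       # g/mol, SnCl4·5H2O (assumed hydrate)

n_total = c * V_l           # total moles of metal ions
m_zn = (1 - x_sn) * n_total * M_ZN_ACETATE   # ~3.76 g zinc acetate dihydrate
m_sn = x_sn * n_total * M_SNCL4_5H2O         # ~0.12 g tin salt
print(f"{m_zn:.2f} g zinc acetate dihydrate, {m_sn:.3f} g tin chloride")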


The Sn/Zn ratios were varied, and the complex solution was kept until transparent and homogeneous. The doped ZnO film was then coated onto alkali-free glass by spin coating at a high speed of 1000 rpm for 30 seconds. After this, the coated film was heated at around 300 °C for 10 minutes; this step, called pre-heating, is performed just after coating. The coating procedure was performed thrice so that an effective coating was obtained, and finally the coating was annealed in air at 500 °C for about 1 hour. In the next stage, the crystallinity of the annealed Sn-doped ZnO thin films was determined by glancing-angle X-ray diffraction (MAC Science diffractometer), and the surface morphology and microstructure were analyzed. Finally, the film was analyzed on the rough surface at different levels, and the resistivity was measured at room temperature.
IV. Conclusion
In this paper, a study of Sn-doped ZnO has been performed, exploring the work under different parameters. The work also includes the experimentation of Sn-doped ZnO using the sol-gel method, described as the deposition of films under high temperature.

V. References

[1] Abdur Rashid, "To study and compare in depth the inside view of major effects of zinc oxide barriers on electrical tree growth in solid insulation," IEEE, 2009.
[2] Krisana Chongsri, "Characterization and photoresponse properties of Sn-doped ZnO thin films," 10th Eco-Energy and Materials Science and Engineering (EMSES 2012).
[3] H. Abdullah, "ZnO:Sn deposition by sol-gel method: effect of annealing on the structural, morphology and optical properties."
[4] Kuan Jen Chen, "Surface characteristics, optical and electrical properties of sol-gel synthesized Sn-doped ZnO thin film," Materials Transactions, 51(7), 2010.
[5] A. Ahmadi Daryakenari, "Preparation and ethanol sensing properties of ZnO nanoparticles via a novel sol-gel method."
[6] Tao Jiang, "A zinc oxide modified porous silicon humidity sensor," Proc. 2006 IEEE International Conference on Information Acquisition, 2006.
[7] Tao Jiang, "Study of humidity properties of zinc oxide modified porous silicon," IEEE SENSORS 2006.
[8] M. Addou, "Effect of roughness on third-order nonlinear-optical susceptibility of rare earth doped zinc oxide thin films," ICTON-MW '07, 2007.
[9] Jiayun Lu, "Influence of nano zinc oxide treated by HMDS in humidity detection," IEEE, 2009.
[10] Jiraporn Klanwan, "A single-step gas-phase reaction for synthesizing zinc oxide and carbon nanoparticle composite," 2009.
[11] Yen Nan Liang, "Spatially selective patterning of zinc oxide precursor solution by inkjet printing," 2009 11th Electronics Packaging Technology Conference, 2009.


A Study of Tin Oxide Structure and Deposition under the Sol-Gel Approach
Suraksha1, Rajesh Malik2, Rajesh Khanna3
1Student, N.C. College of Engineering, Israna, Panipat, Haryana, India
2Associate Professor, N.C. College of Engineering, Israna, Panipat, Haryana, India
3Professor, Thapar University, Patiala, Punjab, India
Abstract: ZnO has high importance in different application areas because of its semiconducting properties. A property-based analysis of ZnO is presented here in terms of both structural and parametric analysis. The structure of ZnO and its characteristics are under the influence of temperature and pressure parameters. In this paper, the characteristic optimization of ZnO thin films is defined using the sol-gel approach.
Keywords: ZnO thin film, sol-gel, pressure, temperature
Introduction
With the increasing demand for semiconductor devices, new evaluations have been performed in which materials processing is combined with technology and electronics. A number of materials are available among the semiconductor families; one such material is zinc oxide. Zinc oxide is a semiconductor material that satisfies most of the properties required of such a material, including higher electron mobility, a larger band gap and a high breakdown field strength. Because of these material and environmental characteristics, it provides extensive support for several different applications and devices, such as sensor devices and UV electronics. Zinc oxide also has a role in polycrystalline form, in which it is used in a vast range of applications, for example medical treatments. Later work on optimization under different growth parameters has been defined to improve the quality.
A. Structure of ZnO
Crystalline ZnO is present in two main forms: cubic zinc blende and hexagonal wurtzite. The hexagonal cell structure is shown in Figure 1. These structures are optimized under different parameters, mainly temperature and pressure. In the hexagonal form, ZnO is a close-packed lattice in which two interpenetrating sublattices of Zn2+ and O2- exist. Different forms of ZnO are described under the periodic linear combination of atomic orbitals theory. The structure of ZnO can also be described in terms of planes derived alternately from these two ions; the opposite charges on the ions give positively and negatively charged polar surfaces. The structural definition also serves defect avoidance: excess atoms tend to act as donor interstitials.
Figure 1: Cell Structure of ZnO


B. ZnO characterization using sol-gel
Zinc oxide characterization can be performed with different approaches, including chemical bath, electrochemical and sol-gel spin techniques. Of these, sol-gel is one of the most promising, giving a simplified fabrication route for ZnO layers and removing the need for vapor-phase deposition equipment from the process model. Sol-gel has been adopted as an inexpensive alternative for fabricating ZnO thin films: the sol-gel fabrication procedure makes the construction of ZnO thin films comparatively simple and low-cost. Different coating techniques are used within sol-gel itself so that the growth of the ZnO film can be optimized, and the growth of ZnO films can be estimated simply and extensively. The sol-gel process correlates the ZnO properties with different parameters, which may be material-related or environmental; the main parameters analyzed under the sol-gel process are the temperature and the pressure. This allows an effective analysis for the optimization of the structural and optical properties of sol-gel spin-coated films. The present paper is an experimental study of ZnO, its structure and its characterization approaches. In this section, a detailed view of the ZnO structure and its characterization under different parameters has been given. Section II reviews the work already done in the area of ZnO parametric analysis and its optimization approaches; Section III performs an experimental investigation of the sol-gel approach for the characterization of ZnO films; Section IV discusses the conclusions obtained from the work.
Literature Study
A lot of work has already been done on metal oxide gas sensor analysis and the exploration of its characteristics, by different researchers in different ways. Here, the work done by earlier authors on the characteristic exploration of zinc oxide thin films using the sol-gel technique is reviewed. N. Barsan presented a generalized study of metal oxide gas sensors: a review of different experimental techniques for gas sensors based on semiconducting metal oxides, with an assessment of the subsequent modelling of gas sensors, covering phenomenological and spectroscopic approaches under realistic environments so that the shortcomings and achievements of different experiments could be identified [1]. Another work on thin-film gas sensors was done by G. Sberveglieri, who presented a critical review of the materials used in gas sensors, categorizing them into two classes: materials sensing oxygen through surface conductance, and materials that detect oxidizing and reducing gases in air. Property-based analyses of different sensor materials such as tin oxide were carried out, and an approach for thin-film growth called rheotaxial growth and sensor oxidation was defined, producing mixed-oxide thin films with high surface area and nano-sized crystals. The main objective of the work was to present the valid utilization of thin-film materials under different measurement techniques and parameters [2]. Another analytical work on the effect of tin oxide and zinc oxide was presented by S. Hegedus, who studied material properties and their effects in silicon solar cells, analyzing the optoelectronic and structural properties of different films with respect to stability, deposition time and usage [3]. In 2009, Daniel P. Heineck presented an enhancement of zinc-tin-oxide-based thin-film transistors, describing the fabrication process for an enhanced zinc-tin-oxide-based inverter, presenting enhancement- and depletion-mode estimation with bottom-gate analysis, and using post-deposition annealing to obtain voltage gain [4]. Michal Byrczek presented an investigation of zinc oxide growth on coated glass substrates: crystallite fabrication by chemical bath at low temperature, crystal-growth experiments with electron scanning microscopy, a study of the speed of nanoflower growth and an explanation of the cause of nanorods, together with research directions and quality influences under different vectors, including heating analysis [5]. Another work in the same year was presented by Josh Triska, on attaining stability for zinc-tin-oxide thin-film transistors. Layered deposition with gate dielectrics was used, with acceptable mobility; efforts were made to stabilize operation by changing the deposition temperature (Al2O3) and the O2 plasma exposure, mapping how the plasma exposure alters the trapping behavior of SiO2 and thereby reduces the instability significantly, with an interface defined for trapping at the source [6]. S. Jay Chey characterized indium-tin-oxide-doped zinc oxide thin-film deposition using RF magnetron sputter deposition, examining the thin-film resistance, optimal transmissivity and stability testing conditions. The doped films were deposited using a magnetron sputtering system, with sample biasing under high-temperature conditions; the
The main objective of that work was to demonstrate the effective use of thin-film materials under different measurement techniques and parameters [2]. An analytical study of the effect of tin oxide and zinc oxide was presented by S. Hegedus, who examined the material-based properties and their effect on amorphous silicon solar cells. He analyzed the optoelectronic and structural properties of different films, considering stability, deposition time, and usage [3]. In 2009, Daniel P. Heineck presented an enhancement/depletion inverter based on zinc-tin-oxide thin-film transistors. He described the fabrication process for the zinc-tin-oxide inverter, characterized enhancement-mode and depletion-mode devices in a bottom-gate configuration, and used post-deposition annealing to obtain voltage gain [4]. Michal Byrczek investigated the growth of zinc oxide nanorods on indium-tin-oxide-coated glass substrates. He described crystal fabrication by chemical bath deposition at low temperature and examined the crystal growth by scanning electron microscopy. He also studied the speed of nanoflower growth, explained the cause of the nanorods, and discussed how heating and other factors influence crystal quality, indicating directions for further research [5]. In the same year, Josh Triska studied the bias stability of zinc-tin-oxide thin-film transistors with Al2O3 gate dielectrics. The devices were fabricated by layered deposition and showed acceptable mobility, and their stability was examined as a function of deposition temperature and O2 plasma exposure. The plasma exposure was found to alter the charge-trapping behavior of the SiO2 interface, and controlling this trapping at the interface reduced the bias instability significantly [6]. S. Jay Chey characterized indium tin oxide and Al-doped zinc oxide thin films deposited by RF magnetron sputter deposition, testing the films for resistance, optical transmissivity, and stability, and deposited the doped films using a magnetron sputtering system under sample biasing at high temperature. In his setup, the

sample bias was applied to the sample by RF power, with argon as the process gas, and the zinc oxide was deposited at high temperature and a high deposition rate. He compared thin films deposited with and without sample bias in the high-temperature process, evaluating them in terms of resistance and transmission, and also reported the optimal hydrogen concentration under sample biasing and high temperature in high-deposition conditions [7]. In 2010, Yong-Hoon Kim described a rapid thermal annealing process for zinc-tin-oxide-based transistors and circuits. He fabricated high-performance ink-jet-printed zinc-tin-oxide thin-film transistors, described the electrodes and the thermal annealing treatment, analyzed the threshold voltage and current ratio of the ZTO-TFT-based inverter, and discussed the logic-level conservation of the devices [8]. Rongliang He described a sol-gel method for processing high-surface-area amorphous tin-zinc oxides. He prepared a ZnO-SnO2 solid solution by the sol-gel technique and studied the photocatalytic activity of the nanoparticles in terms of their specific surface area; material with low photocatalytic activity was improved by heat treatment at high temperature [9]. Further work on optical and electrical stability was carried out by Pradipta K. Nayak on zinc-tin-oxide thin-film transistors. He studied devices fabricated by the sol-gel method, used a threshold-voltage-based approach to assess environmental and optical stability, and reported their sensitivity to positive and negative gate-bias stress [10].

Experimental Analysis: A Study

In the present work, the characteristics of a ZnO thin film are optimized using the sol-gel approach, starting from zinc acetate dihydrate of maximum purity. Triethanolamine (TEA) is used as the surfactant so that maximum purity is obtained. Ethanol and ammonium hydroxide serve as the components of the homogeneous solution, and the pH is adjusted so that a stoichiometric solution is obtained, from which the zinc oxide nanoparticles are generated. The chemicals required for this experiment are water, TEA, and ethanol, added drop by drop in succession so that a homogeneous solution is formed; the stoichiometry is kept in mind throughout so that zinc oxide can be prepared from the solution. A constant quantity of TEA is mixed dropwise with the ethanol, and the homogeneous solution is kept at the working temperature for 2-3 hours. In parallel, the molar quantity of zinc acetate dihydrate is calculated and combined with the mixture to generate the homogeneous solution. Once the solution is obtained, ammonium hydroxide is added to it drop by drop under continuous heating at high temperature for 20 minutes. A little distilled water is then added, and the whole solution is left for 30 minutes so that the product forms. The precipitate is washed 8-10 times and then filtered. Finally, the filtered product is dried in an oven at a moderate temperature of about 95 °C for 8 hours.
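The preparation above hinges on a simple molar calculation for the precursor. As a minimal illustrative sketch, the Python snippet below computes the mass of zinc acetate dihydrate needed for a chosen solution; the 0.5 M concentration, the 100 mL volume, and the helper name precursor_mass are assumptions introduced for illustration, since the paper does not state these values.

```python
# A minimal sketch (values assumed, not taken from the paper): compute the
# mass of zinc acetate dihydrate precursor for a target molarity and volume.

# Molar mass of zinc acetate dihydrate, Zn(CH3COO)2.2H2O, in g/mol.
M_ZN_ACETATE_DIHYDRATE = 219.50

def precursor_mass(molarity_mol_per_l: float, volume_ml: float) -> float:
    """Return the mass in grams of precursor for the requested solution."""
    moles = molarity_mol_per_l * (volume_ml / 1000.0)  # convert mL to L
    return moles * M_ZN_ACETATE_DIHYDRATE

if __name__ == "__main__":
    # Assumed example values: a 0.5 M solution in 100 mL of solvent.
    mass_g = precursor_mass(0.5, 100.0)
    print(f"Weigh out {mass_g:.2f} g of zinc acetate dihydrate")
    # Prints: Weigh out 10.97 g of zinc acetate dihydrate
```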
Figure 2: Sol-Gel Process Stages (Solvation, Hydrolysis, Polymerization, Transformation)

A yellow color is attained by placing the product at a very high temperature of 500 to 900 °C. The tin oxide film is analyzed in terms of its thickness under the reactive-deposition and calcination conditions. The Sn layer is processed under different gases, namely O2 and Ar, to study the effect of environmental and atmospheric constraints; the work includes atmosphere control at high temperature in a mixed-gas environment. The tin oxide layer is fabricated on the substrate by reactive magnetron sputtering of tin, in which pure tin is used as the target and the process is characterized in terms of temperature and pressure.

The optimized growth of zinc oxide from zinc acetate in the sol-gel process takes place in different stages, defined as solvation, hydrolysis, polymerization, and transformation, as shown in figure 2. In the first stage, the zinc acetate dihydrate is solvated in ethanol, after which it is hydrolyzed. Later, the removal of the intercalated ions takes place as the polymerization step, during which the reaction is observed. Finally, the polymerized hydroxyl complex is transformed into ZnO.
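To make the stages concrete, the scheme below sketches the commonly cited overall chemistry of the zinc acetate sol-gel route; it is a standard textbook summary added here for clarity, not a set of reactions stated in the paper, and intermediate complexes are omitted.

```latex
% Standard summary reactions for the zinc acetate sol-gel route
% (simplified illustration; not stated explicitly in the paper).
% Requires \usepackage{amsmath}.
\begin{align*}
  \mathrm{Zn(CH_3COO)_2 + 2\,H_2O}
    &\longrightarrow \mathrm{Zn(OH)_2 + 2\,CH_3COOH}
    && \text{(hydrolysis)} \\
  \mathrm{{-}Zn{-}OH + HO{-}Zn{-}}
    &\longrightarrow \mathrm{{-}Zn{-}O{-}Zn{-} + H_2O}
    && \text{(polymerization)} \\
  \mathrm{Zn(OH)_2}
    &\xrightarrow{\;\Delta\;} \mathrm{ZnO + H_2O}
    && \text{(transformation)}
\end{align*}
```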

Conclusion

In this paper, an exploration of ZnO, its structure, and approaches for optimizing its characteristics has been presented. The main focus has been on the optimization process for characterizing the ZnO properties using the sol-gel approach, and an experiment-based study of the sol-gel approach for the optimization of ZnO thin films has been described.

References

[1] N. Barsan, "Metal oxide-based gas sensor research: How to?", Sensors and Actuators B 121 (2007).
[2] G. Sberveglieri, "Recent developments in semiconducting thin-film gas sensors", Sensors and Actuators B 23 (1995).
[3] S. Hegedus, "Effect of textured tin oxide and zinc oxide substrates on the current generation in amorphous silicon solar cells", 25th PVSC, May 13-17, 1996, 0-7803-3166-4/96 © 1996 IEEE.
[4] Daniel P. Heineck, "Zinc Tin Oxide Thin-Film-Transistor Enhancement/Depletion Inverter", IEEE Electron Device Letters, 0741-3106 © 2009 IEEE.
[5] Michal Byrczek, "Investigation of Zinc Oxide Nanorods Growth on Indium Tin Oxide Coated Glass Substrates".
[6] Josh Triska, "Bias Stability of Zinc-Tin-Oxide Thin Film Transistors with Al2O3 Gate Dielectrics", 2009 IIRW Final Report, 978-1-4244-3922-5/09 © 2009 IEEE.
[7] S. Jay Chey, "Characterization of Indium Tin Oxide and Al-Doped Zinc Oxide Thin Films Deposited by Confocal RF Magnetron Sputter Deposition", 978-1-4244-2950-9/09 © 2009 IEEE.
[8] Yong-Hoon Kim, "Ink-Jet-Printed Zinc–Tin–Oxide Thin-Film Transistors and Circuits With Rapid Thermal Annealing Process", IEEE Electron Device Letters, 0741-3106 © 2010 IEEE.
[9] Rongliang He, "Synthesis of High Surface Area Amorphous Tin-Zinc Oxides by a Sol-Gel Method", ICONN 2010, 978-1-4244-5262-0/10 © 2010 IEEE.
[10] Pradipta K. Nayak, "Environmental, Optical, and Electrical Stability Study of Solution-Processed Zinc–Tin–Oxide Thin-Film Transistors", Journal of Display Technology, 1551-319X © 2011 IEEE.

