

PROCEEDINGS

ICCIET - 2014 INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

Sponsored By INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

Technical Program 31 August, 2014 Hotel Pavani Residency, Nellore

Organized By INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

www.iaetsd.in


Copyright © 2014 by IAETSD. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written consent of the publisher.

ISBN: 378 - 26 - 138420 - 5

http://www.iaetsd.in

Proceedings preparation, editing and printing are sponsored by INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT COMPANY


About IAETSD: The International Association of Engineering and Technology for Skill Development (IAETSD) is a professional, non-profit conference organizing body devoted to promoting social, economic and technical advancement around the world by conducting international academic conferences in various fields of engineering. IAETSD organizes multidisciplinary conferences for academics and professionals, and was established to strengthen the skill development of engineering students. IAETSD is a meeting place where engineering students can share their views and ideas, improve their technical knowledge, develop their skills, and present and discuss recent trends in advanced technologies, new educational environments and innovative technology learning ideas. The intention of IAETSD is to expand knowledge beyond boundaries by joining hands with students, researchers, academics and industrialists to explore technical knowledge all over the world and to publish proceedings. IAETSD offers opportunities for learning professionals to explore problems across many disciplines of engineering and to discover and implement innovative solutions. IAETSD aims to promote upcoming trends in engineering.


About ICCIET: The objective of ICCIET is to present the latest research and results of scientists working on topics across all engineering departments. The conference provides opportunities for delegates from different areas to exchange new ideas and application experiences face to face, to establish business or research relations, and to find global partners for future collaboration. We hope that the conference results constitute a significant contribution to knowledge in these up-to-date scientific fields. The organizing committee is pleased to invite prospective authors to submit their original manuscripts to ICCIET 2014. All full paper submissions will be peer reviewed and evaluated on originality, technical and/or research content and depth, correctness, relevance to the conference, contributions, and readability. The conference will be held every year to make it an ideal platform for people to share views and experiences in current trending technologies in the related areas.


Conference Advisory Committee:

Dr. P Paramasivam, NUS, Singapore
Dr. Ganapathy Kumar, Nanometrics, USA
Mr. Vikram Subramanian, Oracle Public Cloud
Dr. Michal Wozniak, Wroclaw University of Technology
Dr. Saqib Saeed, Bahria University
Mr. Elamurugan Vaiyapuri, tarkaSys, California
Mr. N M Bhaskar, Micron Asia, Singapore
Dr. Mohammed Yeasin, University of Memphis
Dr. Ahmed Zohaa, Brunel University
Kenneth Sundarraj, University of Malaysia
Dr. Heba Ahmed Hassan, Dhofar University
Dr. Mohammed Atiquzzaman, University of Oklahoma
Dr. Sattar Aboud, Middle East University
Dr. S Lakshmi, Oman University


Conference Chairs and Review committee:

Dr. Shanti Swaroop, Professor, IIT Madras
Dr. G Bhuvaneshwari, Professor, IIT Delhi
Dr. Krishna Vasudevan, Professor, IIT Madras
Dr. G.V. Uma, Professor, Anna University
Dr. S Muttan, Professor, Anna University
Dr. R P Kumudini Devi, Professor, Anna University
Dr. M Ramalingam, Director (IRS)
Dr. N K Ambujam, Director (CWR), Anna University
Dr. Bhaskaran, Professor, NIT Trichy
Dr. Pabitra Mohan Khilar, Associate Prof., NIT Rourkela
Dr. V Ramalingam, Professor
Dr. P. Mallikka, Professor, NITTTR, Taramani
Dr. E S M Suresh, Professor, NITTTR, Chennai
Dr. Gomathi Nayagam, Director, CWET, Chennai
Prof. S Karthikeyan, VIT, Vellore
Dr. H C Nagaraj, Principal, NIMET, Bengaluru
Dr. K Sivakumar, Associate Director, CTS
Dr. Tarun Chandroyadulu, Research Associate, NAS


ICCIET - 2014 CONTENTS

1. PLACEMENT OF SUPER CONDUCTING FAULT CURRENT LIMITERS TO MITIGATE FAULT CURRENTS IN SMART GRIDS WITH DIFFERENT TYPES OF DISTRIBUTED GENERATION SOURCES (page 1)
2. IMPLEMENTATION OF HDLC PROTOCOL USING VERILOG (page 6)
3. A SMART ECO-FRIENDLY CAR SYSTEM WITH TRAFFIC CONJECTURE USING INDUCTIVE LOOPS AND TOUCH LESS MOBILE CONTROL ALONG WITH ALCOHOL PREVENTION IN AUTOMOBILE (page 9)
4. A REVIEW ON MODIFIED ANTI FORENSIC TECHNIQUE FOR REMOVING DETECTABLE TRACES FROM DIGITAL IMAGES (page 18)
5. FPGA IMPLEMENTATION OF FAULT TOLERANT EMBEDDED RAM USING BISR TECHNIQUE (page 24)
6. ADAPTIVE MODULATION IN MIMO OFDM SYSTEM FOR 4G WIRELESS NETWORKS (page 28)
7. VLSI BASED IMPLEMENTATION OF A DIGITAL OSCILLOSCOPE (page 33)
8. APPEARANCE BASED AMERICAN SIGN LANGUAGE RECOGNITION USING GESTURE SEGMENTATION AND MODIFIED SIFT ALGORITHM (page 37)
9. BAACK: BETTER ADAPTIVE ACKNOWLEDGEMENT SYSTEM FOR SECURE INTRUSION DETECTION SYSTEM IN WIRELESS MANETS (page 46)
10. PERFORMANCE ANALYSIS OF MULTICARRIER DS-CDMA SYSTEM USING BPSK MODULATION (page 51)
11. ITERATIVE MMSE-PIC DETECTION ALGORITHM FOR MIMO OFDM SYSTEMS (page 57)
12. COMPUTATIONAL PERFORMANCES OF OFDM USING DIFFERENT PRUNED RADIX FFT ALGORITHMS (page 62)
13. IMPROVEMENT OF DYNAMIC AND STEADY STATE RESPONSES IN COMBINED MODEL OF LFC AND AVR LOOPS OF ONE-AREA POWER SYSTEM USING PSS (page 69)
14. ENHANCED CRYPTOGRAPHY ALGORITHM FOR PROVIDING DATA SECURITY (page 76)
15. SINTER COOLERS (page 81)
16. CHAOS CDSK COMMUNICATION SYSTEM (page 83)
17. A REVIEW ON DEVELOPMENT OF SMART GRID TECHNOLOGY IN INDIA AND ITS FUTURE PERSPECTIVES (page 88)
18. DESIGN AND ANALYSIS OF WATER HAMMER EFFECT IN A NETWORK OF PIPELINES (page 95)
19. A SECURED BASED INFORMATION SHARING SCHEME VIA SMARTPHONE'S IN DTN ROUTINGS (page 107)
20. STORAGE PRIVACY PROTECTION AGAINST DATA LEAKAGE THREATS IN CLOUD COMPUTING (page 113)
21. REAL TIME EVENT DETECTION AND ALERT SYSTEM USING SENSORS (page 118)
22. AN EFFICIENT SECURE SCHEME FOR MULTI USER IN CLOUD BY USING CRYPTOGRAPHY TECHNIQUE (page 123)
23. DESIGN AND IMPLEMENTATION OF SECURE CLOUD SYSTEMS USING META CLOUD (page 127)
24. INCREASING NETWORK LIFE SPAN OF MANET BY USING COOPERATIVE MAC PROTOCOL (page 134)
25. SCALABLE MOBILE PRESENCE CLOUD WITH COMMUNICATION SECURITY (page 142)
26. ADAPTIVE AND WELL-ORGANIZED MOBILE VIDEO STREAMING PUBLIC NETWORKS IN CLOUD (page 146)
27. IDENTIFYING AND PREVENTING RESOURCE DEPLETION ATTACK IN MOBILE SENSOR NETWORK (page 155)
28. EFFECTIVE FAULT TOLERANT RESOURCE ALLOCATION WITH COST REDUCTION FOR CLOUD (page 166)
29. SECURED AND EFFICIENT DATA SCHEDULING OF INTERMEDIATE DATA SETS IN CLOUD
30. SCALABLE AND SECURE SHARING OF PERSONAL HEALTH RECORDS IN CLOUD USING MULTI AUTHORITY ATTRIBUTE BASED ENCRYPTION (page 172)
31. LATENT FINGERPRINT RECOGNITION AND MATCHING USING STATISTICAL TEXTURE ANALYSIS (page 178)
32. RELY ON ADMINISTRATION WITH MULTIPATH ROUTING FOR INTRUSION THRESHOLD IN HETEROGENEOUS WSNS (page 186)
33. ELECTRIC POWER GENERATION USING PIEZOELECTRIC CRYSTAL (page 193)
34. INTEGRATION OF DISTRIBUTED SOLAR POWER GENERATION USING BATTERY ENERGY STORAGE SYSTEM (page 197)
35. ESTIMATION OF FREQUENCY FOR A SINGLE LINK-FLEXIBLE MANIPULATOR USING ADAPTIVE CONTROLLER (page 206)
36. MODELLING OF ONE LINK FLEXIBLE ARM MANIPULATOR USING TWO STAGE GPI CONTROLLER (page 219)
37. DESIGN OF A ROBUST FUZZY LOGIC CONTROLLER FOR A SINGLE-LINK FLEXIBLE MANIPULATOR (page 227)
38. FPGA IMPLEMENTATION OF RF TECHNOLOGY AND BIOMETRIC AUTHENTICATION BASED ATM SECURITY (page 237)
39. DESIGN AND SIMULATION OF HIGH SPEED CMOS FULL ADDER (page 245)
40. FPGA IMPLEMENTATION OF VARIOUS SECURITY BASED TOLLGATE SYSTEM USING ANPR TECHNIQUE (page 251)
41. MINIMIZATION OF VOLTAGE SAGS AND SWELLS USING DVR (page 256)
42. POWER-QUALITY IMPROVEMENT OF GRID INTERCONNECTED WIND ENERGY SOURCE AT THE DISTRIBUTION LEVEL (page 264)
43. AN ENHANCEMENT FOR CONTENT SHARING OVER SMARTPHONE-BASED DELAY TOLERANT NETWORKS (page 271)
44. EFFICIENT RETRIEVAL OF FACE IMAGE FROM LARGE SCALE DATABASE USING SPARSE CODING AND RERANKING (page 275)
45. ELIMINATING HIDDEN DATA FROM AN IMAGE USING MULTI CARRIER ITERATIVE GENERALISED LEAST SQUARES (page 278)
46. IMPLEMENTATION OF CONTEXT FEATURES USING CONTEXT-AWARE INFORMATION FILTERS IN OSN (page 282)
47. ENHANCEMENT OF FACE RETRIEVAL DESIGNED FOR MANAGING HUMAN ASPECTS (page 289)
48. DESIGN AND IMPLEMENTATION OF SECURE CLOUD SYSTEMS USING META CLOUD (page 293)
49. ASYNCHRONOUS DATA TRANSACTIONS ON SoC USING FIFO BETWEEN ADVANCED EXTENSIBLE INTERFACE 4.0 AND ADVANCED PERIPHERAL BUS 4.0 (page 300)
50. APPLIANCES OF HARMONIZING MODEL IN CLOUD COMPUTING ENVIRONMENT (page 307)
51. SIMILARITY SEARCH IN INFORMATION NETWORKS USING META-PATH BASED BETWEEN OBJECTS (page 317)
52. PINPOINTING PERFORMANCE DEVIATIONS OF SUBSYSTEMS IN DISTRIBUTED SYSTEMS WITH CLOUD INFRASTRUCTURES (page 323)
53. SECURE DATA STORAGE AGAINST ATTACKS IN CLOUD ENVIRONMENT USING DEFENCE IN DEPTH (page 326)
54. IMPROVED LOAD BALANCING MODEL BASED ON PARTITIONING IN CLOUD COMPUTING (page 331)
55. EFFECTIVE USER NAVIGABILITY THROUGH WEBSITE STRUCTURE REORGANIZING USING MATHEMATICAL PROGRAMMING MODEL (page 334)
56. AN EFFECTIVE APPROACH TO ELIMINATE TCP INCAST COLLAPSE IN DATACENTER ENVIRONMENT (page 339)
57. SECURED AND EFFICIENT DATA SCHEDULING OF INTERMEDIATE DATA SETS IN CLOUD (page 347)
58. KEY RECONSTRUCTION AND CLUSTERING OPPONENT NODES IN MINIMUM COST BLOCKING PROBLEMS (page 352)


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

Placement of Superconducting Fault Current Limiters to Mitigate Fault Currents in Smart Grids with Different Types of Distributed Generation Sources

Sai Sowmya Varanasi, 4th B.Tech, Ramachandra College of Engineering, saisowmya.varanasi@gmail.com

Abstract— An important application of superconducting fault current limiters (SFCL) in the upcoming smart grid is their possible effect on the reduction of abnormal fault currents and their suitable location in micro grids. Due to the grid connection of micro grids with the current power grids, excessive fault current is a serious problem to be solved. In this paper, a resistive type SFCL model was implemented using Simulink and SimPowerSystem blocks in Matlab. The designed SFCL model can be easily utilized for determining the impedance level of the SFCL according to the fault-current-limitation requirements of various kinds of smart grid systems. In addition, a typical smart grid model including generation, transmission and distribution networks, with DGs of different types and ratings (wind, solar and diesel), was modeled to determine the location and the performance of the SFCL. A 10 MVA wind farm, a 675 kW solar plant and a 3 MVA diesel generator were considered for the simulation. A three-phase fault was simulated at the distribution grid, and the effect of the SFCL and its location on the wind farm, solar and diesel generator fault currents was evaluated. SFCL locations in a smart grid with different types of distributed generation sources are proposed and their performances analyzed.

Index Terms— Fault current, micro grid, smart grid, SFCL, wind farm, solar, diesel.

1. INTRODUCTION

Conventional protection devices installed against excessive fault current in electric power systems allow the initial two or three fault current cycles to pass through before being activated. In contrast, the superconducting fault current limiter (SFCL) is an innovative electric device [1] with the capability to reduce the fault current level within the first cycle of fault current. The first-cycle suppression of fault current by an SFCL results in increased transient stability of the power system, which can then carry higher power with greater stability. Smart grid is the term used for the future power grid, which integrates modern communication technology and renewable energy resources to supply electric power that is cleaner, more reliable, more resilient and more responsive than the conventional power system. One important aspect of the smart grid is the decentralization of the power grid network into smaller grids, known as micro grids, with distributed generation sources (DG) connected to them. These micro grids need to integrate various kinds of DGs and loads while satisfying safety requirements. The drawbacks of the direct connection of DGs to the power grid are the excessive increase in fault current and the islanding issue, which arises when, despite a fault in the power grid, the DG keeps providing power to the faulted network. Solving the problem of increasing fault current in micro grids by using SFCL technology is the main topic of this work. In this paper, the effect of the SFCL and its position was investigated considering a wind farm, solar plant [5] and diesel generator [6] integrated with a distribution grid model as one typical configuration of the smart grid. The impacts of the SFCL on the wind farm, solar and diesel generators were studied, and a strategic location of the SFCL in a micro grid which limits fault current from all power sources and has no negative effect on the integrated sources was suggested.

2. SIMULINK MODEL SET-UP

Matlab/Simulink/SimPowerSystem was selected to design and implement the SFCL model. A complete smart grid power network including generation, transmission and distribution, with an integrated wind farm, solar and diesel generator model [6], was also implemented in it. Simulink/SimPowerSystem has a number of advantages over contemporary simulation software (such as EMTP and PSPICE) due to its open architecture, powerful graphical user interface and versatile analysis tools.

2.1. Power System Model

The modeled power system consists of an electric transmission and distribution power system. A micro grid model was designed by integrating a 10 MVA wind farm, a 675 kW solar plant and a 3 MVA diesel generator with the distribution network. Fig.1 shows the power system model designed in Simulink/SimPowerSystem. The power system comprises a 100 MVA conventional power plant, composed of a 3-phase synchronous machine, connected to a 200 km long, 154 kV distributed-parameters transmission line through step-up transformer TR1. At the substation (TR2), the voltage is stepped down from 154 kV to 22.9 kV. A high-power industrial load (6 MW) and low-power domestic loads (1 MW each) are supplied by separate distribution branch networks. The 10 MVA wind farm, 675 kW solar plant and 3 MVA diesel generator are connected directly to the grid through step-up transformers. In Fig.1, an artificial three-phase-to-ground fault and the candidate locations of the SFCL are indicated in the distribution grid. Three prospective locations for SFCL installation are marked as Location 1, Location 2 and Location 3. Conventional fault current protection devices are generally located at Location 1 and Location 2. The output currents of the wind farm, solar plant and diesel generator, measured at the output of each DG for the various SFCL locations, were recorded and analyzed.

2.2. Resistive SFCL Model

The three-phase resistive type SFCL was modeled considering four fundamental parameters of a resistive SFCL: 1) transition or response time, 2) minimum and maximum impedance, 3) triggering current and 4) recovery time. Its working voltage is 22.9 kV. Fig.2 shows the SFCL model developed in Simulink/SimPowerSystem. The SFCL model works as follows. First, the SFCL model calculates the RMS value of the passing current and compares it with the lookup table. Second, if the passing current is larger than the triggering current level, the SFCL's resistance increases to the maximum impedance level within the pre-defined response time. Finally, when the current level falls below the




Fig.1 Power system model designed in Simulink/SimPowerSystem. Fault and SFCL locations are indicated in the diagram.

triggering current level, the system waits for the recovery time and then returns to the normal state. In the simulations, the SFCL is placed at the different candidate locations.
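The switching logic of the resistive SFCL described above can be illustrated as a small state machine. The following Python sketch is only an illustration of that logic, not the paper's Simulink model; all parameter values (resistances, trigger level, times) are hypothetical placeholders:

```python
class ResistiveSFCL:
    """Sketch of a resistive SFCL: ramp resistance to R_max within the
    response time once the RMS current exceeds the trigger level, and
    drop back to R_min after the current has stayed below the trigger
    for the recovery time. All parameter values are illustrative."""

    def __init__(self, r_min=0.01, r_max=20.0, i_trigger=550.0,
                 t_response=0.002, t_recovery=0.01):
        self.r_min, self.r_max = r_min, r_max
        self.i_trigger = i_trigger
        self.t_response = t_response
        self.t_recovery = t_recovery
        self.r = r_min            # present resistance (ohms)
        self.quenched = False     # True while in current-limiting mode
        self.below_since = 0.0    # time spent below trigger while quenched

    def step(self, i_rms, dt):
        """Advance the model by one simulation step of length dt."""
        if i_rms > self.i_trigger:
            self.quenched = True
            self.below_since = 0.0
            # linear ramp towards R_max over the response time
            self.r = min(self.r_max,
                         self.r + (self.r_max - self.r_min) * dt / self.t_response)
        elif self.quenched:
            self.below_since += dt
            if self.below_since >= self.t_recovery:
                # recovery complete: back to the superconducting state
                self.quenched = False
                self.r = self.r_min
        return self.r
```

Driving `step` with an RMS current trace would reproduce the quench/recovery behaviour described in the text: the resistance rises only after the trigger level is crossed and falls back only after the recovery time has elapsed.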

Fig.2 Single phase SFCL model developed in Simulink/SimPowerSystem.

2.3. Wind Turbine Generator Model

The 10 MVA wind farm [1] is composed of 5 fixed-speed induction-type wind turbines, each rated at 2 MVA. At the time of the fault, the domestic load is supplied with 3 MVA, of which 2.7 MVA is provided by the wind farm. The wind farm is connected directly to branch network B6 through transformer TR3. The Simulink model of the wind turbine generator is shown in Fig.3.

Fig.3 Three phase wind turbine generator subsystem model.

2.4. Solar Model

The terminal equations for the voltage, current and power [5] used in the development of the solar Simulink model are given by equations (1), (2) and (3), where Iph is the light-generated current, Is is the cell saturation (dark) current, q is the unit electric charge, k is Boltzmann's constant, T is the working temperature of the p-n junction, and A is the ideality factor. Iph depends on the cell working temperature and the solar irradiation level, as described in equation (2), where Isc is the cell short-circuit current, Kt is the temperature coefficient and S is the solar irradiation level in kW/m2. The power delivered by the PV array is given by equation (3), and the PV array is modeled as a single-diode circuit. The subsystem can be implemented using equations (1), (2) and (3).

Fig.4 Solar model developed in Simulink/SimPowerSystem.




The output power of the PV array is 675 kW, which is fed to the inverter; the output of the model is connected directly to branch network B6 through transformer T3. The PV Simulink model is shown in Fig.4.
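The single-diode PV relations summarized above (equations (1)-(3) are not reproduced in the text) can be sketched in Python. All numeric parameter values below are illustrative assumptions, not the module data used in the paper:

```python
import math

# Physical constants
Q = 1.602e-19   # unit electric charge (C)
K = 1.381e-23   # Boltzmann constant (J/K)

def pv_output(v, t, s, isc=8.0, i_s=1e-9, kt=0.0017, a=1.6, t_ref=298.0):
    """Single-diode PV model sketch.
    v: terminal voltage (V); t: junction temperature (K);
    s: solar irradiation (kW/m^2).
    isc (short-circuit current), i_s (saturation current),
    kt (temperature coefficient), a (ideality factor) and t_ref
    are assumed values. Returns (current, power)."""
    # Eq. (2): light-generated current from temperature and irradiation
    i_ph = (isc + kt * (t - t_ref)) * s
    # Eq. (1): cell current = photocurrent minus the diode term
    i = i_ph - i_s * (math.exp(Q * v / (K * a * t)) - 1.0)
    # Eq. (3): power delivered by the array
    return i, v * i
```

At low voltage the current sits near the irradiation-scaled short-circuit value and falls off sharply as the exponential diode term takes over, which is the familiar PV I-V characteristic the Simulink subsystem reproduces.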

2.5. Diesel Generator Model

A lower-order model [6] is used for studying the dynamics of internal combustion engines. A self-tuning PID controller is developed for a small diesel engine and generator set. The output power of the diesel generator is 3.125 MVA. The output of the model is connected directly to branch network B6 through transformer T3. The Simulink model of the governor and diesel engine is shown in Fig.5.

Fig.5 Governor and diesel engine Simulink model.

The Simulink model of the voltage control and speed control of the diesel generator is shown in Fig.6.

Fig.6 Voltage and speed control Simulink model.

3. RESULTS AND CONCLUSIONS

Three scenarios of possible SFCL locations were analyzed for a distribution grid fault occurring in the power system. The simulation was carried out by placing the SFCL at Locations 1, 2 and 3, considering one DG at a time at the position shown in Fig.1.

3.1. Wind Turbine Generator as DG

Fault currents of the phase where the severity is greatest, for the different SFCL locations, are shown in Fig.7. When a three-phase-to-ground fault was initiated in the distribution grid (Fig.1) with the SFCL located at Location 1 or Location 2, the fault current contribution from the wind farm increased. These critical observations imply that the installation of the SFCL at Location 1 or Location 2, instead of reducing the DG fault current, has increased it. This sudden increase of fault current from the wind farm is caused by the abrupt change of the power system's impedance: the SFCL at these locations entered current-limiting mode and, through the rapid increase in its resistance, reduced the fault current coming from the conventional power plant. The wind farm, being the other power source and also closer to the fault, is therefore forced to supply a larger fault current to the fault point. When the SFCL is instead installed at the integration point of the wind farm with the grid, marked as Location 3 in Fig.1, the wind farm fault current is successfully reduced. The SFCL gives a 68% reduction of the fault current from the wind farm and also reduces the fault current coming from the conventional power plant, because the SFCL is located in the direct path of any fault current flowing towards Fault 1.

Fig.7 Fault currents of the wind turbine generator: (A) without SFCL; (B) with SFCL at Location 1; (C) with SFCL at Location 2; (D) with SFCL at Location 3.

A comparison of the fault currents when the SFCL is placed at the different locations in Fig.1 is shown in Fig.8.

Fig.8 Comparison of fault currents at different locations.




3.2. Solar Model as DG

Fault currents of the phase where the severity is greatest, for the different SFCL locations, are shown in Fig.9. When a three-phase-to-ground fault was initiated in the distribution grid (Fig.1), the fault current without the SFCL was greater than with the SFCL placed at any of the locations. Once again the best results were obtained with a single SFCL at Location 3, compared with the remaining two locations: the fault current has been successfully reduced. The SFCL gives a 70% reduction of the fault current from the solar module and also reduces the fault current coming from the conventional power plant, because the SFCL is located in the direct path of any fault current flowing towards Fault 1.

Fig.9 Fault currents of the solar model: (A) without SFCL; (B) with SFCL at Location 1; (C) with SFCL at Location 2; (D) with SFCL at Location 3.

A comparison of the fault currents when the SFCL is placed at the different locations in Fig.1 is shown in Fig.10.

Fig.10 Comparison of fault currents at different locations.

3.3. Diesel Generator as DG

Fault currents of the phase where the severity is greatest, for the different SFCL locations, are shown in Fig.11. When a three-phase-to-ground fault was initiated in the distribution grid (Fig.1), the fault current without the SFCL was greater than with the SFCL placed at any of the locations. Once again the best results were obtained with a single SFCL at Location 3, compared with the remaining two locations: the diesel fault current has been successfully reduced. The SFCL gives an 84% reduction of the fault current from the diesel engine and also reduces the fault current coming from the conventional power plant, because the SFCL is located in the direct path of any fault current flowing towards Fault 1.



Fig.11 Fault currents of the diesel engine: (A) without SFCL; (B) with SFCL at Location 1; (C) with SFCL at Location 2; (D) with SFCL at Location 3.

A comparison of the fault currents when the SFCL is placed at the different locations in Fig.1 is shown in Fig.12.

Fig.12 Comparison of fault currents at different locations.

When the SFCL was strategically located at the point of integration (Location 3), the highest fault current reduction was achieved regardless of the DG source. Multiple SFCLs in a micro grid are not only costly but also less efficient than a single, strategically located SFCL. Moreover, at Location 3, the fault current coming from the conventional power plant was also successfully limited. The fault currents at the three locations with the different DG units are summarized in Table I for comparison.

Table I: Percentage change in fault currents of different DG units due to SFCL locations (3-phase distribution grid fault)

DG type | No SFCL | Location 1            | Location 2            | Location 3
Wind    | 787 A   | 1020 A (30% increase) | 1020 A (30% increase) | 265 A (67% decrease)
Solar   | 2675 A  | 1100 A (59% decrease) | 1100 A (59% decrease) | 800 A (70% decrease)
Diesel  | 916 A   | 798 A (13% decrease)  | 798 A (13% decrease)  | 148 A (84% decrease)

4. CONCLUSION

This paper presented a feasibility analysis of the positioning of the SFCL in the rapidly changing modern power grid. A complete power system along with a micro grid (having wind, solar and diesel generation connected with the grid) was modeled, and transient analysis for three-phase-to-ground faults was performed with the SFCL installed at different locations of the grid. Placement of the SFCL at the conventional protection locations can result in abnormal fault current contribution from the wind farm and the other two sources, solar and diesel. Multiple SFCLs should not be used in a micro grid because of their inefficiency in both performance and cost. The strategic location of an SFCL in a power grid which limits all fault currents and has no negative effect on the DG sources is the point of integration of the DG sources with the power grid.

REFERENCES

[1] Umer A. Khan, J. K. Seong, S. H. Lee, "Feasibility analysis of the positioning of superconducting fault current limiters for the smart grid application using Simulink & SimPowerSystems," IEEE Transactions on Applied Superconductivity, vol. 21, no. 3, June 2011.
[2] S. Sugimoto, J. Kida, H. Aritha, "Principle and characteristics of fault current limiters with series compensation," IEEE Transactions on Power Delivery, vol. 11, no. 2, pp. 842-847, Apr. 1996.
[3] J. Driensen, P. Vermeyen, R. Belmans, "Protection issues in micro grids with multiple distributed generation units," in Power Conversion Conference, Nagoya, April 2007, pp. 646-653.
[4] K. Maki, S. Repo, P. Jarventausta, "Effect of wind power based distributed generation on protection of distribution network," in IEEE Developments in Power System Protection, Dec. 2004, vol. 1, pp. 327-330.
[5] N. Pandiarajan, Ranganath Muth, "Modeling of photovoltaic module with Simulink," International Conference on Electrical Energy Systems (ICEES 2011), 3-5 Jan. 2011.
[6] Takyin Takychan, "Transient analysis of integrated solar/diesel hybrid power system using MATLAB/Simulink."
[7] S. N. B. V. Rao, Asst. Prof., Ramachandra College of Engineering.
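The percentage figures in Table I follow directly from the measured currents. A quick arithmetic check (the wind figure works out to about 66%, within rounding of the quoted 67%):

```python
def pct_reduction(i_no_sfcl, i_with_sfcl):
    """Percentage reduction of a DG's fault current relative to the no-SFCL case."""
    return 100.0 * (i_no_sfcl - i_with_sfcl) / i_no_sfcl

# (no-SFCL current, Location 3 current, paper's claimed % reduction), from Table I
table = {
    "Wind":   (787, 265, 67),
    "Solar":  (2675, 800, 70),
    "Diesel": (916, 148, 84),
}
for dg, (base, loc3, claimed) in table.items():
    # each computed reduction agrees with the table to within 1 percentage point
    assert abs(pct_reduction(base, loc3) - claimed) <= 1.0
```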




Implementation of HDLC Protocol Using Verilog

K. Durga Bhavani, Dept. of ECE, RGUKT-Basar, Durga.ece06@gmail.com
B. Venkanna, Dept. of ECE, RGUKT-Basar, Venkanna.engineer@gmail.com
K. Gayathri, Dept. of ECE, Intell Engg. College-Anantapur, kgayathri.vijji@gmail.com

Abstract— A protocol is required to transmit data successfully over any network and to manage the rate at which data is transmitted. HDLC is the high-level data link control protocol established by the International Organization for Standardization (ISO) and is widely used in digital communications. High-level Data Link Control (HDLC) is the most commonly used Layer 2 protocol and is suitable for the bit-oriented packet transmission mode. This paper discusses the Verilog modeling of a single-channel HDLC Layer 2 protocol controller and its implementation using Xilinx.

Keywords— High-Level Data Link Control (HDLC), Frame Check Sequence (FCS), Cyclic Redundancy Check (CRC)

I. INTRODUCTION

HDLC is the high-level data link control protocol established by the International Organization for Standardization (ISO); it is widely used in digital communication and is the basis of many other data link control protocols [2]. HDLC protocols are commonly implemented in ASIC (Application Specific Integrated Circuit) devices, in software, etc. The objective of this paper is to design and implement a single-channel controller for the HDLC protocol, which is the most basic and prevalent synchronous, bit-oriented Data Link layer protocol. The HDLC (High-Level Data Link Control) protocol is also important in that it forms the basis for many other data link control protocols, which use the same or similar formats and the same mechanisms as employed in HDLC. HDLC has been so widely implemented because it supports both half-duplex and full-duplex communication lines, and both point-to-point (peer-to-peer) and multi-point networks [1]. The protocols outlined in HDLC are designed to permit synchronous, code-transparent data transmission. Other benefits of HDLC are that the control information is always in the same position, and the specific bit patterns used for control differ dramatically from those representing data, which reduces the chance of errors.

II. HDLC PROTOCOL

The HDLC Protocol Controller is a high-performance module for the bit-oriented packet transmission mode. It is suitable for Frame Relay, X.25, ISDN B-Channel (64 kbit/s) and D-Channel (16 kbit/s). The Data Interface is 8 bits wide, synchronous, and suitable for interfacing to transmit and receive FIFOs. Information is packaged into an envelope called a FRAME [4]. An HDLC frame is structured as follows:

FLAG   | ADDRESS | CONTROL   | INFORMATION | FCS     | FLAG
8 bits | 8 bits  | 8/16 bits | variable    | 16 bits | 8 bits

Table 1. HDLC Frame

A. Flag
Each Frame begins and ends with the Flag Sequence, the binary sequence 01111110. If a piece of data within the frame to be transmitted contains a series of 5 or more 1's, the transmitting station must insert a 0 to distinguish this set of 1's in the data from the flags at the beginning and end of the frame. This technique of inserting bits is called bit-stuffing [3].

B. Address
The Address field is of programmable size, a single octet or a pair of octets. The field contains the value programmed into the transmit address register at the time the Frame is started.

C. Control
HDLC uses the control field to determine how to control the communications process. This field contains the commands, responses and sequence numbers used to maintain the data flow accountability of the link; it defines the function of the frame and initiates the logic to control the movement of traffic between sending and receiving stations.
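The bit-stuffing rule described under the Flag field can be sketched in a few lines. This is an illustrative Python model of the rule only; the paper's actual implementation is in Verilog:

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s so that frame
    data can never mimic the 01111110 flag sequence."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed zero
            run = 0
    return out

def bit_destuff(bits):
    """Receiver side: drop the zero that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:            # this bit is the stuffed zero; discard it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out
```

For example, stuffing `[1,1,1,1,1,1,0,1]` yields `[1,1,1,1,1,0,1,0,1]`, and destuffing recovers the original bits; a stuffed stream therefore never contains six consecutive 1s between the opening and closing flags.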




D. Information or Data

This field is not always present in an HDLC frame. It is only present when the Information Transfer Format is being used in the control field. The information field contains the actual data the sender is transmitting to the receiver.

E. FCS

The Frame Check Sequence field is 16 bits. The FCS is transmitted least significant octet first, which contains the coefficient of the highest term of the generated check polynomial. The FCS field is calculated over all bits of the address, control, and data fields, not including any bits inserted for transparency. It also does not include the flag sequence or the FCS field itself. The end of the data field is found by locating the closing flag sequence and removing the Frame Check Sequence field (receiver section) [5].

III. HDLC MODULE DESIGN

In this design, the HDLC procedures contain two modules, i.e. an encoding-and-sending module (transmitter) and a receiving-and-decoding module (receiver). The function diagram is shown below.

Fig.1. HDLC Block Design

From this diagram we know that the transmitter module includes a transmit register unit, address unit, FCS generation unit, zero insertion unit, flag generation unit, control and status register unit, and transmit frame timer and synchronization logic unit. The receiver module includes a receive register unit, address detect unit, FCS calculator unit, zero detection unit, flag detection unit, receive control and status register unit, and frame timer and synchronization logic unit.

A. Transmitter Module

The Transmit Data Interface provides a byte-wide interface between the transmission host and the HDLC Controller. The transmit data is loaded into the controller on the rising edge of Clock when the write strobe input is asserted. The Start and End bytes of a transmitted HDLC frame are indicated by asserting the appropriate signals with the same timing as the data byte. The HDLC Controller will, on receipt of the first byte of a new packet, issue the appropriate Flag Sequence and transmit the frame data while calculating the FCS. When the last byte of the frame is seen, the FCS is transmitted along with a closing Flag. Extra zeros are inserted into the bit stream to avoid transmission of the control flag sequence within the frame data. The transmit data is available on the TxD pin, timed appropriately to be sampled by Clk. If TxEN is de-asserted, transmission is stalled and the TxD pin is disabled. A transmit control register is provided which can enable or disable the channel. In addition, it is possible to force the transmission of the HDLC Abort sequence; this will cause the currently transmitted frame to be discarded. The transmit section can be configured to restart automatically after an abort with the next frame, or to remain stalled until the host microprocessor clears the abort.

B. Receiver Module

The HDLC Controller receiver accepts a bit stream on port RxD. The data is latched on the rising edge of Clock under the control of the Enable input RxEN. The Flag Detection block searches the bit stream for the Flag Sequence in order to determine the frame boundary. Any stuffed zeros are detected and removed, and the FCS is calculated and checked. Frame data is placed on the Receive Data Interface and made available to the host. In addition, flag information is passed over, indicating the Start and End bytes of the HDLC frame as well as showing any error conditions which may have been detected during receipt of the frame. In normal HDLC protocol mode, all received frames are presented to the host on the output register. A status register is provided which can be used to monitor the status of the receiver channel and indicate whether the packet currently being received includes any errors.
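The FCS generation and zero-insertion steps described for the transmitter can be sketched in software. The following is a minimal Python sketch, not the paper's Verilog: it implements the standard 16-bit HDLC FCS (the CRC-CCITT polynomial x^16 + x^12 + x^5 + 1, computed bit-reversed with initial value 0xFFFF and a final one's complement) and the five-ones bit-stuffing rule.

```python
def hdlc_fcs(data: bytes) -> int:
    """16-bit HDLC FCS (CRC-16/X-25): poly x^16 + x^12 + x^5 + 1,
    bit-reversed form 0x8408, init 0xFFFF, final one's complement.
    The result is transmitted least significant octet first."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF

def zero_insert(bits):
    """Zero insertion (bit stuffing): after five consecutive 1s in the
    payload, stuff a 0 so the data can never mimic the 01111110 flag."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)
            run = 0
    return out
```

The receiver performs the inverse operations: it deletes a 0 that follows five consecutive 1s, and recomputes the FCS over the address, control, and data fields to check the frame.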

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

7

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

IV. RESULTS

Fig.2. Simulation Waveform

Device Utilization Report:
Clock Frequency: 78.2 MHz

Resource              Used   Avail   Utilization
IOs                     60     180   33.33%
Function Generators    205    1536   13.35%
CLB Slices             103     768   13.41%
Dffs or Latches        108    1536    7.03%

Table 2. Synthesis Report

V. CONCLUSION

We designed the HDLC protocol sending and receiving modules at RTL level in Verilog and tested them successfully. The design has the following advantages: it is easy to program and modify, it is suitable for different standards of HDLC procedures, and it can be matched with other chips having different interfaces. The proposed method is therefore useful for many applications, such as a communication protocol link for RADAR data processing.

REFERENCES
[1] "Implementation of HDLC Protocol Using FPGA", International Journal of Engineering Science & Advanced Technology (IJESAT), ISSN: 2250-3676, Volume-2, Issue-4, pp. 1122-1131.
[2] M. Sridevi, Dr. P. Sudhakar Reddy, International Journal of Engineering Research and Applications (IJERA), ISSN: 2248-9622, Vol. 2, Issue 5, September-October 2012, pp. 2217-2219.
[3] ISO/IEC 13239, "Information technology - Telecommunications and information exchange between systems - High-level data link control (HDLC) procedures," International Organization for Standardization, pp. 10-17, July 2002.
[4] A. Tanenbaum, "Computer Networks", Prentice Hall of India, 1993.
[5] Mitel Semiconductor, "MT8952B HDLC Protocol Controller," Mitel Semiconductor Inc., pp. 2-14, May 1997.


AUTOMOBILE INNOVATION

(A Smart Eco-Friendly Car System with Traffic Conjecture using Inductive Loops and Touchless Mobile Control along with Alcohol Prevention in Automobiles)

Presented by:

G. RANA PHANI SIVA SUJITH
Electronics and Communications Engineering
Sri Vasavi Institute of Engineering and Technology
Ph no: 9030633480
Email Id: sujithgopisetti@gmail.com

Abstract:

Traffic jams in cities are very common and have remained an uncontrollable problem. The idea we developed is the result of our observation at one of the major traffic intersections in Vijayawada (Benz Circle). In this paper we present a solution to eradicate heavy traffic jams without any construction such as bridges or additional lanes on the roads. We observed that commuters get stuck in traffic jams mainly because they have no proper estimate of the traffic on their way. So we thought we could eliminate this problem by informing the driver/commuter about the level of traffic at any junction all over the city, so that the commuter can escape the traffic jam and reach the destination in time. To report the density of traffic at any junction, we use "induction loops" to calculate the traffic density at the junctions and display it to the commuter. In addition, we use Google Maps to forecast the traffic, informing the driver through GPS at any place about the traffic at the junctions. We also use these induction loops to reduce noise pollution at traffic intersections. We also describe a smart-car feature for the commuter, i.e., a touchless mobile controlling system that enables the driver to operate his/her mobile without even touching it. This touchless mobile system helps eliminate accidents that occur due to the use of mobile phones while driving. When the driver or any passenger of the car consumes alcohol, the car automatically stops at a safe place at the side of the road using traffic conjecture, and the front wheels of the car will be locked. Through GPS technology a


complaint will be sent to the nearest police station. If the driver talks on a mobile for more than 8 seconds, the system of the car activates jammers to cut the call signals and stops the car. We are also placing piezo-electric devices on roads and zebra crossings; the mechanical stress applied by pedestrians and vehicles passing over them will generate electric energy. We are going to use this electric energy as the power supply for the induction loops, traffic signals, and street lighting system. This technique requires a small investment and gives accurate output.

Approach:

The traffic can be controlled by using various advanced technologies, such as digital signal processing, among others. Here, however, we use a simple and cost-effective tool to estimate the density of traffic. After getting the density data, we control the traffic depending upon the density value.

Steps involved for traffic conjecture:

1. Placing the induction loops at the junctions beneath the road.
2. Setting up a detector to detect the data given by each and every loop placed on the road.
3. Setting up a control system that enables us to display the density and give instructions to the driver.
4. Further using the data given by the detectors to annotate the Google map with traffic density.

Flow chart:

Installation of inductive loops

Detecting the data given by the loops
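As an illustration of steps 2-4 above, the following Python sketch turns loop actuations into a rough density figure and a display level for the commuter. The average vehicle speed, and the density thresholds are illustrative assumptions, not values from the paper.

```python
def lane_density(actuations: int, interval_s: float, speed_mps: float = 8.0) -> float:
    """Rough density estimate from loop counts: flow = vehicles per second,
    density = flow / speed (vehicles per metre of lane).
    The assumed average speed is an illustrative constant."""
    flow = actuations / interval_s
    return flow / speed_mps

def classify(density: float, low: float = 0.01, high: float = 0.03) -> str:
    """Map the density to a display level for the roadside board
    (thresholds are assumed for illustration)."""
    if density < low:
        return "FREE"
    if density < high:
        return "MODERATE"
    return "HEAVY"
```

A control system would run this per lane, show the result on the board before the main junction, and push the same levels to the map overlay described in step 4.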


Estimating the traffic and calculating the time required to reach the destination

Displaying the data and instructions

The main element needed to implement this idea is the inductive loop, along with a proper controlling system to attain high accuracy.

Inductive loops:

An induction loop is an electromagnetic communication or detection system which uses a moving magnet to induce an electrical current in a nearby wire. Induction loops are used for transmission and reception of communication signals, or for detection of metal objects in metal detectors or vehicle presence indicators. A common modern use for induction loops is to provide hearing assistance to hearing-aid users.

Vehicle detection loops, called inductive-loop traffic detectors, can detect vehicles passing or arriving at a certain point, for instance approaching a traffic light or in motorway traffic. An insulated, electrically conducting loop is installed in the pavement. The electronics unit transmits energy into the wire loops at frequencies between 10 kHz and 200 kHz, depending on the model. The inductive-loop system behaves as a tuned electrical circuit in which the loop wire and lead-in cable are the inductive elements. When a vehicle passes over the loop or is stopped within the loop, the vehicle induces eddy currents in the wire loops, which decrease their inductance. The decreased inductance actuates the electronics unit output relay or solid-state optically isolated output, which sends a pulse to the traffic signal controller signifying the passage or presence of a vehicle. Parking structures for automobiles may use inductive loops to track traffic (occupancy) in and out, or they may be used by access gates or ticketing systems to detect vehicles, while others use parking guidance and information systems. Railways may use


an induction loop to detect the passage of trains past a given point, as an electronic treadle.

The relatively crude nature of the loop's structure means that only metal masses above a certain size are capable of triggering the relay. This is good in that the loop does not produce many "false positive" triggers (say, for example, from a pedestrian crossing the loop with a pocket full of loose metal change), but it sometimes also means that bicycles, scooters, and motorcycles stopped at such intersections may never be detected (and therefore risk being ignored by the switch/signal). Most loops can be manually adjusted to consistently detect the presence of scooters and motorcycles at the least.

A different sort of "induction loop" is applied to metal detectors, where a large coil, which forms part of a resonant circuit, is effectively "detuned" by the coil's proximity to a conductive object. The detected object may be metallic (metal and cable detection) or conductive/capacitive (stud/cavity detection). Other configurations of this equipment use two or more receiving coils, and the detected object modifies the inductive coupling or alters the phase angle of the voltage induced in the receiving coils relative to the oscillator coil.

An increasingly common application is providing a hearing aid-compatible "assistive listening" telecoil. In this application, a loop or series of loops is used to provide an audio-frequency oscillating magnetic field in an area where a hearing aid user may be present. Many hearing aids contain a telecoil which allows the user to receive and hear the magnetic field and remove the normal audio signal provided by the hearing aid microphone. These loops are often referred to as a hearing loop or audio induction loop.

An anti-submarine indicator loop was a device used to detect submarines and surface vessels using specially designed submerged cables connected to a galvanometer.

Application of this idea to control traffic:


Our Intelligent Traffic Light System is capable of changing the priority level of the roads according to their traffic level. To measure the traffic level we have several mechanisms:

- Image processing
- Pressure sensors, which give a reading when the pressure is changed by vehicles
- Inductive loops

From the above methods we choose the inductive loop. It is built on the concept of the inductance change of a coil when a metal object comes closer. When you send electrical current through a wire, it generates a magnetic field; for a coil this electromagnetic field is strong. You can change the inductance of the coil and change the electromagnetic flux by introducing additional conductive materials into the loop's magnetic field. This is what happens when a car pulls up to the intersection: the huge mass of metal that makes up the car alters the magnetic field around the loop, changing its inductance.

The above is the case study that we conducted. The red colored lane indicates the heavy traffic area, and the green lane indicates the free path without any traffic. The vehicle and the destination are indicated above. At the first junction before the main junction, we are going to place a board that informs the driver about the status of the traffic and also instructs him whether or not to take a diversion. This allows the commuter to reach his destination as fast as possible. To achieve this we need inductive loop detectors placed beneath the road to detect the vehicle density.

Working:
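A minimal sketch of the priority change described above: the green time in a fixed signal cycle is split in proportion to the loop-measured vehicle counts, so heavier lanes get longer greens. The cycle length and minimum green time are illustrative assumptions.

```python
def green_times(counts, cycle_s: float = 120.0, min_green_s: float = 10.0):
    """Split a fixed cycle among lanes in proportion to vehicle counts,
    guaranteeing each lane a minimum green (constants are illustrative)."""
    n = len(counts)
    spare = cycle_s - n * min_green_s  # time left after minimum greens
    total = sum(counts)
    if total == 0:
        return [cycle_s / n] * n       # no traffic: share the cycle evenly
    return [min_green_s + spare * c / total for c in counts]
```

For example, with loop counts [30, 10, 10, 10] the busiest approach gets 50 s of a 120 s cycle and the others about 23 s each.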


So we have made a coil to develop the inductive loop, and we have a metering device to measure the voltage level change in the coil.

An inductive loop vehicle detector system consists of three components: a loop (preformed or saw-cut), a loop extension cable, and a detector. When installing or repairing an inductive loop system, the smallest detail can mean the difference between reliable detection and intermittent detection of vehicles. Therefore, attention to detail when installing or troubleshooting an inductive loop vehicle detection system is absolutely critical. The preformed or saw-cut loop is buried in the traffic lane. The loop is a continuous run of wire that enters and exits from the same point. The two ends of the loop wire are connected to the loop extension cable, which in turn connects to the vehicle detector. The detector powers the loop, causing a magnetic field in the loop area. The loop resonates at a constant frequency that the detector monitors. A base frequency is established when there is no vehicle over the loop. When a large metal object, such as a vehicle, moves over the loop, the resonant frequency increases. This increase in frequency is sensed and, depending on the design of the detector, forces a normally open relay to close. The relay will remain closed until the vehicle leaves the loop and the frequency returns to the base level. The relay can trigger any number of devices, such as an audio intercom system, a gate, a traffic light, etc.

In general, a compact car will cause a greater increase in frequency than a full-size car or truck. This occurs because the metal surfaces on the undercarriage of the vehicle are closer to the loop. Figures 3 and 4 illustrate how the undercarriage of a sports car is well within the magnetic field of the loop compared to a sports utility vehicle. Notice that the frequency change is greater with the smaller vehicle.
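The frequency-shift detection described above can be sketched numerically. The loop and lead-in cable form a tuned circuit with resonant frequency f = 1/(2π√(LC)), so a drop in inductance raises the frequency past a trigger threshold and "closes the relay". The component values and the 2% trigger ratio below are illustrative assumptions, not measured figures.

```python
import math

def resonant_freq(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of the loop tuned circuit: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

class LoopDetector:
    """Toy vehicle detector: closes a 'relay' when the monitored loop
    frequency rises a set fraction above the no-vehicle base frequency."""
    def __init__(self, base_hz: float, trigger_ratio: float = 1.02):
        self.trigger_hz = base_hz * trigger_ratio
        self.relay_closed = False

    def sample(self, freq_hz: float) -> bool:
        # vehicle over loop -> inductance drops -> frequency rises past trigger
        self.relay_closed = freq_hz >= self.trigger_hz
        return self.relay_closed
```

With an assumed 100 µH loop tuned by 0.1 µF, a vehicle that lowers the inductance to 95 µH raises the resonant frequency by about 2.6%, enough to close the relay in this sketch.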


Also, it is interesting to note that the frequency change is very consistent between two vehicles of the same make and model, so much so that a detector can almost be designed to determine the type of vehicle over the loop. There is a misconception that inductive loop vehicle detection is based on metal mass. This is simply not true. Detection is based on metal surface area, otherwise known as the skin effect. The greater the surface area of metal in the same plane as the loop, the greater the increase in frequency. For example, a one square foot piece of sheet metal positioned in the same plane as the loop has the same effect as a hunk of metal one foot square and one foot thick. Another way to illustrate the point is to take the same one square foot piece of sheet metal, which is easily detected when held in the same plane as the loop, and turn it perpendicular to the loop: it becomes impossible to detect. Keep this principle in mind when dealing with inductive loop detectors.

Some detectors provide PC diagnostics via a communication port on the detector. Diagnostic software gives you a visual picture of what is happening at the loop and will help you troubleshoot any problems you may experience during installation or in the future. Detectors with this feature are usually in the same price range as other detectors and can help you save time solving a detection problem. The PC software and cable are usually additional; however, keep in mind that if you have multiple installations you need only buy the software and cable setup once. Diagnostics software can also


help determine the depth and position of the loop in the pavement.

Mapping the traffic density and enabling GPS:

By using the data given by the induction loop detector, we are going to enable Google Maps to give updates of high-traffic-prone areas through GPS.

Alcohol prevention in Automobiles:

We see daily in newspapers, news channels, etc., that many people die in accidents due to alcohol consumption or negligence of the driver while driving. Our solution to this problem is "Alcohol Prevention in Automobiles". In this system, firstly, we will fix alcohol sensors (air breath analyzers) in the steering of the vehicle, near the dashboard, and on the back of the front seats, so we can easily detect alcohol consumption of the driver and passengers in cars, trucks, lorries, buses, etc. In bikes, we will place sensors near the speedometer to detect the alcohol. After detection, the system will give three warnings with a voice response saying "you have consumed alcohol, please stop the vehicle". If the driver does not comply, the car (or any vehicle) stops in a safe place on the other side of the road by observing the traffic conjecture on the screen in the car; or, if we have Wi-Fi on our mobile, then by clicking on the Traffic Conjecture App the driver can see the traffic ahead. After the car has stopped, the front wheels of the vehicle will be locked, the car engine will be switched OFF, and the position of the vehicle (through Google Maps) will be given as a complaint to the nearest police station through a wide
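The three-warning logic described above can be sketched as a simple controller. The sensor threshold, reading scale, and action names below are illustrative assumptions, not values from the paper.

```python
def alcohol_controller(readings, limit: float = 0.08, warnings_allowed: int = 3):
    """Toy version of the three-warning alcohol prevention logic:
    below the limit the car drives normally; over the limit the system
    issues voice warnings, then stops the car and locks the front wheels."""
    warnings = 0
    actions = []
    for reading in readings:
        if reading < limit:
            actions.append("drive")
        elif warnings < warnings_allowed:
            warnings += 1
            actions.append("voice_warning")
        else:
            actions.append("stop_and_lock")
    return actions
```

In the full system described in the paper, the "stop_and_lock" action would also switch off the engine and send the GPS position to the nearest police station.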


area differential global positioning system (WADGPS). We will also place a camera in front of the driver's seat, so if the driver falls asleep or becomes less alert, the system first warns and then stops the car automatically after the warning.

Eco-Friendly Autonomous Car:

We are planning to make our autonomous car a more eco-friendly one. For this, we are going to place piezo-electric devices on roads, zebra crossings, footpaths, etc., and thereby generate electric energy from the mechanical stress applied by pedestrians and vehicles travelling on the roads. We are going to use this generated electric energy as a power supply for the induction loops, GPS system, traffic signalling system, street lighting system, traffic conjecture display screens, etc. Thus we are using renewable energy.

Conclusion:

At last, my aim is to develop an eco-friendly autonomous car with traffic conjecture using induction loops and an alcohol prevention system. Being an electronics student, I did the maximum I could for the automobile field. Hence it is a "LAUNCH OF A NEW ERA OF AUTOMOBILE".

Attachments:

Videos:
1. Induction Loops Practical Functioning Video on Roads.
2. Future Alcoholic Drunk Drive Prevention System Video.
3. Power Point Presentation of Automobile Innovation.
4. Autonomous Car Project Abstract.
5. Power Point Presentation of Autonomous Car.

References:
1. http://www.fhwa.dot.gov/publications/publiroads/98septoct/loop.cfm
2. http://auto.howstuffworks.com/cardriving-safety/safety-regulatorydevices/red-light-camera1.htm
3. http://www3.telus.net/chemelec/Projects/Loop-Detector/LoopDetector.htm
4. www.wikipedia.org


A REVIEW ON MODIFIED ANTI-FORENSIC TECHNIQUE FOR REMOVING DETECTABLE TRACES FROM DIGITAL IMAGES

1M.GOWTHAM RAJU, 2N.PUSHPALATHA
1M.Tech (DECS) Student, Department of ECE, AITS
2Assistant Professor, Department of ECE, AITS
Annamacharya Institute of Technology and Sciences, Tirupati, India-517520
1mgr434@gmail.com, 2pushpalatha_nainaru@rediffmail.com

Abstract: The increasing attractiveness of and trust in digital photography has given rise to new acceptability issues in the field of image forensics. There are many advantages to using digital images. Digital cameras produce immediate images, allowing the photographer to view them and decide at once whether the photographs are sufficient, without the delay of waiting for film and prints to be processed. They do not require external developing or reproduction, and digital images are easily stored. No conventional "original image" is produced as with a traditional camera, so when forensic researchers analyze the images they do not have access to an original for comparison. Fraud through conventional photography is relatively difficult, requiring technical expertise, whereas a significant feature of digital photography is the ease and decreased cost of altering the image. Manipulation of digital images is simpler: with some fundamental software, a digitally recorded image can easily be edited. Most alterations involve borrowing, cloning, removal, and switching parts of a digital image. A number of techniques are available to verify the authenticity of images, but the number of image tamperings is also increasing. Forensic researchers need to find new techniques to detect tampering; for this purpose they have to study new anti-forensic techniques and solutions for them. In this paper a new anti-forensic technique is considered, which is capable of removing the evidence of compression and filtering. It is done by adding specially designed noise, called tailored noise, to the image after processing. This method can be used to cover the processing history of an image and can also be used to remove the signature traces of filtering.

Keywords: Digital forensics, JPEG compression, image coefficients, image history, filtering, quantization, DCT coefficients.

Introduction

Digital images have become very popular for transferring visual information, and there are many advantages to using them instead of traditional camera film. Digital cameras produce instant images which can be viewed without the delay of waiting for film processing; they do not require external development, and the images can be stored easily. Images can be processed in different ways: in some cases as JPEG images, in others in bitmap format. When an image is used in bitmap format, it may carry no information about its past processing; to learn about that past processing, it is desirable to examine the artifacts of the image. Forensic techniques are capable of recovering this earlier processing information. Forensic researchers therefore need to examine the authenticity of images to find how much trust can be put in these techniques, which can also be used to find their drawbacks. A person with good knowledge of image processing can perform undetectable manipulation, so it is also desirable to find the weaknesses of these techniques. For this purpose, research has to develop both forensic and anti-forensic techniques to understand the weaknesses.

Consider the situation where someone has already tried to remove the artifacts of compression. Forensic experts can apply existing techniques such as quantization estimation, which is useful when the analysis receives the compression details and the quantization table used for processing and compression. Existing techniques like detection of blocking signatures and estimation of the quantization table reveal mismatches and forgeries in JPEG blocks by finding the evidence of compression. To address this problem of image forensics, research has to develop tools that are capable of fooling the existing methodologies. Even though the existing methods have advantages, they have some limitations too. The main drawback of these techniques is that they do not account for the risk that a new technique may be designed and used to conceal the traces of manipulations.

As mentioned earlier, it may be possible for an image forger to generate undetectable compression and other image forgeries. Here a modified anti-forensic approach is presented which is capable of hiding the traces of earlier processing, including both compression and filtering. The concept is that adding specially designed noise to the image's blocks will help to hide the proof of tampering.


1. RELATED PROJECT WORK:

1.1. ANTI-FORENSICS OF DIGITAL IMAGE COMPRESSION:

As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed. Among the most successful of these are techniques that make use of an image's compression history and its associated compression fingerprints. Anti-forensic techniques are capable of fooling forensic algorithms. This work presents a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image. The technique first estimates the distribution of the image's transform coefficients before compression, then adds anti-forensic dither to the transform coefficients of the compressed image so that their distribution matches the estimated one. Within this framework, anti-forensic techniques are specifically targeted at erasing the fingerprints left by both JPEG and wavelet-based coders.

1.1.1. ANTI-FORENSIC FRAMEWORK:

All image compression techniques are sub-band coders, which are themselves a subset of transform coders. Transform coders compress a signal by mathematically transforming it and quantizing the transform coefficients. Sub-band coders are transform coders that decompose the signal into different frequency bands, by applying a two-dimensional invertible transform, such as the DCT, to an image as a whole or to a series of disjoint segments. Each quantized transform coefficient value Y can be directly related to its corresponding original transform coefficient value X by the equation

    Y = q_k,  if b_k <= X < b_(k+1)    (1)

The segment length is equal to the length of the quantization interval, and the probability that the quantized coefficient value is q_k is given by

    P(Y = q_k) = integral from b_k to b_(k+1) of P(X = x) dx    (2)

We anti-forensically modify each quantized transform coefficient by adding specially designed noise, which we refer to as anti-forensic dither, to its value according to the equation Z = Y + D. The anti-forensic dither's distribution is given by the formula

    P(D = d) = P(X = q_k + d) / P(b_k <= X < b_(k+1))    (3)

1.1.2. JPEG ANTI-FORENSICS:

We give a brief overview of JPEG compression and then present the anti-forensic technique designed to remove compression fingerprints from the DCT coefficients of a JPEG compressed image. For a grayscale image, JPEG compression begins by segmenting the image into a series of non-overlapping 8x8 pixel blocks and then computing the two-dimensional DCT of each block. Each coefficient value is divided by its corresponding entry in a predetermined quantization matrix, and the result is rounded to the nearest integer. A color image is first transformed from the RGB to the YCbCr color space; after this has been performed, compression continues as if each color layer were an independent grayscale image.

1.1.3. DCT COEFFICIENT QUANTIZATION FINGERPRINT REMOVAL:

Following the anti-forensic framework outlined above, we begin by modeling the distribution of coefficient values within a particular AC sub-band using the Laplace distribution:

    P(X = x) = (lambda/2) exp(-lambda |x|)    (4)

Using this model and the quantization rule described above, the coefficient values of an AC sub-band of DCT coefficients within a JPEG compressed image will be distributed according to the discrete Laplace distribution:

    P(Y = y) = 1 - exp(-lambda Q_(i,j) / 2),              if y = 0
               exp(-lambda |y|) sinh(lambda Q_(i,j) / 2), if y = k Q_(i,j), k != 0
               0,                                          otherwise    (5)

Fig1: Anti-forensics of digital image compression

If the image was divided into segments during compression, another compression fingerprint may arise because of the lossy nature of the coder.
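The dither step of equation (3) under the Laplace model of equation (4) can be sketched directly: each quantized coefficient Y receives noise D drawn from the Laplace distribution restricted to Y's quantization interval, so Z = Y + D follows the pre-compression model while still quantizing back to Y. This is an illustrative Python sketch; in practice lambda would be estimated per AC sub-band from the compressed image.

```python
import math
import random

def quantize(x: float, q: float) -> float:
    """JPEG-style quantize/dequantize: round to the nearest multiple of q."""
    return q * round(x / q)

def antiforensic_dither(y: float, q: float, lam: float) -> float:
    """Sample dither D so that Z = Y + D follows the Laplace model
    P(X = x) ~ (lam/2) exp(-lam |x|) restricted to Y's quantization
    interval [y - q/2, y + q/2) -- a sketch of eq. (3)."""
    lo, hi = y - q / 2.0, y + q / 2.0

    def cdf(x: float) -> float:
        # Laplace CDF: 0.5*exp(lam*x) for x < 0, else 1 - 0.5*exp(-lam*x)
        return 0.5 * math.exp(lam * x) if x < 0 else 1.0 - 0.5 * math.exp(-lam * x)

    # inverse-CDF sampling, restricted to the quantization interval
    u = cdf(lo) + random.random() * (cdf(hi) - cdf(lo))
    x = math.log(2.0 * u) / lam if u < 0.5 else -math.log(2.0 * (1.0 - u)) / lam
    return x - y
```

Because the dither never leaves the quantization interval, re-quantizing the modified coefficient Z returns the original Y, while the comb-shaped histogram fingerprint of quantization is smoothed out.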


Fig2: Histogram of perturbed DCT coefficient values from a DCT sub-band in which all coefficients were quantized to zero during JPEG compression.

Wavelet-Based Compression Overview:

Though several wavelet-based image compression techniques exist, such as SPIHT, EZW, and, most popularly, JPEG 2000, they all operate in a similar fashion and leave behind similar compression fingerprints. JPEG 2000 begins compression by first segmenting an image into fixed-size non-overlapping rectangular blocks known as "tiles", while the others operate on the image as a whole. The two-dimensional DWT of the image, or of each image tile, is computed, producing sub-bands of wavelet coefficients. Because these sub-bands correspond to either high- or low-frequency DWT coefficients in each spatial dimension, the four sub-bands are referred to using the notation LL, LH, HL, and HH. Although the image compression techniques achieve lossy compression through different processes, they each introduce DWT coefficient quantization fingerprints into an image: the quantization and dequantization process forces the DWT coefficients of a compressed image to multiples of their respective sub-band step sizes.

Fig3: Top: Histogram of wavelet coefficients from an uncompressed image. Bottom: Wavelet coefficients from the same image after SPIHT compression.

As a result, only the n most significant bits of each DWT coefficient are retained. This is equivalent to applying the quantization rule, where X is a DWT coefficient from an uncompressed image and Y is the corresponding DWT coefficient in its SPIHT compressed counterpart.

Fig4: Top: JPEG compressed image using a given quality factor. Bottom: Anti-forensically modified version of the same image.

2. UNDETECTABLE IMAGE TAMPERING THROUGH JPEG COMPRESSION

A number of digital image forensic techniques have been developed which are capable of identifying an image's origin, tracing its processing history, and detecting image forgeries. Though these techniques can identify standard image manipulation, they do not address the possibility that anti-forensic operations may be designed and used to hide evidence of image tampering. We propose an anti-forensic operation capable of removing blocking artifacts from a previously JPEG compressed image. We can show that, with the help of this operation along with another anti-forensic operation, we are able to fool forensic


methods designed to detect evidence of JPEG compression in decoded images, determine an image's origin, detect double JPEG compression, and identify cut-and-paste image forgeries. The ease of creating digital image forgeries has resulted in an environment in which the authenticity of digital images cannot be trusted. Many digital forensic techniques rely on detecting artifacts left in an image by JPEG compression. Because most digital cameras make use of proprietary quantization tables, an image's compression history can be used to help identify the camera used to capture it. Though these techniques are quite adept at detecting standard image manipulations, they do not account for the possibility that an anti-forensic operation designed to hide traces of image manipulation may be applied to an image. Recent work has shown that such operations can be constructed to successfully fool existing image forensic techniques.

Background: When an image is subjected to JPEG compression, it is first segmented into 8x8 pixel blocks. The DCT of each block is computed, and the resulting set of DCT coefficients is quantized by dividing each coefficient by its corresponding entry in a quantization table and rounding the result to the nearest integer. The set of quantized coefficients is then reordered into a single bit stream and losslessly encoded. Decompression begins by decoding the bit stream of quantized DCT coefficients and reforming them into a set of 8x8 pixel blocks. As a result, two forensically significant artifacts are left in an image by JPEG compression: DCT coefficient quantization artifacts and blocking artifacts. Blocking artifacts are the discontinuities that occur across 8x8 pixel block boundaries because of JPEG's lossy nature. We describe an anti-forensic technique capable of removing DCT coefficient artifacts from a previously compressed image.


A measure of blocking artifact strength is obtained by calculating the difference between the histograms of the Z′ and Z″ values, denoted H1 and H2 respectively, using the equation K = Σ_n |H1(Z′ = n) − H2(Z″ = n)|. Values of K lying above a fixed detection threshold indicate the presence of blocking artifacts.
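The histogram-difference measure K can be computed directly; the helper below is a hypothetical implementation, with H1 and H2 realized as counters over the observed Z′ and Z″ values:

```python
from collections import Counter

def blockiness_K(z1_values, z2_values):
    """K = sum_n |H1(Z' = n) - H2(Z'' = n)| over all observed values n."""
    h1, h2 = Counter(z1_values), Counter(z2_values)
    return sum(abs(h1[n] - h2[n]) for n in set(h1) | set(h2))

assert blockiness_K([0, 1, 1, 2], [0, 1, 1, 2]) == 0   # identical histograms
assert blockiness_K([0, 0, 1], [0, 1, 1]) == 2         # differing histograms
```

In practice K would be compared against the fixed detection threshold mentioned above.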

Fig. 5: Histograms of DCT coefficients from an image before compression (top left), after JPEG compression (top right), and after the addition of anti-forensic dither to the coefficients of the JPEG-compressed image.

2.1. ANTI-FORENSIC DEBLOCKING OPERATION: JPEG blocking artifacts must be removed from an image after anti-forensic dither has been applied to its DCT coefficients. Although a number of deblocking algorithms have been proposed since the introduction of JPEG compression, these are ill-suited for anti-forensic purposes: to be successful, a deblocking operation must remove all visual and statistical traces of blocking artifacts. We found that lightly smoothing an image and then adding low-power white Gaussian noise is able to remove statistical traces of JPEG blocking artifacts without causing the DCT coefficient distribution of the anti-forensically deblocked image to deviate from the Laplace distribution.

2.2. IMAGE TAMPERING THROUGH ANTI-FORENSICS: We show that anti-forensic dither and our proposed anti-forensic deblocking operation can be used to deceive several existing image forensic algorithms that rely on detecting JPEG compression artifacts.
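A minimal sketch of the idea above, light smoothing followed by low-power white Gaussian noise, applied to a single row of pixels; the 3-tap mean filter and the noise_sigma value are our illustrative assumptions, not the paper's exact operation:

```python
import random

def antiforensic_deblock(pixels, noise_sigma=1.0):
    """Light 3-tap mean smoothing, then low-power white Gaussian noise."""
    n = len(pixels)
    smoothed = [pixels[0]] + [
        (pixels[i - 1] + pixels[i] + pixels[i + 1]) / 3.0
        for i in range(1, n - 1)
    ] + [pixels[-1]]
    return [v + random.gauss(0.0, noise_sigma) for v in smoothed]

row = [10, 10, 10, 90, 90, 90]      # a hard block-boundary step
out = antiforensic_deblock(row)
assert len(out) == len(row)
```

The smoothing softens the block-boundary step while the noise breaks up the remaining statistical regularity.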




Fig. 7: Histograms of the (3, 3) DCT coefficients from an image JPEG compressed once using a quality factor of 85 (left); the image after being double JPEG compressed using a quality factor of 75 followed by 85 (center); and the image after being JPEG compressed using a quality factor of 75, followed by the application of anti-forensic dither, then recompressed using a quality factor of 85 (right).

3. PROPOSED METHOD:

To the best of our knowledge, interest in the field of anti-forensics has increased. Most anti-forensic methods aim to determine the process by which an image was compressed; such methods include JPEG detection and quantization table estimation. In this anti-forensic setting, the JPEG compression history of an image also carries information about the camera used to produce it.

Fig. 6: Result of the proposed anti-forensic deblocking algorithm applied to a typical image after it has been JPEG compressed using a quality factor of 90 (far left), 70 (center left), 30 (center right), and 10 (far right), followed by the addition of anti-forensic dither to its DCT coefficients.

Compression fingerprints can also be used to discover forged areas within a picture, and in the case of image compression this technique has likewise been developed to serve as evidence of image manipulation. This anti-forensic technique therefore addresses the traces left by compression and other processing.

2.3. Hiding Traces of Double JPEG Compression: An image forger may wish to remove evidence of recompression from a previously JPEG-compressed image: such a forger alters a previously compressed image and then saves the altered image as a JPEG. Several methods have been proposed to detect the recompression of a JPEG-compressed image, commonly known as double JPEG compression.

2.4. Falsifying an Image's Origin: In some scenarios, an image forger may wish to falsify the origin of a digital image. Simply altering the metadata tags associated with an image's originating device is insufficient to accomplish this, because several origin-identifying features are intrinsically contained within a digital image. Instead, anti-forensic dither is applied to the image's DCT coefficients, and the image is then re-compressed using the quantization tables associated with another device. By processing an image in this manner, we are able to insert the quantization signature associated with a different camera into the image while preventing the occurrence of double JPEG compression artifacts that might alert forensic investigators to such a forgery.
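The double JPEG compression fingerprint mentioned above arises because quantizing with step q1 and then q2 populates the final bins unevenly. A small integer sketch (the steps 5 and 3 are arbitrary illustrative choices, not real JPEG table entries):

```python
def double_quantize(x, q1, q2):
    """Quantize/dequantize with step q1, then again with step q2."""
    return q2 * round((q1 * round(x / q1)) / q2)

hist = {}
for x in range(-500, 501):
    v = double_quantize(x, 5, 3)
    hist[v] = hist.get(v, 0) + 1

# Interior output bins (multiples of 3) are either empty or over-full --
# the periodic artifact that double-JPEG detectors look for.
counts = [hist.get(3 * k, 0) for k in range(1, 20)]
assert 0 in counts and max(counts) == 5
```

A singly quantized signal would populate the interior bins uniformly, which is why the uneven pattern betrays recompression and why anti-forensic dither is applied before the second compression.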

4. CONCLUSION:

Of the two existing methods above, the first, the anti-forensic method for digital image compression, responds to our increasing reliance on digital images to communicate; it is an anti-forensic method for fooling forensic algorithms. The technique is designed to remove forensically significant indicators of compression from an image. We first develop a framework for designing anti-forensic techniques that remove compression fingerprints from an image's transform coefficients, by adding anti-forensic dither to the transform coefficients of a compressed image so that their distribution matches an estimate of the distribution before compression. Using this framework, we specifically target the compression fingerprints left by both JPEG and wavelet-based coders. These techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality. The second method, undetectable image tampering through JPEG compression anti-forensics, builds on digital forensic techniques that are capable of identifying an image's origin and identifying standard image manipulations; its anti-forensic technique is capable of removing blocking artifacts from a previously JPEG-compressed image, and with it we are able to fool forensic methods designed to detect evidence of JPEG compression in decoded images and to determine an image's origin. Comparing the two existing methods, the anti-forensic method of removing detectable traces from digital images is the more advanced technique: it increases the attractiveness of, and trust in, digital images, and it is capable of removing evidence of compression and filtering from a digital image's processing history. By adding tailored noise during image processing, we can find out where an image has been tampered with or compressed and whether it is fake or original; this can be used in medical departments as well as in police department cases. The method can be used to cover the history of processing, and it can also be used to remove the signature traces of filtering.




FPGA Implementation of Fault Tolerant Embedded RAM using BISR Technique

Swathi Karumuri, Suresh Chowdary Kavuri
Dept. of VLSI and Embedded Systems, Dept. of Electronics and Communication Engineering, JNTU Hyderabad, India
swathisubadra@gmail.com, kavurisureshchowdary@gmail.com

Abstract-- Embedded memories with fault-tolerant capability can effectively increase the yield. The main aim of this project is to design a fault-tolerant memory that is capable of detecting stuck-at faults and repairing them automatically. It uses BISR with a redundancy scheme to achieve this. The architecture consists of a multiplexer (MUX), a memory (circuit under test, CUT), a built-in self-test (BIST) module, and a built-in self-diagnosis (BISD) module. Instead of adding spare rows and columns, the user can select normal words as redundancy to repair the faults, which reduces the area. High repairing speed is possible with one-to-one mapping between faults and redundancy locations. The design provides a test mode and an access mode to RAM users. Many repairing schemes have been implemented in which repairing is not possible if the redundancy locations themselves have faults; the proposed architecture tracks the number of redundant locations and, based on that, has the advantage of repairing faults in redundancy locations as well. The proposed model is simulated and synthesized using Xilinx and tested on a Spartan-3E development board.

Keywords-- Built-in self-diagnosis (BISD), built-in self-repair (BISR), built-in self-test (BIST), FPGA, VHDL.

I. INTRODUCTION

With improvements in VLSI technology, more and more components are fabricated onto a single chip. It is estimated that the percentage of area occupied by embedded memories on an SOC is more than 65% and will rise to 70% by 2017 [1]. The increasing use of large embedded memories in SOCs requires automatic reconfiguration to improve yield and performance. Memory fabrication yield is limited by manufacturing defects, alignment defects, assembly faults, randomly generated defect patterns, random leakage defects, and other faults and defects [2]. To detect defects in a memory, many fault models and test algorithms have been developed [3], and most of these algorithms remain in use. To increase the yield and efficiency of embedded memories, many redundancy mechanisms have been proposed [4]-[7]. In [5]-[6], both redundant rows and columns are incorporated into the memory array. Repairing the memory by adding spare words, rows, and columns to the word-oriented memory core as redundancy is proposed in [7]. All these redundancy mechanisms increase the area and design complexity of embedded memories. A new redundancy mechanism is proposed in this paper to solve the problem: some normal words in embedded memories can be used as redundancy instead of adding extra rows and columns. It is necessary to test the memory before using redundancy mechanisms to repair the faults. In the 1970s, the circuit under test was tested by external equipment (ATE). BIST-controlled DFT circuitry is more efficient, time-saving, and lower in cost than testing controlled by an external tester (ATE) [8]. However, BIST does not repair the faults. BISR techniques test embedded memories, save the fault addresses, and replace them with redundancy. Reference [5] presents a multiple-redundancy scheme, and [9] proposes a BISR strategy applying two serial redundancy analysis (RA) stages. All the previous BISR techniques have the ability to repair memories, but they cannot avoid storing fault addresses more than once. Reference [10] proposes a solution to this, but it cannot repair redundant-location faults. This paper proposes an efficient strategy that can repair redundancy faults as well. The rest of the paper is organized as follows: Section II briefs different fault models, March algorithms, and BIST; Section III introduces the proposed BISR technique; Section IV shows the experimental results; Section V concludes the paper; and Section VI gives the future work.

II. FAULT MODELS, MARCH ALGORITHMS AND BIST

A fault model is a systematic and precise representation of physical faults in a form suitable for simulation and test generation [11]. The different RAM faults are as follows and can be referred to in [12]:
- AF: Address Fault
- ADOF: Address Decoder Open Fault
- CF: Coupling Fault
- DRF: Data Retention Fault
- SAF: Stuck-at Fault
- SOF: Stuck-Open Fault
- TF: Transition Fault

Traditional memory test algorithms are of two types: either simple and fast but with poor fault coverage, or with good fault coverage but complex and slow. Because of this imbalance, these algorithms are losing popularity; their details and a comparison can be found in [13]-[14]. An efficient memory test should provide the best fault coverage in the shortest test time. March tests are the most commonly used: they have good fault coverage and short test time. To verify whether a given memory cell is good, it is necessary to conduct a sequence of write (W) and read (R) operations on the cell; the actual number of read/write operations and their order depend on the target fault model. A March element is a finite sequence of read/write operations applied to a cell in memory before proceeding to the next cell, with the next cell address taken in ascending or descending order. A comparison of different March algorithms is tabulated in Table I [15]. The March C- algorithm has better fault




coverage, fewer operations, and a shorter test length: it has 6 elements and 10 operations. It is therefore used in this paper to test the memory. The steps of the March C- algorithm are as follows.

TABLE I MARCH ALGORITHMS AND FAULT COVERAGE

1 up - (W0)
2 up - (R0, W1)
3 up - (R1, W0)
4 down - (R0, W1)
5 down - (R1, W0)
6 down - (R0)

In the above steps, W represents a write, R represents a read, "up" means executing the RAM addresses in ascending order, and "down" in descending order. W0(1) represents writing 0(1) into the memory; R0(1) represents the expected data (0 or 1) from the memory in a read operation. The BIST module contains a test pattern generator and an output response analyzer (ORA). The March C- algorithm is used to generate the test patterns, and a comparator is used as the ORA. The output from the memory in a read operation is compared with the expected data of the corresponding March element, which indicates whether there are any faults in the memory. It also indicates that the memory test is done by setting test_done to 1.

III. PROPOSED BISR TECHNIQUE

A. Proposed Redundancy Mechanism

The proposed architecture is flexible: some normal words in the RAM can be selected as redundancy if the RAM needs to repair itself. To distinguish them from the normal ones, we name these words Normal-Redundant words. Take the 64x4 RAM shown in Fig. 1 as an example: there are 58 normal words and 6 Normal-Redundant words. When repairing is not used, the Normal-Redundant words are accessed as normal ones; when there are faults in normal words, the Normal-Redundant words can be used to replace them. This kind of selectable redundancy architecture increases efficiency and saves area.

Fig. 1 Proposed redundancy mechanism in RAM

B. Proposed BISR Architecture

The architecture of the proposed BISR is shown in Fig. 2. It consists of three parts: a BIST module, a MUX module, and a BISD module. The BIST module produces test inputs to the memory using the March C- algorithm. It detects failures in the memory by comparing the RAM output data with the expected data of the March C- algorithm (compare_q = 1 for faulty addresses), and it asserts test_done to 1 when testing is completed. The BISD module stores the faulty addresses in Fault_A_Mem and maintains a counter of the number of faulty addresses. When the count exceeds the maximum redundancy limit, it activates the overflow signal; the counter also indicates the number of faults in redundant locations. When repairing is activated, faulty addresses are replaced with redundant addresses. The MUX module controls the access between test mode and access/user mode: in test mode (test_h = 1) BIST generates the inputs to the memory, and in access mode (test_h = 0) the inputs are equal to the system (user) inputs.

Fig. 2 Architecture of proposed BISR

C. Built-in Self-Repair Procedure

Fig. 3 shows the flowchart of the testing and repairing mechanism used. The BISR starts by resetting the system (rst_l = 0). After that, the system is tested to find the faults in the memory. In this phase the BIST and BISD modules work in parallel: BIST detects a fault address and sends it to the BISD, which checks it against the already stored addresses. If the faulty address has not been stored, it is stored in Fault_A_Mem and the counter is incremented; otherwise it is ignored. After the test phase there are two possibilities. If there are no faults, or if there are so many faults that they overflow the redundancy capacity, the BISR goes into the COMPLETE phase. If there are faults in the memory but no overflow, the BISR goes into the REPAIR and TEST phase. As in the TEST phase, BIST and BISD work in parallel: BISD replaces the faulty addresses with redundant ones, and it can repair faults in redundancy locations as well. The BIST module then tests the memory again, with two possible results: repair pass or repair fail. Fig. 4 shows the storing and repairing of fault addresses.

Fig. 3 Flow chart of testing and repairing mechanism

Fig. 4 Flow of storing fault addresses & repairing mechanism

D. BISR Features

The first feature is that the BISR is efficient. Normal-Redundant words can be used as ordinary words when repairing is not activated, which saves chip area, and each fault address is stored only once. The March C- algorithm has 6 elements, so each address is read five times in one test and some faulty addresses are detected more than once: a stuck-at-1 fault, for example, is detected in the 2nd, 4th, and 6th elements, but its address should not be stored three times. One-to-one mapping is therefore used in the BISD module. The second feature is that the BISR is flexible: in access mode the user can decide whether to use repairing or not, and if repairing is activated the redundant locations are used for repairs. Table II shows the operating modes of the RAM. The third feature is that its repairing speed is high: with one-to-one mapping it can replace a faulty location's address with the corresponding redundant address very quickly.

TABLE II OPERATING MODES OF RAM

E. Proposed Repairing Mechanism to Repair Faults in Redundant Locations

In order to repair the faults, there should not be any faults in the redundant locations themselves. After the test phase, Fault_A_Mem contains all the faulty addresses. If it contains only normal-location faults, the mapping shown in Fig. 4 works well. If Fault_A_Mem contains redundant-location addresses, then mapping a faulty address to a corresponding faulty redundant location produces a faulty output again. To solve this problem, the repairing mechanism is modified slightly, as shown in Fig. 5, where there is one redundant fault and four normal faults.

Fig. 5 Mapping procedure in presence of redundant fault

Fault address 3 is mapped to redundant address 5, since redundant address 3 is in Fault_A_Mem. Fig. 6 shows the flow chart. If there are redundant faults, the resulting redundant address is checked against Fault_A_Mem. If there are unmapped redundant locations, the mapping is redirected to one of them, provided the new redundant location is fault-free; if it is faulty again, the next unmapped redundant location is considered, and checking continues until a fault-free redundant location is found. Otherwise, repairing fails.
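The test-and-repair flow described above can be sketched in software; the following is a toy behavioral model, not the paper's RTL, and the memory size, fault locations, and helper names are illustrative:

```python
class FaultyRAM:
    """One-bit-wide RAM with injected stuck-at faults (addr -> stuck value)."""
    def __init__(self, size, stuck=None):
        self.cells = [0] * size
        self.stuck = stuck or {}

    def write(self, addr, bit):
        if addr not in self.stuck:          # writes to stuck cells have no effect
            self.cells[addr] = bit

    def read(self, addr):
        return self.stuck.get(addr, self.cells[addr])

def march_c_minus(ram, size):
    """up(W0); up(R0,W1); up(R1,W0); down(R0,W1); down(R1,W0); down(R0).
    Returns the set of faulty addresses (each stored only once)."""
    faults = set()
    up, down = range(size), range(size - 1, -1, -1)
    for a in up:
        ram.write(a, 0)
    for order, expected, to_write in [(up, 0, 1), (up, 1, 0),
                                      (down, 0, 1), (down, 1, 0)]:
        for a in order:
            if ram.read(a) != expected:
                faults.add(a)
            ram.write(a, to_write)
    for a in down:
        if ram.read(a) != 0:
            faults.add(a)
    return faults

def build_map(faults, redundant):
    """One-to-one map from faulty normal words to fault-free redundant words."""
    spare = [r for r in redundant if r not in faults]   # skip faulty spares
    normal_faults = sorted(a for a in faults if a not in redundant)
    if len(normal_faults) > len(spare):
        return None                                     # overflow: repair fails
    return dict(zip(normal_faults, spare))

# 64-word RAM; words 58-63 are Normal-Redundant. Word 60 (a spare) is faulty.
ram = FaultyRAM(64, stuck={3: 1, 17: 0, 60: 1})
faults = march_c_minus(ram, 64)
assert faults == {3, 17, 60}
remap = build_map(faults, range(58, 64))
assert remap == {3: 58, 17: 59}          # faulty spare 60 was skipped

# Repaired access: faulty word 17 is now served by fault-free spare 59.
ram.write(remap[17], 1)
assert ram.read(remap[17]) == 1
```

Note how each faulty address enters the set only once even though March C- reads it several times, and how the mapping skips the faulty redundant word, mirroring the one-to-one mapping and redundant-fault handling described above.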




Fig. 6 Flow chart for repairing redundant and normal faults

IV. EXPERIMENTAL RESULTS

The proposed BISR is simulated and synthesized using Xilinx and checked on a Spartan-3E FPGA kit. Stuck-at-0 and stuck-at-1 faults are injected into the memory to verify the function. It is observed that, in the presence of faults, the data written to and the data read from a faulty location are not the same when repairing is not activated. When repairing is activated, it is observed that the faulty locations are repaired; it is also verified that faults in redundant locations are repaired. Fig. 7 shows the output when redundant and normal faults are repaired.

Fig. 7 Output showing repairing of redundant location faults (rounded) and normal faults

V. CONCLUSION

A fault-tolerant embedded RAM using the BISR technique has been presented in this paper. It uses selectable redundancy and is designed flexibly so that the user can select the operating modes. The BISD module avoids storing fault addresses more than once. A mechanism for repairing redundant-location faults has been proposed, and it is observed that it efficiently repairs redundant faults along with normal faults.

VI. FUTURE WORK

The proposed architecture is mainly focused on stuck-at faults in the memory using the March C- algorithm. There are many other fault models, such as address decoder faults and transition faults, and there are different test algorithms. Future work can therefore improve the proposed architecture to repair the above-mentioned faults in the memory using different test algorithms.

REFERENCES

[1] Farzad Zarrinfar, "Optimizing embedded memory for the latest ASIC and SOC design," Chip Design Tools, Technologies and Methodologies, Mentor Graphics, 2012.
[2] C. Stapper, A. McLaren, and M. Dreckman, "Yield model for productivity optimization of VLSI memory chips with redundancy and partially good product," IBM Journal of Research and Development, vol. 24, no. 3, pp. 398-409, May 1980.
[3] A. J. van de Goor, "Testing Semiconductor Memories: Theory and Practice," 1999, ISBN 90-804276-1-6.
[4] P. Mazumder and Y. S. Jih, "A new built-in self-repair approach to VLSI memory yield enhancement by using neural-type circuits," IEEE Transactions on Computer-Aided Design, vol. 12, no. 1, Jan. 1993.
[5] W. K. Huang, Y. H. Shen, and F. Lombardi, "New approaches for the repairs of memories with redundancy by row/column deletion for yield enhancement," IEEE Transactions on Computer-Aided Design, vol. 9, no. 3, pp. 323-328, Mar. 1990.
[6] H. C. Kim, D. S. Yi, J. Y. Park, and C. H. Cho, "A BISR (built-in self-repair) circuit for embedded memory with multiple redundancies," Proc. 6th International Conference on VLSI and CAD, pp. 602-605, Oct. 1999.
[7] Shyue-Kung Lu, Chun-Lin Yang, and Han-Wen Lin, "Efficient BISR techniques for word-oriented embedded memories with hierarchical redundancy," IEEE ICIS-COMSAR, pp. 355-360, 2006.
[8] C. Stroud, "A Designer's Guide to Built-In Self-Test," Kluwer Academic Publishers, 2002.
[9] I. Kang, W. Jeong, and S. Kang, "High-efficiency memory BISR with two serial RA stages using spare memories," IET Electronics Letters, vol. 44, no. 8, pp. 515-517, Apr. 2008.
[10] Huamin Cao, Ming Liu, Hong Chen, Xiang Zheng, Cong Wang, and Zhihua Wang, "Efficient built-in self-repair strategy for embedded SRAM with selectable redundancy," Proc. 2nd International Conference on Consumer Electronics, Communications and Networks (CECNet), 2012.
[11] M. Sachdev, V. Zieren, and P. Janssen, "Defect detection with transient current testing and its potential for deep submicron CMOS ICs," IEEE International Test Conference, pp. 204-213, Oct. 1998.
[12] Mentor Graphics, "MBIST Architect Process Guide," Software Version 8.2009_3, Aug. 2009, pp. 113-116.
[13] Pinaki Mazumder and Kanad Chakraborty, "Testing and Testable Design of High-Density Random-Access Memories."
[14] Vonkyoung Kim and Tom Chen, "Assessing SRAM test coverage for sub-micron CMOS technologies," Proc. 15th IEEE VLSI Test Symposium, 1997.
[15] L. Dharma Teja, K. Kiruthika, and V. Priyanka Brahmaiah, "Built-in self-repair for embedded RAMs with efficient fault coverage using PMBIST," International Journal of Advances in Engineering & Technology, Nov. 2013.




ADAPTIVE MODULATION IN MIMO OFDM SYSTEM FOR 4G WIRELESS NETWORKS

POLI SURESH, Student, M.Tech ECE Dept., Siddartha Educational Academy Group of Institutions, Tirupati, Andhra Pradesh, India - 517505. psuresh2705@gmail.com

M. VINOD, Assistant Professor, ECE Dept., Siddartha Educational Academy Group of Institutions, Tirupati, Andhra Pradesh, India - 517505.

vinodmovidi@gmail.com

ABSTRACT

This paper presents the strategy of applying hybrid adaptation techniques in a MIMO OFDM system. With the rapid growth of digital communication in recent years, the need for high-speed data transmission has increased. The multiple-input multiple-output (MIMO) antenna architecture has the ability to increase the capacity and reliability of a wireless communication system. Orthogonal frequency division multiplexing (OFDM) is another popular technique in wireless communication, known for efficient high-speed transmission and robustness to frequency-selective channels. Therefore, the integration of the two technologies has the potential to meet the ever-growing demands of future communication systems. We first focus on OFDM, in which the bit error rate (BER) of multilevel quadrature amplitude modulation (M-QAM) in a flat Rayleigh fading channel for 128, 256, and 512 subcarriers is calculated, and channel estimation is performed using different algorithms, carried out in Matlab. The channel estimation of the MIMO OFDM system is computed using the minimum mean square error (MMSE) algorithm, and the actual values and estimation errors are compared using Matlab simulation. Feedback from the channel estimation is then used to apply hybrid adaptation techniques to improve the spectral efficiency and reduce the transmit power. This system is used in wireless LANs, i.e., IEEE 802.11a/g, HYPERLAN, etc.

Keywords: MIMO, OFDM, BER, M-QAM, MMSE, MIMO OFDM

I. INTRODUCTION

Physical limitations of the wireless medium create a technical challenge for reliable wireless communication. Techniques that improve spectral efficiency and overcome various channel impairments, such as signal fading and interference, have made an enormous contribution to the growth of wireless communications. Moreover, the need for high-speed wireless Internet has led to the demand for technologies delivering higher capacities and link reliability than achieved by current systems. Multiple-input multiple-output (MIMO) based communication systems are capable of accomplishing these objectives. The multiple-antenna configuration exploits the multipath effect to achieve additional spatial diversity.

However, the multipath effect also causes the negative effect of frequency selectivity of the channel. Orthogonal frequency division multiplexing (OFDM) is a promising multi-carrier modulation scheme that shows high spectral efficiency and robustness to frequency-selective channels. In OFDM, a frequency-selective channel is divided into a number of parallel frequency-flat sub-channels, thereby reducing the receiver signal processing of the system. The combination of OFDM and MIMO is a promising technique to achieve high bandwidth efficiency and system performance. In fact, MIMO-OFDM is being considered for the upcoming IEEE 802.11n standard, a developing standard for high-data-rate WLANs.

II. PAPER REVIEW

C. Poongodi, P. Ramya, and A. Shanmugam (2010), "BER Analysis of MIMO OFDM System using M-QAM over Rayleigh Fading Channel," Proceedings of the International Conference on Communication and Computational Intelligence, explains the BER of MIMO OFDM over the Rayleigh fading channel for M-QAM modulation, and also the estimation of the channel at high frequencies with conventional least squares (LS) and minimum mean square error (MMSE) estimation algorithms, carried out through MATLAB simulation. The performance of MIMO OFDM is evaluated on the basis of bit error rate (BER) and mean square error (MSE) level.

Dr. JayaKumari J. (2010), "MIMO OFDM for 4G Wireless Systems," International Journal of Engineering Science and Technology, vol. 2 (7), explains that OFDM may be combined with antenna arrays at the transmitter and receiver to increase the diversity gain and/or enhance the system capacity on time-variant and frequency-selective channels, resulting in a MIMO configuration. The simulation results show that this is a promising technology for next-generation broadband wireless systems, used in applications such as HYPERLAN, WLAN, and DSL.

Pallavi Bhatnagar, Jaikaran Singh, and Mukesh Tiwari (2011), "Performance of MIMO-OFDM System for Rayleigh Fading Channel" (ISSN 2221-8386), vol. 3, May 2011, explains the efficient simulation of a MIMO OFDM system with channel equalization. BPSK modulation is used to detect the behavior of the Rayleigh fading channels in the presence of additive white Gaussian noise, and the performance is evaluated. This paper shows that the addition of an equalizer reduces the BER and the channel output becomes more pronounced.

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

28

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

III. METHODOLOGY
Orthogonal frequency division multiplexing (OFDM) transforms a frequency-selective channel into a large set of individual frequency-non-selective narrowband channels, which suits a multiple-input multiple-output (MIMO) structure that requires a frequency-non-selective characteristic at each channel when the transmission rate is high enough to make the whole channel frequency selective. Therefore, a MIMO system employing OFDM, denoted MIMO-OFDM, is able to achieve high spectral efficiency. However, the adoption of multiple antenna elements at the transmitter for spatial transmission results in a superposition of multiple transmitted signals at the receiver, weighted by their corresponding multipath channels, and makes reception more difficult. This imposes a real challenge on how to design a practical system that can offer a true spectral efficiency improvement. If the channel is frequency selective, the received signals are distorted by ISI, which makes the detection of the transmitted signals difficult; OFDM has emerged as one of the most efficient ways to remove such ISI. The delay spread and Doppler spread are the most important factors to consider in characterizing the SISO system. In a MIMO system, which employs multiple antennas at the transmitter and/or receiver, the correlation between the transmitter and receiver antennas is an important aspect of the MIMO channel; it depends on the angle of arrival of each multipath component. In fact, the MIMO technique is an essential means of increasing capacity in the high-SNR regime, providing at most N spatial degrees of freedom. A typical multi-user MIMO communication environment is one in which multiple mobile stations are served by a single base station in a cellular system. Fig. 1 and Fig. 2 show the block diagrams of the MIMO OFDM transmitter and receiver. This system is a modification of OFDM which provides improved BER performance and is used in many applications such as DAB, DVB and DSL.
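The claim that MIMO provides up to N spatial degrees of freedom can be illustrated with the standard equal-power MIMO capacity formula C = log2 det(I + (SNR/N_T) H Hᴴ). The sketch below (NumPy, illustrative; the 2x2 size and 20 dB SNR are assumptions, not values from the paper) evaluates it for one random Rayleigh channel realization.

```python
import numpy as np

rng = np.random.default_rng(4)
Nt = Nr = 2                           # 2x2 MIMO (assumed for illustration)
# Rayleigh channel: i.i.d. complex Gaussian entries with unit average power
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
snr = 10 ** (20 / 10)                 # 20 dB total SNR, split equally across antennas

# Equal-power MIMO capacity in bits/s/Hz: log2 det(I + (SNR/Nt) H H^H)
C = np.log2(np.linalg.det(np.eye(Nr) + (snr / Nt) * H @ H.conj().T).real)
```

At high SNR, C grows roughly as min(Nt, Nr) x log2(SNR), which is the "N spatial degrees of freedom" statement in the text.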

Figure 2: Receiver block diagram of MIMO OFDM

Regardless of the type of MIMO system, most of the equalization/detection schemes require knowledge of the channel information in order to recover the signal. Hence, developing an efficient method of approximating the transmission channel between the transmitter and receiver is an essential component of the receiver design. In this section, channel estimation for MIMO-OFDM is explained from fundamentals. First, a general overview of classical estimation theory is provided; then OFDM channel estimation is briefly explained; in the next step, MIMO-OFDM channel estimation is investigated. The problem of MIMO-OFDM channel estimation in the frequency domain is addressed, and the solution to this problem is interpreted. Finally, the MMSE algorithm, as an alternative to decrease the computational complexity of LS channel estimation, is investigated.

3.1 CHANNEL ESTIMATION: The ultimate goal at the receiver is to recover the signal that was originally transmitted. A variety of equalization and signal detection techniques has been developed for MIMO systems, depending on whether it is a diversity or a spatial multiplexing system.

Figure 1: Transmitter block diagram of MIMO OFDM

3.2 LS CHANNEL ESTIMATION: The least-squares (LS) channel estimation method finds the channel estimate Ĥ in such a way that the following cost function is minimized:

J(Ĥ) = ‖Y − XĤ‖² = (Y − XĤ)^H (Y − XĤ) = Y^H Y − Y^H X Ĥ − Ĥ^H X^H Y + Ĥ^H X^H X Ĥ

By setting the derivative of the function with respect to Ĥ to zero,

∂J(Ĥ)/∂Ĥ = −2(X^H Y)* + 2(X^H X Ĥ)* = 0

we have X^H X Ĥ = X^H Y, which gives the solution to the LS channel estimation as

Ĥ_LS = (X^H X)⁻¹ X^H Y = X⁻¹ Y

Let us denote each component of the LS channel estimate by Ĥ_LS[k], k = 0, 1, 2, …, N−1. Since X is assumed to be diagonal due to the ICI-free condition, the LS channel estimate can be written for each subcarrier as

Ĥ_LS[k] = Y[k] / X[k],  k = 0, 1, 2, …, N−1

The mean square error (MSE) of this LS channel estimation is given as


MSE_LS = E{(H − Ĥ_LS)^H (H − Ĥ_LS)} = σ_z² / σ_x²

Note that this MSE is inversely proportional to the SNR σ_x²/σ_z², which implies that the LS estimate may be subject to noise enhancement, especially when the channel is in a deep null. Due to its simplicity, however, the LS method has been widely used for channel estimation.

3.3 MMSE CHANNEL ESTIMATION: Consider the LS solution Ĥ_LS = X⁻¹Y. Using a weight matrix W, define Ĥ = W Ĥ_LS, which corresponds to the MMSE estimate. The MSE of this channel estimate is given as

J(Ĥ) = E{‖e‖²} = E{‖H − Ĥ‖²}

The MMSE channel estimation method then finds a better (linear) estimate in terms of W in such a way that this MSE is minimized. The orthogonality principle states that the estimation error vector e = H − Ĥ is orthogonal to Ĥ_LS, such that

E{e Ĥ_LS^H} = E{(H − W Ĥ_LS) Ĥ_LS^H} = R_{HĤ} − W R_{ĤĤ} = 0

where R_{AB} = E{A B^H} is the cross-correlation matrix of the N×N matrices A and B, and Ĥ_LS is the LS channel estimate given as Ĥ_LS = X⁻¹Y. Solving this equation for W yields

W = R_{HĤ} R_{ĤĤ}⁻¹

where R_{ĤĤ} is the autocorrelation matrix of Ĥ_LS, given as

R_{ĤĤ} = E{Ĥ_LS Ĥ_LS^H} = R_{HH} + (σ_z²/σ_x²) I

and R_{HĤ} is the cross-correlation matrix between the true channel vector and the temporary channel estimate vector in the frequency domain. The MMSE channel estimation then follows as

Ĥ_MMSE = W Ĥ_LS = R_{HĤ} (R_{HH} + (σ_z²/σ_x²) I)⁻¹ Ĥ_LS

The elements of R_{HH} and R_{HĤ} are

E{h_{k,l} h*_{k′,l′}} = r_f[k − k′] r_t[l − l′]

where k and l denote the subcarrier (frequency) index and OFDM symbol (time) index, respectively. For an exponentially decreasing multipath PDP (power delay profile), the frequency-domain correlation r_f[k] is given as

r_f[k] = 1 / (1 + j2π k Δf τ_rms)

3.4 CHANNEL ESTIMATION OF MIMO-OFDM SYSTEM: The problem of channel estimation for OFDM has been well researched; however, the results are not directly applicable to MIMO-OFDM systems. In MIMO systems, the number of channels increases M·Nr-fold, where M and Nr are the numbers of transmit and receive antennas, respectively. This significantly increases the number of unknowns to be solved for. Using the MIMO-OFDM system model described earlier, the channel estimator for MIMO-OFDM can be developed; for doing so, a 2×2 MIMO-OFDM antenna configuration is assumed. Similar to the SISO case, the least-squares channel estimate for the (n, m)th antenna pair is given as

Ĥ_LS^(n,m) = (X^(n))⁻¹ Y^(m)

and the MMSE channel estimation for the nth transmit and mth receive antenna is given as

Ĥ_MMSE^(n,m) = R_{HĤ}^(n,m) (R_{ĤĤ}^(n,m))⁻¹ Ĥ_LS^(n,m),  where R_{ĤĤ}^(n,m) = R_{HH}^(n,m) + (σ_z²/σ_x²) I

where n = 1, 2, …, NT, m = 1, 2, …, NR, and NT, NR are the numbers of transmit and receive antennas, respectively. X^(n) is an N×N diagonal matrix whose diagonal elements correspond to the pilots of the nth transmit antenna, and Y^(m) is the length-N received vector at receive antenna m.

3.5 ADAPTIVE MODULATION: Adaptive modulation is a powerful technique for maximizing the data throughput of the subcarriers allocated to a user. It involves measuring the SNR of each subcarrier in the transmission, then selecting a modulation scheme that maximizes the spectral efficiency while maintaining an acceptable BER. This technique has been used in Asymmetric Digital Subscriber Line (ADSL) transmission to maximize the system throughput; ADSL uses OFDM transmission over copper telephone cables. The channel frequency response of copper cables is relatively constant, so reallocation of the modulation scheme does not need to be performed very often, and the benefit greatly outweighs the overhead required for measuring the channel response. Using adaptive modulation in a wireless environment is much more difficult, as the channel response and SNR can change very rapidly, requiring frequent updates to track these changes; for this reason, adaptive modulation has not been used extensively in wireless applications. The effectiveness of a multiuser OFDM radio system using adaptive subcarrier, bit and power allocation has been investigated.

3.6 ADAPTIVE MODULATOR AND DEMODULATOR: At the transmitter, the adaptive modulator block consists of different modulators that provide different modulation orders. The switching between these modulators depends on the instantaneous SNR. The goal of adaptive modulation is to choose the appropriate modulation mode for transmission depending on the instantaneous SNR, in order to achieve a good trade-off between spectral efficiency and overall BER.

IV. SIMULATION RESULTS

Table 1: Simulation parameters of the OFDM system
System: OFDM
FFT size: 128
Guard band size: 32
Symbol duration: 160
Channel: Rayleigh fading
No. of symbols used: 96
Modulation: QAM

Table 2: Simulation parameters for OFDM channel estimation
System: OFDM channel estimation
FFT size, guard band, OFDM symbol and no. of symbols used: Nfft = 128; Ng = Nfft/8; Nofdm = Nfft + Ng; Nsym = 100
Pilot spacing, numbers of pilots and data per OFDM symbol: Nps = 4; Np = Nfft/Nps; Nd = Nfft − Np
Number of bits per (modulated) symbol: Nbps = 4; M = 2^Nbps
Algorithm: LS and MMSE
Calculated: number of symbol errors and MSE value
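As an illustrative companion to the LS and MMSE estimators above, the following NumPy sketch estimates a Rayleigh channel on an all-pilot OFDM symbol with Nfft = 128 (as in Table 2). The 8-tap exponential power delay profile, the 20 dB pilot SNR, and the use of BPSK pilots on every subcarrier are assumptions made for the demonstration, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
Nfft, L, snr_db = 128, 8, 20                     # FFT size, channel taps, pilot SNR (assumed)
F = np.fft.fft(np.eye(Nfft))[:, :L]              # first L DFT columns, so H = F @ h
p = np.exp(-np.arange(L) / 3.0); p /= p.sum()    # exponential power delay profile (assumed)
R = F @ np.diag(p) @ F.conj().T                  # R_HH: channel frequency-domain correlation

sigma2 = 10 ** (-snr_db / 10)                    # noise variance for unit-power pilots
W = R @ np.linalg.inv(R + sigma2 * np.eye(Nfft)) # MMSE weight W = R (R + (sz^2/sx^2) I)^-1

trials, mse_ls, mse_mmse = 200, 0.0, 0.0
for _ in range(trials):
    h = np.sqrt(p / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    H = F @ h                                    # true frequency response
    X = rng.choice([1.0 + 0j, -1.0 + 0j], Nfft)  # BPSK pilots on all subcarriers
    Z = np.sqrt(sigma2 / 2) * (rng.standard_normal(Nfft) + 1j * rng.standard_normal(Nfft))
    H_ls = (H * X + Z) / X                       # LS estimate: Y[k] / X[k]
    H_mmse = W @ H_ls                            # MMSE-filtered estimate
    mse_ls += np.mean(np.abs(H - H_ls) ** 2) / trials
    mse_mmse += np.mean(np.abs(H - H_mmse) ** 2) / trials
```

The LS MSE comes out near sigma2 (the sz^2/sx^2 level derived above), while the MMSE weighting exploits the channel correlation across subcarriers to reduce it, matching the ordering of the MSE columns in Table 3.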

Figure 3: BER analysis of OFDM system
Figure 4: OFDM channel estimation

The simulation results are: signal power = 5.860e-003,
EbN0 = 0 [dB], BER = 157/1152 = 1.363e-001
EbN0 = 5 [dB], BER = 154/3456 = 4.456e-002
EbN0 = 10 [dB], BER = 104/47232 = 2.202e-003
EbN0 = 15 [dB], BER = 27/115200000 = 2.344e-007

Table 3: The estimation of the OFDM channel — simulation results

SNR   MSE of LS (linear)   MSE of LS (spline)   MSE of MMSE   No. of symbol errors
25    5.8523e-003          7.3677e-003          1.4212e-003   84
30    1.8578e-003          2.3317e-003          5.2873e-004   32
35    5.9423e-004          7.3929e-004          1.9278e-004   15

40    1.9446e-004          2.3583e-004          6.5509e-005   11
45    6.7919e-005          7.6674e-005          2.1960e-005   4
50    2.7839e-005          2.6372e-005          7.8805e-006   0

V. CONCLUSION

Hence, each and every block of OFDM was studied, the BER was analysed and plotted under the AWGN channel as well as the Rayleigh fading channel, and the simulation results were compared for different EbN0 [dB] and BER values.

Table 4: Simulation parameters for MIMO-OFDM channel estimation
System: MIMO-OFDM channel estimation
No. of receive antennas: 3
No. of transmit antennas: 2
Channel: Rayleigh fading
Algorithm: MMSE

Figure 5: Channel Estimation of MIMO OFDM System.

The channel estimation of the MIMO OFDM system uses the MMSE algorithm, which is quite complicated; the simulation results show that as the signal-to-noise ratio increases, the error value slightly reduces. The actual estimation, estimation 1 and estimation 2 are plotted, and the difference between estimation 1 and estimation 2 is also plotted. The simulation results show that the channel condition is worst at 20 dB. Hybrid adaptation techniques can then be applied to the channel estimation of MIMO OFDM to improve the spectral efficiency and to reduce the transmission power of the system.
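Though the paper gives no code for it, the SNR-based mode switching of Sections 3.5 and 3.6 can be sketched as follows. The switching thresholds and the noise level here are illustrative assumptions, not values from the paper; real systems derive the thresholds from a target BER.

```python
import numpy as np

def pick_mode(snr_db):
    """Return bits/symbol for a subcarrier given its SNR (assumed thresholds)."""
    if snr_db < 6:
        return 0        # subcarrier too poor: no transmission
    if snr_db < 10:
        return 2        # QPSK
    if snr_db < 16:
        return 4        # 16-QAM
    return 6            # 64-QAM

rng = np.random.default_rng(3)
H = rng.standard_normal(64) + 1j * rng.standard_normal(64)  # per-subcarrier gains
snr_db = 10 * np.log10(np.abs(H) ** 2 / 0.1)                # post-channel SNR (noise power 0.1 assumed)
bits = [pick_mode(s) for s in snr_db]
total_bits = sum(bits)  # throughput of one adaptively modulated OFDM symbol
```

Strong subcarriers carry dense constellations while faded ones carry fewer (or no) bits, which is the spectral-efficiency/BER trade-off the conclusion refers to.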

REFERENCES
[1] C. Poongodi, P. Ramya, A. Shanmugam (2010), "BER Analysis of MIMO OFDM System using M-QAM over Rayleigh Fading Channel", Proceedings of the International Conference on Communication and Computational Intelligence.
[2] Dr. JayaKumari. J (2010), "MIMO OFDM for 4G Wireless Systems", International Journal of Engineering Science and Technology, Vol. 2(7).
[3] Pallavi Bhatnagar, Jaikaran Singh, Mukesh Tiwari (2011), "Performance of MIMO-OFDM System for Rayleigh fading channel", International Journal of Science and Advanced Technology (ISSN 2221-8386), Volume No. 3, May 2011.
[4] Pu Li, Haibin Zhang, Job Oostveen, Erik Fledderus (2010), "MIMO OFDM Performance in Relation to Wideband Channel Properties", 21st IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, The Netherlands.
[5] Jia Tang and Xi Zhang (2010), "Hybrid-Adaptation-Enhanced Dynamic Channel Allocation for OFDM Wireless Data Networks", Networking and Information Systems Laboratory, Department of Electrical Engineering.
[6] Viet-Ha, Xianbin Wang, Md. Jahidur Rahman, and Jay Nadeau (2010), "Channel Prediction Based Adaptive Power Control for Dynamic Wireless Communications", Department of Electrical and Computer Engineering, The University of Western Ontario, London, ON, Canada N6A 5B8.
[7] Ohno and Iwao Sasase (2010), "Adaptive Transmission Power Control for MIMO Diversity Employing Polarization Diversity in OFDM Radio Access", Department of Information and Computer Science, Keio University, Japan.
[8] Andrea Goldsmith, "Wireless Communications".
[9] Theodore S. Rappaport, "Wireless Communications: Principles and Practice", Second Edition.
[10] http://www.cambridge.org/9780521837163

Figure 6: BER for Adaptive modulation in MIMO OFDM


VLSI Based Implementation of a Digital Oscilloscope
Ms. Dhanshri Damodar Patil, Department of Electronics, Amravati University, SSGM College of Engineering, Shegaon 444203, District: Buldhana, State: Maharashtra, Country: India. dhanshri.patil21@gmail.com, Mobile No. 9096584765

Prof. V. M. Umale, Department of Electronics, Amravati University, SSGM College of Engineering, Shegaon 444203, District: Buldhana, State: Maharashtra, Country: India. vmumale@rediffmail.com

Abstract— In today's fast-paced world, engineers need the best tools available to solve their measurement challenges quickly and accurately. There are many types of oscilloscope available in the market; the main types are the analog oscilloscope, the digital oscilloscope and the PC-based oscilloscope, of which the digital oscilloscope is the most widely used nowadays due to its accuracy, portability, high speed, high resolution, data storage capability, etc. Here we provide an alternative solution, which is basically a digital oscilloscope with almost all the control options that any standard digital oscilloscope has. It has three large blocks. The first is the ADC part, which has the analog-to-digital IC controlled by the STK500 AVR kit. The second is the oscilloscope control part, which is implemented on the FPGA Spartan 3 kit; it has the entire storage element, the user input control, and the driver part for the IC and VGA. The third large block is the display device, a CRT monitor; alongside this, the LEDs and the seven-segment display on the FPGA are also used to display information. So this is a cheap alternative to expensive oscilloscopes: using a VGA display and a simple mouse interface, a user can use this scope to look at and measure signals up to about 80 MHz if an extra high-frequency clock is provided to the design.
Keywords— ADC (analog-to-digital IC), FPGA Spartan 3 kit, VGA display.

I. INTRODUCTION
This digital oscilloscope provides a cheap alternative to expensive oscilloscopes; using a VGA display and a simple mouse interface, a user can use this scope to look at and measure signals up to about 80 MHz. This kind of scope would be ideal for hobbyists and students looking to learn and debug circuits. Development is based on the Spartan III Starter Kit from Xilinx. The ADC is currently controlled by an MCU (another starter kit: the STK500 from Atmel) but will soon be controlled by the FPGA (to achieve faster speeds). In the future, schematics and PCB layout binaries will be available.

1) Features:
• Timescale selection
• Selectable trigger (rising, falling / normal, single, auto)
• Horizontal and vertical position offsets
• Grid display on/off/outline
• Semi-standard oscilloscope look and feel
• VGA display; drives a standard computer monitor
• PS/2 mouse user interface
• 9-bit input data width
• Developed specifically for the Spartan III development kit from Xilinx

Fig 1: Diagram of digital oscilloscope

2) Logical diagram:

Fig 2: Block diagram of digital oscilloscope


Fig 4: Block diagram of ADC data buffer part

Character Display Driver: The signal is displayed on the VGA, but some information about the signal and the working mode is given on the seven-segment display; the character display part is used for this purpose.

3) Functional Description: The digital oscilloscope processes the digital data and shows it on the VGA display. It thus has three large blocks:
1. ADC converter
2. Display device
3. Controller part (FPGA part)

Mouse Driver: User-defined values are needed to control the oscilloscope, such as the time scale, vertical offset and triggering style; thus an input device is needed to give this input. In this project, this block uses a mouse in PS/2 format to take user input.
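The 3-byte movement packet of the standard PS/2 mouse protocol that this driver block has to parse can be decoded as follows. This is a Python behavioral sketch using the standard PS/2 bit layout, not code or register names taken from the paper.

```python
def decode_ps2_packet(b0, b1, b2):
    """Decode a standard 3-byte PS/2 mouse packet into buttons and deltas.

    b0: status byte (bit0 = left, bit1 = right, bit4/bit5 = X/Y sign bits)
    b1, b2: X and Y movement bytes (9-bit two's complement with the sign bits)
    """
    left = bool(b0 & 0x01)
    right = bool(b0 & 0x02)
    dx = b1 - 256 if (b0 & 0x10) else b1   # apply X sign bit
    dy = b2 - 256 if (b0 & 0x20) else b2   # apply Y sign bit
    return left, right, dx, dy

# Example: status 0x09 (left pressed, bit 3 always set), moved +5 in X, +2 in Y
state = decode_ps2_packet(0x09, 5, 2)
```

In the FPGA design this logic would sit behind the PS/2 serial receiver; the sketch only shows the packet-to-cursor-movement mapping.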

ADC Converter: This block has an analog-to-digital IC to convert the input analog signal into its equivalent digital signal. The ADC is currently controlled by an MCU (another starter kit: the STK500 from Atmel) but will soon be controlled by the FPGA (to achieve faster speeds). The digital data from the ADC then goes to the FPGA Spartan 3 kit.

Fig 5: Block diagram of mouse driver part

VGA Driver: The output is shown on the VGA display, so this block needs a VGA driver to control the CRT monitor and show the waveform on it. The block has to generate the vertical and horizontal synchronizing signals for scanning and the RGB signals for colour; in addition, there are many character and user lines which divide the screen for measuring purposes, so this block also has to generate these fixed lines.
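A behavioral sketch of the synchronizing-signal generation is given below (a Python model, assuming the standard VESA 640x480 @ 60 Hz timing; the paper does not state its exact porch values, so these numbers are assumptions).

```python
# Standard 640x480@60 Hz timing (VESA values, assumed): visible, front porch,
# sync pulse, back porch, in pixels (horizontal) and lines (vertical).
H_VISIBLE, H_FRONT, H_SYNC, H_BACK = 640, 16, 96, 48
V_VISIBLE, V_FRONT, V_SYNC, V_BACK = 480, 10, 2, 33
H_TOTAL = H_VISIBLE + H_FRONT + H_SYNC + H_BACK   # 800 pixel clocks per line
V_TOTAL = V_VISIBLE + V_FRONT + V_SYNC + V_BACK   # 525 lines per frame

def sync_signals(hcount, vcount):
    """Active-low hsync/vsync and the visible flag for one counter position."""
    hsync = not (H_VISIBLE + H_FRONT <= hcount < H_VISIBLE + H_FRONT + H_SYNC)
    vsync = not (V_VISIBLE + V_FRONT <= vcount < V_VISIBLE + V_FRONT + V_SYNC)
    visible = hcount < H_VISIBLE and vcount < V_VISIBLE
    return hsync, vsync, visible
```

In the FPGA, the same comparisons are implemented with two free-running counters clocked at the pixel rate; RGB outputs are driven only while `visible` is asserted.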

Fig 3: Block diagram of ADC driver part

Display (CRT) Monitor: This is the display device on which the signal waveform is displayed; in this project a CRT monitor is used as the display device.

FPGA Spartan Kit Block: In this block the digital data is processed and synchronized with the ADC IC. This block is very important and contains several other processing blocks, which are:

ADC Data Buffer: This block buffers the data so that a continuous waveform can be seen on the VGA display. It also implements the time-scale option, so that the data can be read from the RAM at different frequencies according to the user or the signal.
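The ADC data buffer can be modeled in software as a fixed-depth circular buffer with a decimating read-out for the time-scale option. This Python sketch is illustrative; the depth and decimation factor are assumptions, not values from the paper.

```python
from collections import deque

class AdcBuffer:
    """Software model of the ADC data buffer block."""
    def __init__(self, depth=1024):
        self.ram = deque(maxlen=depth)       # oldest samples are overwritten, as in RAM

    def write(self, sample):
        self.ram.append(sample)              # continuous capture from the ADC

    def read(self, timescale=1):
        """Coarser timescale -> read every Nth sample, as for a slower sweep."""
        return list(self.ram)[::timescale]

buf = AdcBuffer(depth=8)                     # tiny depth just for the demo
for s in range(12):
    buf.write(s)
trace = buf.read(timescale=2)                # only samples 4..11 remain; every 2nd is read
```

In the FPGA the same behavior comes from a block RAM with wrapping write addresses and a read-address increment set by the selected timescale.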


III. FUTURE SCOPE
The proposed design has great future possibilities. The following are some features that can be added to this design:
• FFT display
• Measurement display (amplitude, frequency)
• Cursors
• Vectors
• Multi-channel display (up to 8)
• Channel math
• UART or USB computer communication (data export)

Fig 6: Block diagram of VGA driver part

IV. CONCLUSION

The usefulness of an oscilloscope is not limited to the world of electronics. With the proper transducer, an oscilloscope can measure all kinds of phenomena. Oscilloscopes are used by everyone from physicists to television repair technicians. An automotive engineer uses an oscilloscope to measure engine vibrations; a medical researcher uses an oscilloscope to measure brain waves. Digital oscilloscopes generally have very high costs. The proposed design provides a cheap alternative to expensive oscilloscopes; using a VGA display and a simple mouse interface, a user can use this scope to look at and measure signals up to about 80 MHz. This kind of scope would be ideal for hobbyists and students looking to learn and debug circuits.

Seven Segment Driver: The analog-to-digital converted data can also be seen on the FPGA board via the seven-segment display present on the Spartan 3 kit. The digital data is shown on two seven-segment blocks in hexadecimal format.

VGA Data Buffer: The signal shown on the VGA display is stored in RAM in digital form; for continuous viewing of the waveform it is necessary to store the data and retrieve it from time to time. This is done by this block: the RAM is built in the FPGA and the data is stored in it.

Mouse User Input Driver: In this oscilloscope the input is given by a mouse, so this block handles the user input and gives the signals to display and control the waveform shown on the VGA display.
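The hexadecimal seven-segment encoding used by the driver described above can be sketched as follows; the gfedcba segment patterns are the conventional ones and are assumed, not taken from the paper.

```python
# Conventional gfedcba segment patterns for hex digits 0-F
# (bit 0 = segment a, ..., bit 6 = segment g; 1 = segment lit).
SEGMENTS = [0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07,
            0x7F, 0x6F, 0x77, 0x7C, 0x39, 0x5E, 0x79, 0x71]

def show_byte(value):
    """Return the two segment patterns for one byte shown in hex (high, low nibble)."""
    return SEGMENTS[(value >> 4) & 0xF], SEGMENTS[value & 0xF]
```

In hardware the same table is a small lookup ROM driving the two digits, typically time-multiplexed on the Spartan 3 board's shared segment lines.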

V. REFERENCES
[1] Pereira, J.M.D., "The history and technology of oscilloscopes", IEEE Instrumentation & Measurement Magazine, Volume 9, Issue 6, 2006.
[2] Oscilloscope types: http://www.radioelectronics.com/info/t_and_m/oscilloscope/oscilloscope_types.php
[3] "Hawkins Electrical Guide", Theo. Audel and Co., 2nd ed., 1917, vol. 6, Chapter 63: Wave Form Measurement, pp. 1841-2625.
[4] XYZ of Oscilloscopes Tutorial: http://www.tek.com/Measurement/programs/301913X312631/?lc=EN&PRODUCT=&returnUrl=ct=TI&cs=pri&ci=2280&lc=EN

DCM (Digital Clock Manager): The function of this block is to control the clock and remove the problem of clock skew.

II. SYNTHESIS REPORT
Device utilization summary (selected device: 3s200ft256-4)

[5] Bhunia C., Giri S., Kar S., Haldar S., Purkait P., "A low-cost PC-based virtual oscilloscope", IEEE Transactions on Education, Volume 47, Issue 2, May 2004.
[6] Moschitta A., Stefani F., Petri D., "Measurements of Transient Phenomena With Digital Oscilloscopes", IEEE Transactions on Instrumentation and Measurement, Volume 56, Issue 6, Dec. 2007, Page(s): 2486-2491.
[7] English W.O., "Digital Storage Oscilloscope vs. Digital Multimeter", Industrial and Commercial Power Systems Technical Conference, 2006, IEEE.
[8] Hengkietisak S., Tipyakanont S., Tangsiriworakul C., Manop C., Senavongse W., "Laboratory digital signal analysis with virtual
[9] Rapid Prototyping of Digital Systems, book by James O. Hamblen and Michael D. Furman, page(s) 134-151.


[10] Kuenzi C.D., Ziomek C.D., "Fundamentals of Oscilloscope Measurements in Automated Test Equipment (ATE)", Systems Readiness Technology Conference, IEEE, 18-21 Sept. 2006, Page(s): 244-252.
[11] Lembeye Y., Keradec J.P., Cauffet G., "Improvement in the linearity of fast digital oscilloscopes used in averaging mode", IEEE Transactions on Instrumentation and Measurement, Volume 43, Issue 6, Dec. 1994, Page(s): 922-928.
[12] Moschitta A., Stefani F., Petri D., "Measurements of transient phenomena with digital oscilloscopes", Instrumentation and Measurement Technology Conference, 2003, IMTC '03, Proceedings of the 20th IEEE, Volume 2, 20-22 May 2003, Page(s): 1345-1349, vol. 2.


APPEARANCE BASED AMERICAN SIGN LANGUAGE RECOGNITION USING GESTURE SEGMENTATION AND MODIFIED SIFT ALGORITHM
Author 1: Prof. P. Subba Rao, Professor of ECE, SRKR College of Engineering, Dept. of Electronics and Communications, Bhimavaram, India

Author 2: Mallisetti Ravikumar, M.E. (Communication Systems), SRKR College of Engineering, Dept. of Electronics and Communications, Bhimavaram, India

Abstract— The work presented in this paper develops a system for automatic recognition of static gestures of the alphabet in American Sign Language. In doing so, three feature extraction methods and a neural network are used to recognize signs. The system recognizes images of bare hands, which allows the user to interact with the system in a natural way. An image is processed and converted to a feature vector that is compared with the feature vectors of a training set of signs. Further work investigates the application of the Scale-Invariant Feature Transform (SIFT) to the problem of hand gesture recognition using MATLAB. The algorithm uses a modified SIFT approach to match key-points between the query image and the original database of bare-hand images taken. The extracted features are highly distinctive, as they are shift, scale and rotation invariant; they are also partially invariant to illumination and affine transformations. The system is implemented and tested using data sets with a number of samples of hand images for each sign. Three feature extraction methods are tested and the best one is suggested, with results obtained from the ANN. The system is able to recognize selected ASL signs with an accuracy of 92.33% using edge detection and 98.99% using the SIFT algorithm.

expressions are extremely important in signing (www.nidcd.nih.gov (US government)). ASL also has its own grammar that is different from that of spoken languages such as English and Swedish. ASL consists of approximately 6000 gestures of common words or proper nouns. Finger spelling is used to communicate unclear words or proper nouns; it uses one hand and 26 gestures to communicate the 26 letters of the alphabet.

Index Terms— ASL using MATLAB, Orientation Histogram, SIFT, ASL Recognition, ASL using ANN, ASIFT Algorithm

There are two types of gesture interaction: communicative gestures work as a symbolic language (which is the focus of this project), while manipulative gestures provide multi-dimensional control. Also, gestures can be divided into static gestures (hand postures) and dynamic gestures (Hong et al., 2000). The hand motion conveys as much meaning as the posture does. A static sign is determined by a certain configuration of the hand, while a dynamic gesture is a moving gesture determined by a sequence of hand movements and configurations. Dynamic gestures are sometimes accompanied by body and facial expressions. The aim of sign language alphabet recognition is to provide an easy, efficient and accurate mechanism to transform sign language into text or speech. With the help of computerized digital image processing and a neural network, the system can interpret ASL alphabets.

——————————  ——————————

The 26 alphabets of ASL are shown in Fig.1.

I.INTRODUCTION

A.

American Sign language:

Sign language is the fundamental communication method among people who suffer from hearing defects. In order for an ordinary person to communicate with deaf people, a translator is usually needed to translate sign language into natural language and vice versa (International Journal of Language and Communication Disorders, 2005). Sign language can be considered as a collection of gestures, movements, postures, and facial expressions corresponding to letters and words in natural languages. American Sign Language (ASL) (National Institute on Deafness and Other Communication Disorders, 2005) is a complete language that employs signs made with the hands and other facial expressions and postures of the body. According to research by Ted Camp found on the Web site www.silentworldministries.org, ASL is the fourth most used language in the United States, behind only English, Spanish and Italian (Camp). ASL is a visual language, meaning it is not expressed through sound but rather through combining hand shapes with movement of the hands and arms and with facial expressions. Facial

Figure. 1. The American Sign Language finger spelling alphabet


B. Related Work
Attempts to automatically recognize sign language began to appear in the 90s. Research on hand gestures can be classified into two categories. The first category relies on electromechanical devices that are used to measure the different gesture parameters, such as the hand's position, angle, and the location of the fingertips; systems that use such devices are called glove-based systems. A major problem with such systems is that they force the signer to wear cumbersome and inconvenient devices, so the way the user interacts with the system is complicated and less natural. The second category uses machine vision and image processing techniques to create visual-based hand gesture recognition systems. Visual-based gesture recognition systems are further divided into two categories. The first relies on using specially designed gloves with visual markers, called "visual-based gesture with glove-markers (VBGwGM)", that help in determining hand postures; but using gloves and markers does not provide the naturalness required in human-computer interaction systems, and if colored gloves are used, the processing complexity is increased. The second, an alternative kind of visual-based gesture recognition, can be called "pure visual-based gesture (PVBG)", meaning visual-based gesture recognition without glove-markers. This type tries to achieve the ultimate convenience and naturalness by using images of bare hands to recognize signs.

Feature extraction, statistics, and models:
1. The placement and number of cameras used.
2. The visibility of the object (hand) to the camera, for simpler extraction of hand data/features.
3. The extraction of features from streams of raw image data.
4. The ability of the recognition algorithms to use the extracted features.
5. The efficiency and effectiveness of the selected algorithms in providing maximum accuracy and robustness.

II. SYSTEM DESIGN AND IMPLEMENTATION
The system is designed to visually recognize all static signs of the American Sign Language (ASL), i.e. all signs of the ASL alphabet, using bare hands. The users/signers are not required to wear any gloves or to use any devices to interact with the system. However, since different signers vary in hand shape, body size, operating habits and so on, recognition is made more difficult; this makes clear the necessity of signer-independent sign language recognition to improve the system's robustness and practicability in the future. The system gives a comparison of the three feature extraction methods used for ASL recognition and suggests a method based on the recognition rate. It relies on presenting the gesture as a feature vector that is translation, rotation and scale invariant. The combination of the feature extraction method with excellent image processing and neural network capabilities has led to the successful development of an ASL recognition system using MATLAB. The system has two phases, the feature extraction phase and the classification phase, as shown in Fig. 2. Images were prepared in portable document format (PDF), so the system deals with images that have a uniform background.

Types of algorithms that can be used for image recognition:

1. Learning algorithms:
a. Neural networks (e.g. research work of Banarse, 1993).
b. Hidden Markov Models (e.g. research work of Charniak, 1993).
c. Instance-based learning (research work of Kadous, 1995).
2. Miscellaneous techniques:
a. The linguistic approach (e.g. research work of Hand, Sexton, and Mullan, 1994).
b. Appearance-based motion analysis (e.g. research work of Davis and Shah, 1993).
c. Spatio-temporal vector analysis (e.g. research work of Quek, 1994).
d. Template matching (e.g. research work of Darrell and Pentland, 1993).
e. Feature extraction and analysis (e.g. research work of Rubine, 1991).
f. Active shape models, "smart snakes" (e.g. research work of Heap and Samaria, 1995).
g. Principal component analysis (e.g. research work of Birk, Moeslund and Madsen, 1997).
h. Linear fingertip models (research work of Davis and Shah, 1993).
i. Causal analysis (e.g. research work of Brand and Irfan, 1995).
Among many factors, five important factors must be considered for the successful development of a vision-based solution to collecting data for hand posture and gesture recognition.

Fig 2. Designed System block diagram

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

38

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

Figure 4: System overview.

A. Feature Extraction Phase

Images of signs were resized to 80 by 64 pixels. By default, "imresize" uses nearest-neighbor interpolation to determine the values of pixels in the output image, but other interpolation methods can be specified. Here the 'bicubic' method is used because, if the specified output size is smaller than the size of the input image, "imresize" applies a low-pass filter before interpolation to reduce aliasing; the default filter size is 11-by-11. To alleviate the problem of the different lighting conditions of the captured signs and the non-linearity of the HSV (Hue, Saturation, Value) color space, the hue and saturation information is eliminated while the luminance is retained: the RGB color space (red, green and blue, the primary colors of the visible light spectrum) is converted through a grayscale image to a binary image. Binary images are images whose pixels have only two possible intensity values; they are normally displayed as black and white, with the two values often being 0 for black and either 1 or 255 for white. Binary images are often produced by thresholding a grayscale or color image to separate the object from the background. This conversion resulted in sharp and clear details for the image. It was found that converting the RGB color space to HSV and then to a binary image produced images that lack many features of the sign, so edge detection is used to identify the parameters of a curve that best fit a set of given edge points. Edges are significant local changes of intensity in an image, typically occurring on the boundary between two different regions; various physical events cause such intensity changes. The goal of edge detection is to produce a line drawing of a scene from an image of that scene; important features can then be extracted from the edges and used for recognition. Here the Canny edge detection technique is used because it provides an optimal edge detection solution: the Canny edge detector results in better edge detection compared to the Sobel edge detector.
The output of the edge detector defines where features are in the image. The Canny method is better, but in some cases it provides more detail than needed; to solve this problem, a threshold of 0.25 was chosen after testing different threshold values and observing the results on the overall recognition system.
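The preprocessing chain described above (resize, grayscale conversion, edge thresholding) can be sketched outside MATLAB as well. The following Python/NumPy sketch is illustrative only: it uses nearest-neighbour resizing in place of imresize's bicubic mode, and a thresholded Sobel gradient magnitude as a simplified stand-in for the full Canny detector. Only the 80-by-64 output size and the 0.25 threshold are taken from the text; all function names are hypothetical.

```python
import numpy as np

def rgb_to_gray(img):
    # Luminance weighting, as in MATLAB's rgb2gray.
    return img @ np.array([0.2989, 0.5870, 0.1140])

def resize_nearest(img, out_h, out_w):
    # Nearest-neighbour resize (the paper uses imresize's bicubic mode).
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def sobel_edges(gray, threshold=0.25):
    # Thresholded Sobel gradient magnitude: a simplified stand-in
    # for the Canny detector used in the paper.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12          # normalize to [0, 1]
    return (mag > threshold).astype(np.uint8)

def preprocess(rgb_image):
    # resize -> grayscale -> binary edge map, as in the text.
    gray = rgb_to_gray(rgb_image.astype(float) / 255.0)
    gray = resize_nearest(gray, 80, 64)
    return sobel_edges(gray, threshold=0.25)
```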

Fig. 2 pipeline: Prepared image -> Image resizing -> RGB-to-gray conversion -> Edge detection -> Feature extraction -> Feature vector -> Classification (neural network) -> Classified sign

1. Feature extraction methods used:
   a. Histogram technique
   b. Hough transform
   c. Otsu's segmentation algorithm
   d. Segmentation and extraction with edge detection

III. MODIFIED SIFT ALGORITHM

B. Classification Phase

A complete description of SIFT can be found in [1]; an overview of the algorithm is presented here. The algorithm has the following major stages: • Scale-space extrema detection: the first stage searches over scale space using a Difference of Gaussian (DoG) function to identify potential interest points. • Key point localization: the location and scale of each candidate point is determined, and key points are selected based on measures of stability. • Orientation assignment: one or more orientations are assigned to each key point based on local image gradients. • Key point descriptor: a descriptor is generated for each key point from local image gradient information at the scale found in the second stage. Each of the above stages is elaborated further in the following sections.

The next important step is the application of a proper feature extraction method; after it comes the classification stage, in which a 3-layer feedforward backpropagation neural network is constructed. The classification network is shown in Fig. 3; it has 256 inputs. The classification phase includes defining the network architecture, creating the network, and training the network. A feedforward backpropagation network with supervised learning is used.
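The classification stage described above can be sketched as a small feedforward backpropagation network. The paper builds its network with MATLAB's neural network toolbox; the NumPy sketch below is a stand-in in which only the 256 inputs and the 3-layer feedforward backpropagation structure come from the text, while the hidden size, learning rate, and squared-error loss are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class SignClassifier:
    """3-layer feedforward network trained with backpropagation.

    256 inputs (the feature vector) follow the paper's setup; the
    hidden size and output count are illustrative choices here.
    """

    def __init__(self, n_in=256, n_hidden=64, n_classes=26, lr=1.0):
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, X):
        self.h = self._sigmoid(X @ self.W1 + self.b1)
        self.out = self._sigmoid(self.h @ self.W2 + self.b2)
        return self.out

    def train_step(self, X, Y):
        # One epoch of batch gradient descent on squared error.
        out = self.forward(X)
        d_out = (out - Y) * out * (1 - out)
        d_h = (d_out @ self.W2.T) * self.h * (1 - self.h)
        self.W2 -= self.lr * self.h.T @ d_out / len(X)
        self.b2 -= self.lr * d_out.mean(axis=0)
        self.W1 -= self.lr * X.T @ d_h / len(X)
        self.b1 -= self.lr * d_h.mean(axis=0)
        return float(((out - Y) ** 2).mean())

    def predict(self, X):
        # Winning output unit = recognized sign class.
        return self.forward(X).argmax(axis=1)
```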

Fig. 3: Classification network.




The goal is to design a highly distinctive descriptor for each interest point to facilitate meaningful matches, while simultaneously ensuring that a given interest point will have the same descriptor regardless of the hand position, the lighting in the environment, etc. Thus both the detection and description steps rely on the invariance of various properties for effective image matching. The system processes static images of the subject and matches them against a statistical database of preprocessed images to ultimately recognize the specific signed letter.


that this step cannot be eliminated. In this algorithm, the orientation is in the range [-π, π] radians.

D. KEYPOINT DESCRIPTORS
First, the image gradient magnitudes and orientations are calculated around the key point, using the scale of the key point to select the level of Gaussian blur for the image. The coordinates of the descriptor and the gradient orientations are rotated relative to the key point orientation. Note that after the grid around the key point is rotated, we need to interpolate the Gaussian-blurred image around the key point at non-integer pixel values. We found that 2D interpolation in MATLAB takes much time, so, for simplicity, we always round the rotated grid around the key point to the next integer value. By experiment, we realized that this operation increased the speed considerably while having only a minor effect on the accuracy of the whole algorithm. The gradient magnitude is weighted by a Gaussian weighting function with σ equal to one half of the descriptor window width, to give less credit to gradients far from the center of the descriptor. These magnitude samples are then accumulated into an orientation histogram summarizing the content of each 4x4 subregion; Fig. 4 describes the whole operation. Trilinear interpolation is used to distribute the value of each gradient sample into adjacent bins. The descriptor is formed from a vector containing the values of all the orientation histogram entries. The algorithm uses a 4x4 array of histograms with 8 orientation bins each, resulting in a feature vector of 128 elements. The feature vector is then normalized to unit length to reduce the effect of illumination change; the values in the unit-length vector are thresholded at 0.2 and the vector renormalized to unit length. This takes care of the effect of non-linear illumination changes.
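The final normalization steps of the descriptor (unit length, clip at 0.2, renormalize) can be written down directly. This small sketch assumes the 128-element histogram vector has already been computed; the function name is illustrative.

```python
import numpy as np

def normalize_descriptor(vec, clip=0.2):
    """SIFT descriptor illumination normalization, as described above.

    Normalize the histogram vector to unit length, clip entries at
    0.2 to damp non-linear illumination effects, then renormalize.
    """
    v = np.asarray(vec, dtype=float)
    v = v / (np.linalg.norm(v) + 1e-12)   # unit length
    v = np.minimum(v, clip)               # threshold at 0.2
    return v / (np.linalg.norm(v) + 1e-12)  # renormalize
```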

A. FINDING KEYPOINTS
The SIFT feature algorithm is based upon finding locations (called key points) within the scale space of an image which can be reliably extracted. The first stage of computation searches over all scales and image locations; it is implemented efficiently by using a difference-of-Gaussian (DoG) function to identify potential interest points that are invariant to scale and orientation. Key points are identified as local maxima or minima of the DoG images across scales: each pixel in a DoG image is compared to its 8 neighbours at the same scale and the 9 corresponding neighbours in each of the two neighbouring scales. If the pixel is a local maximum or minimum, it is selected as a candidate key point. We have a small image database, so we do not need a large number of key points for each image; also, the difference in scale between large and small bare hands is not very big.
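The 26-neighbour extremum test described above can be sketched as follows, assuming the DoG images have already been computed and stacked into a 3D array. This is a deliberately naive loop for exposition, not an optimized implementation.

```python
import numpy as np

def dog_extrema(dog):
    """Find candidate SIFT key points in a stack of DoG images.

    `dog` is a (scales, H, W) array; a pixel is a candidate if it is
    strictly larger (or smaller) than all 26 neighbours: 8 at its
    own scale plus 9 in each of the two adjacent scales.
    """
    s, h, w = dog.shape
    keypoints = []
    for k in range(1, s - 1):
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                cube = dog[k - 1:k + 2, i - 1:i + 2, j - 1:j + 2]
                v = dog[k, i, j]
                others = np.delete(cube.ravel(), 13)  # drop the centre
                if v > others.max() or v < others.min():
                    keypoints.append((k, i, j))
    return keypoints
```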

Figure 5: Detected key points for an image representing the "Y" character

B. KEYPOINT LOCALIZATION

In this step the key points are filtered so that only stable and well-localized key points are retained. First, a 3D quadratic function is fitted to the local sample points to determine the location of the maximum; if the extremum is found to lie closer to a different sample point, the sample point is changed and the interpolation performed about that point instead. The function value at the extremum is used for rejecting unstable extrema with low contrast. The DoG operator has a strong response along edges present in an image, which gives rise to unstable key points: a poorly defined peak in the DoG function will have a large principal curvature across the edge but a small principal curvature in the perpendicular direction.

Figure 6: Gaussian and DoG pyramids (Source: Reference 1)

C. ORIENTATION ASSIGNMENT

In order for the feature descriptors to be rotation invariant, an orientation is assigned to each key point and all subsequent operations are done relative to the orientation of the key point. This allows for matching even if the query image is rotated by any angle. In order to simplify the algorithm, we tried to skip this part and assume no orientation for all key points. When tested, it gave wrong results with nearly all the images where the bare hand image is rotated with an angle of 15º to 20º or more. We realized

Figure 7: 2x2 descriptor array computed from 8x8 samples (Source: Reference 1)




Locations are returned as a P-by-4 matrix, in which each row holds the four values for a key-point location (row, column, scale, orientation); the orientation is in the range [-π, π] radians.

E. SIMPLIFICATIONS TO SIFT ALGORITHM

The distance between one feature point in the first image and all feature points in the second image must be calculated when the SIFT algorithm is used to match images; since every feature point is 128-dimensional, the complexity of the calculation can well be imagined. A modified similarity measurement method is introduced to improve the SIFT algorithm's efficiency. First, Euclidean distance is replaced by the dot product of unit vectors, which is less computationally expensive; then, parts of the 128-dimensional feature point take part in the calculation gradually, reducing the SIFT matching time. The Euclidean distance is the distance between the end points of two vectors. It is a poor measure here because it is large for vectors of different lengths: two images with very similar content can have a significant vector difference simply because one is much longer than the other. Thus the relative distributions may be identical in the two images, but the absolute frequencies of one may be far larger. The key idea is therefore to rank images according to their angle with the query image. To compensate for the effect of length, the standard way of quantifying the similarity between two images d1 and d2 is to compute the cosine similarity of their vector representations V(d1) and V(d2).

Figure 8: SIFT key-point extraction; image showing matched key points between the input image and a database image.

Algorithm block diagram

sim(d1, d2) = V(d1) · V(d2) / (|V(d1)| |V(d2)|)

where the numerator represents the dot product (also known as the inner product) of the vectors V(d1) and V(d2), while the denominator is the product of their Euclidean lengths.

F. KEYPOINT MATCHING USING UNIT VECTORS

1. Match (image1, image2): this function reads two images and finds their SIFT [1][6] features. A match is accepted only if its distance is less than distRatio times the distance to the second-closest match. It returns the number of matches displayed.
2. Find the SIFT (Scale Invariant Feature Transform) key points for each image, specifying their locations and descriptors.
3. It is easier to compute dot products between unit vectors than Euclidean distances. Note that the ratio of angles (acos of the dot products of unit vectors) is a close approximation to the ratio of Euclidean distances for small angles.
4. Assume some distance ratio, for example distRatio = 0.5: only matches in which the ratio of vector angles from the nearest to the second-nearest neighbour is less than distRatio are kept.
5. For each descriptor in the first image, select its match in the second image.
6. Compute the matrix transpose and the vector of dot products, take the inverse cosine, and sort the results; check whether the nearest neighbour has an angle less than distRatio times that of the second.
7. Create a new image showing the two images side by side.

Using this algorithm we read an image and calculate its key points, descriptors and locations by applying a threshold. Descriptors are given as a P-by-128 matrix, where P is the number of key points and each row gives an invariant descriptor for one of the P key points; each descriptor is a vector of 128 values normalized to unit length.
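Steps 3-6 of the matching procedure can be sketched as follows. Only the unit-vector dot product, the inverse cosine, and the distance-ratio test come from the text; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def match_descriptors(desc1, desc2, dist_ratio=0.5):
    """Key-point matching with unit vectors, as in steps 3-6 above.

    desc1, desc2 are P-by-128 arrays of unit-length descriptors.
    For each descriptor in the first image, angles to all descriptors
    in the second image are computed as acos of dot products; a match
    is kept only if the nearest angle is less than dist_ratio times
    the second-nearest angle.
    """
    matches = []
    # One matrix multiply gives all dot products; clip for acos safety.
    angles = np.arccos(np.clip(desc1 @ desc2.T, -1.0, 1.0))
    for i, row in enumerate(angles):
        order = np.argsort(row)               # sort candidate matches
        nearest, second = row[order[0]], row[order[1]]
        if nearest < dist_ratio * second:     # distance-ratio test
            matches.append((i, int(order[0])))
    return matches
```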

Now apply these steps in our previous image from which SIFT features are extracted.




Sign  | Recognized samples | Misclassified samples | Recognition rate (%)
G     | 7                  | 1                     | 66.66
H     | 7                  | 1                     | 66.66
I     | 8                  | 0                     | 100
J     | 8                  | 0                     | 100
K     | 7                  | 1                     | 66.66
L     | 7                  | 1                     | 66.66
M     | 8                  | 0                     | 100
N     | 7                  | 1                     | 66.66
O     | 7                  | 1                     | 66.66
P     | 8                  | 0                     | 100
Q     | 8                  | 0                     | 100
R     | 7                  | 1                     | 66.66
S     | 7                  | 1                     | 66.66
T     | 8                  | 0                     | 100
U     | 8                  | 0                     | 100
V     | 8                  | 0                     | 100
W     | 8                  | 0                     | 100
X     | 6                  | 2                     | 33.33
Y     | 8                  | 0                     | 100
Z     | 6                  | 2                     | 33.33
TOTAL | 193                | 15                    | 92.78

Table 1: Results of training 8 samples for each sign with (0.25) Canny threshold

IV. EXPERIMENTAL RESULTS AND ANALYSIS

The network is trained on 8 samples of each sign. Samples of the same size and with similar features such as distance, rotation, lighting, and a uniform background are taken into consideration, while others are discarded.

The performance of the recognition system is evaluated by testing its ability to classify signs for both the training and the testing set of data. The effect of the number of inputs to the neural network is also considered.

A. Data Set

The data set used for training and testing the recognition system consists of grayscale images for all the ASL signs used in the experiments, shown in Fig. 4. Eight samples of each sign were taken from 8 different volunteers; for each sign, 5 of the 8 samples were used for training while the remaining samples were used for testing. The samples were taken from different distances with a web camera and with different orientations. In this way a data set was obtained with cases that have different sizes and orientations, which can examine the capabilities of the feature extraction scheme.

B. Recognition Rate

The system performance can be evaluated based on its ability to correctly classify samples into their corresponding classes. The recognition rate is defined as the ratio of the number of correctly classified samples to the total number of samples:

Recognition rate = no. of correctly classified signs / total no. of signs

Figure 9: Training chart for a network trained on 8 samples for each sign, (0.25) Canny threshold
Figure 10: Percentage error recognition chart of the neural network
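Plugging the totals of Table 1 into this definition (193 correctly classified samples out of 26 x 8 = 208) reproduces the overall rate reported there; the table prints 92.78, i.e. the same value truncated rather than rounded. A one-line Python check (the paper itself works in MATLAB):

```python
def recognition_rate(correct: int, total: int) -> float:
    # Recognition rate as defined in the text, as a percentage.
    return 100.0 * correct / total

# Totals from Table 1: 193 of 26 * 8 = 208 training samples correct.
print(round(recognition_rate(193, 26 * 8), 2))  # 92.79
```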

C. Experimental Results

Sign | Recognized samples | Misclassified samples | Recognition rate (%)
A    | 7                  | 1                     | 66.66
B    | 7                  | 1                     | 66.66
C    | 7                  | 1                     | 66.66
D    | 8                  | 0                     | 100
E    | 8                  | 0                     | 100
F    | 8                  | 0                     | 100
E. GUI Simulating Results (Sign to Text)




Figure 11: Comparison of key points on a given input with the database, for a single input image (first cycle).

For testing unknown signs we have created a GUI, as shown in Fig. 5, which provides the user an easy way to select any sign he/she wants to test; after clicking the Apply pushbutton, it displays the meaning of the selected sign.

In Figure 8, we compare database images 1, 3 and 7 with the input image key points; database image 3 is the closest match to the input image.

F. GUI SIMULATING RESULTS (TEXT TO SIGN)

A text-to-sign interpreter means that if the user types any word or sentence, the corresponding signs are shown, so that communication from a hearing person to deaf people is possible. Examples of the text-to-sign converter are shown in Fig. 6: when the user types the name 'BOB' in the text box, the corresponding signs appear on the screen one by one above the text or spelling.

Figure 12: Comparison of key points on a given input with the database, for a single input image after resetting the threshold and distance-ratio values.

The problem now is how to identify a 'No Match'. We saw that 'No Match' query images are in many cases confused with the database images that have a large number of feature vectors in the feature-vector database. We therefore compare the highest vote (corresponding to the best image) with the second-highest vote (corresponding to the most conflicting image). If the difference between them is larger than a threshold, there is a match, and it corresponds to the highest vote; if the difference is smaller than the threshold, we declare a 'No Match'. The threshold value was chosen by experiment on training-set images, both with and without matches.
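The vote-comparison rule for declaring a 'No Match' can be sketched as follows. The concrete threshold value and the data layout are illustrative assumptions, since the paper chose its threshold experimentally.

```python
def best_match(votes, threshold=5):
    """Decide between a match and 'No Match' from per-image votes.

    `votes` maps database image ids to the number of matched key
    points. If the highest vote beats the second-highest by more
    than `threshold`, the highest-voted image is returned; otherwise
    None ('No Match') is declared. The threshold value here is
    illustrative; the paper chose it by experiment on training images.
    """
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    (best_id, best), (_, second) = ranked[0], ranked[1]
    return best_id if best - second > threshold else None

print(best_match({"img1": 4, "img3": 21, "img7": 6}))   # img3
print(best_match({"img1": 10, "img3": 12, "img7": 6}))  # None
```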

The approach described above has been implemented using MATLAB. The implementation has two aspects: training and inference. During the training phase, locally invariant features (key points, orientations, scales and descriptors) are retrieved from all training images using the SIFT algorithm. During inference, the objective is to recognize a test image: a set of local invariant features is retrieved for the test image and compared against the training feature set using the metric explained above, and the title of the closest match is returned as the final output. To evaluate the performance of the proposed system, we predefined a set of gestures (B, C, H, I, L, O, Y) and created a hand gesture database. Matching between images is performed with unit vectors; the results show that the proposed method produces 98% accuracy. In Figure 7 we can easily see that database images 1, 3 and 7 have more key points matched with the input image, so the distance-ratio parameter and the threshold are adjusted.

Figure 13: Comparison of key points on a given input with the database, for a single input image (no-match case).




(1.5 GHz AMD processor, 128 MB of RAM, running under Windows 2008). A WEB-CAM-1.3 web camera is used for image capturing. The system proved robust against changes in gesture. Using the histogram technique we obtained misclassified results; the histogram technique is therefore applicable only to a small set of ASL alphabets or gestures which are completely different from each other, and it does not work well for a large set, or all 26, of the ASL signs. For a larger set of sign gestures, a segmentation method is suggested. The main problem with this technique is how good a differentiation one can achieve; this depends mainly on the images, but it comes down to the algorithm as well. It may be enhanced using other image processing techniques such as edge detection, as done in the present paper. We used well-known edge detectors (the Canny, Sobel and Prewitt operators) to detect the edges with different thresholds, and obtained good results with Canny at a threshold value of 0.25. Using edge detection along with the segmentation method, a recognition rate of 92.33% is achieved. The system is also made background independent. Along with the sign-to-text interpreter, the reverse, a text-to-sign interpreter, has also been implemented.

Figure 14: Example of a "no match" image, not in the training set (cf. Fig. 3).

Gesture name | Testing number | Success number | Correct rate (%)
B            | 150            | 149            | 99.3
C            | 150            | 148            | 98.7
H            | 150            | 148            | 98.7
I            | 150            | 149            | 99.3
L            | 150            | 148            | 98.7
O            | 150            | 148            | 98.7
Y            | 150            | 149            | 99.3

Table 2: The results of the classifier for the training set and testing set

CONCLUSIONS

The algorithm is based mainly on using SIFT features to match the image to the respective hand-gesture sign. Some modifications were made to increase the simplicity of the SIFT algorithm. Applying the algorithm on the training set, we found that it was always able to identify the right sign from the hand gesture, or to declare 'No Match' when there was no match. The algorithm was highly robust to scale difference, rotation by any angle, and reflection in the test image. SIFT is a state-of-the-art algorithm for extracting locally invariant features, and this work gave us an opportunity to understand several aspects of its application in image recognition. We believe this effort resulted in a robust image recognition implementation, which should perform well with the final test images. In future work we would like to improve the performance of SIFT with global features: the local invariant features of SIFT can be augmented by computing global features of an image.

FUTURE SCOPE / CHALLENGES

The work presented in this project recognizes ASL static signs only; it can be extended to recognize dynamic signs of ASL. The system was designed for images with a uniform background, but this limitation has been overcome and the system has been made background independent. The network can also be trained on other types of images. It is important to consider increasing the data size, so that the system can be more accurate and perform better.

Figure 15: Recognized static sign using PCA algorithm

ACKNOWLEDGMENT

The authors wish to thank their guide, Prof. P. Subba Rao, for his valuable guidance on this work, and the volunteers who provided gesture images in the required format as many times as required to obtain the results.

Figure 16: Recognized dynamic sign using PCA algorithm

V. HARDWARE AND SOFTWARE

The system is implemented in MATLAB version R13.5. The recognition training and tests were run on a modern standard PC.

REFERENCES

a. J.S. Bridle, "Probabilistic Interpretation of Feedforward Classification Network Outputs, with Relationships to Statistical Pattern Recognition," Neurocomputing: Algorithms, Architectures and Applications, F. Fogelman-Soulie and J. Herault, eds., NATO ASI Series F68, Berlin: Springer-Verlag, pp. 227-236, 1989.




b. W.-K. Chen, Linear Networks and Systems. Belmont, Calif.: Wadsworth, pp. 123-135, 1993.
c. Poor, "A Hypertext History of Multiuser Dimensions," MUD History, http://www.ccs.neu.edu/home/pb/mudhistory.html, 1986.
d. K. Elissa, "An Overview of Decision Theory," unpublished.
e. R. Nicole, "The Last Word on Decision Theory," J. Computer Vision, submitted for publication.
f. J. Kaufman, Rocky Mountain Research Laboratories, Boulder, Colo., personal communication, 1992.
g. D.S. Coming and O.G. Staadt, "Velocity-Aligned Discrete Oriented Polytopes for Dynamic Collision Detection," IEEE Trans. Visualization and Computer Graphics, vol. 14, no. 1, pp. 1-12, Jan/Feb 2008, doi:10.1109/TVCG.2007.70405.
h. S.P. Bingulac, "On the Compatibility of Adaptive Controllers," Proc. Fourth Ann. Allerton Conf. Circuits and Systems Theory, pp. 8-16, 1994.
i. D.G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
j. S. Siggelkow, "Feature Histograms for Content-Based Image Retrieval," PhD thesis, Albert-Ludwigs-University Freiburg, December 2002.
k. K. Mikolajczyk and C. Schmid, "An Affine Invariant Interest Point Detector," in Proc. ECCV, pp. 128-142, 2002.
l. F. Schaffalitzky and A. Zisserman, "Multi-view Matching for Unordered Image Sets, or 'How Do I Organize My Holiday Snaps?'," in Proc. ECCV, pp. 414-431, 2002.
m. L. Van Gool, T. Moons, and D. Ungureanu, "Affine Photometric Invariants for Planar Intensity Patterns," in Proc. ECCV, pp. 642-651, 1996.
n. D. Lowe, "Object Recognition from Local Scale-Invariant Features," in Proc. Seventh IEEE International Conference on Computer Vision, 1999.




BAACK: Better Adaptive Acknowledgement System for Secure Intrusion Detection in Wireless MANETs

Mr. G. Rajesh, M.Tech

Parvase Syed

Assistant Professor, Department of CSE Audisankara College of Engineering & Technology Gudur, Nellore, Andhra Pradesh, India

PG Student, Computer Science & Engineering Audisankara College of Engineering & Technology Gudur, Nellore, Andhra Pradesh, India

network [1][2]. MANETs form a self-healing, peer-to-peer, self-organizing network, in contrast to a mesh topology, which has a central controller (to optimize, determine, and distribute the routing table). MANETs circa 2000-2015 typically communicate at radio frequencies (30 MHz - 5 GHz).

Abstract: In recent years the use of mobile ad hoc networks (MANETs) has become widespread in various applications, including some mission-critical applications, and as such security has become one of the most important concerns in MANETs. Due to some unique characteristics of MANETs, prevention methods alone are not enough to make them secure; detection should therefore be added as an additional defense before an attacker can breach the system. In general, the intrusion detection techniques for conventional wireless networks are not well suited to MANETs. In this paper we present a novel intrusion detection system named Better Adaptive Acknowledgement (BAACK), specially designed for MANETs. By adopting the MRA scheme, BAACK is capable of detecting malicious nodes in spite of the existence of false misbehavior reports, and it is compared with other popular mechanisms in different scenarios. These scenarios give an outline for enhancing the security level of the IDS architecture in MANETs based on secure attributes and on various algorithms, namely RSA and DSA.

One of the major benefits of wireless networks is that they allow data communication between different parties while maintaining their mobility. However, this communication is limited to the range of the transmitters: if the distance between two nodes is outside this range, they cannot communicate with each other. MANETs answer this problem by allowing intermediate parties to relay data transmissions. This is accomplished by dividing MANETs into two kinds of networks, namely single-hop and multi-hop. In a single-hop network, nodes can communicate directly with other nodes when all nodes are within the same radio range. In a multi-hop network, on the other hand, if the destination node is out of a node's radio range, intermediate nodes are used to relay the transmission so that the nodes can communicate with each other across the network.

Keywords: Digital signature, MANET, DSR, AODV

I. INTRODUCTION

A mobile ad hoc network (MANET) is a continuously self-configuring, infrastructure-less network of mobile devices connected without wires. In a MANET, devices can move independently in any direction, and nodes therefore frequently change their links to other nodes. Each node must forward traffic unrelated to its own use, and therefore acts as a router. The key challenge in constructing a MANET is equipping each node to continuously maintain the information required to route traffic properly. Such networks may be connected to the larger Internet or may operate by themselves. Nodes may contain one or multiple, and different, transceivers. This results in a highly dynamic, autonomous topology. MANETs are a kind of wireless ad hoc network that usually has a routable networking environment on top of a link-layer ad hoc

Fig 1: Architecture of MANET




Given the characteristics described above, the MANET was mainly developed for military purposes: in battle, nodes are spread across the battlefield and there is no particular infrastructure to connect them into a network. In the last few years MANETs have developed rapidly and are used increasingly in many applications, ranging from military to commercial and civilian uses, since MANETs can be set up easily, by their own behavior, without human interaction or any infrastructure. Some examples are emergency services, data collection, and virtual classrooms and conferences, where laptops, PDAs or other mobile devices share their data through a wireless medium and communicate with each other. As MANETs become widely used, the primary concern about them is security. For example, most of the proposed protocols for MANETs assume that every node is cooperative and not malicious [1]; therefore, a single compromised node can cause the failure of the entire network.


techniques, IDSs can also be categorized into three types, as follows [2]. Anomaly detection systems (ADS): the normal behaviors (or normal profiles) of operators are kept in the system; the captured data are compared with these profiles, and any action that differs from the baseline is treated as a possible intrusion by initializing a proper response or informing the system administrators. Misuse detection systems: the system keeps configurations (or signatures) of well-known attacks and compares captured data against all known attacks; activity that matches a stored pattern is considered an intrusion, but new kinds of attacks cannot be identified. Specification-based detection: the correct operation of a protocol or program is described by a set of system-defined constraints, and the execution of the application is monitored according to these constraints.

III. EXISTING SYSTEM

Digital Signature
A digital signature, or digital signature scheme, is a type of asymmetric cryptography. For messages sent through an insecure medium, a good implementation of a digital signature algorithm gives the receiver confidence that the message was sent by the claimed sender and that the message can be trusted.

The Watchdog/Pathrater is a solution to the problem of selfish (or "misbehaving") nodes in a MANET. The system introduces two extensions to the DSR algorithm to mitigate the effects of routing misbehavior: the Watchdog, to detect the misbehaving nodes, and the Pathrater, to respond to the intrusion by isolating the misbehaving nodes.

In many respects digital signatures are equivalent to traditional handwritten signatures; properly implemented, a digital signature is more difficult to forge than the handwritten type. Digital signatures are implemented using cryptography. They can also provide non-repudiation, meaning that the signer cannot successfully claim not to have signed a message while also claiming that their private key remains secret.
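As an illustration of the sign/verify asymmetry discussed here, the following toy textbook-RSA sketch signs a message digest with a private key and verifies it with the public key. Everything in it (the tiny primes, the absence of padding) is a deliberate simplification for exposition; real MANET deployments would use vetted RSA/DSA implementations, not this.

```python
import hashlib

# Toy textbook-RSA signature sketch (illustrative only: tiny key,
# no padding; real systems use vetted schemes such as RSA-PSS or DSA).
p, q = 61, 53                 # toy primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                        # public exponent
d = pow(e, -1, phi)           # private exponent (modular inverse)

def digest(message: bytes) -> int:
    # Hash the message and reduce it into the modulus range.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # The signer exponentiates the digest with the private key.
    return pow(digest(message), d, n)

def verify(message: bytes, signature: int) -> bool:
    # The receiver recovers the digest with the public key and compares.
    return pow(signature, e, n) == digest(message)
```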

The Watchdog fails to identify malicious misbehaviors in the presence of the following: false misbehavior reports, collusion, ambiguous collisions, receiver collisions, limited transmission power, and partial dropping of a node from the network operation.

II. RELATED WORKS

Many historical events have exposed that intrusion prevention techniques alone, such as authentication and encryption, which are generally a first level of defense, are not sufficient. As systems become more complex, there are also more limitations, which may lead to more security issues. Intrusion detection can be used as a second level of defense to protect the network from such issues. If an intrusion is identified, a response can be initiated to stop or reduce the damage to the system.

III. PROPOSED SYSTEM

The secure IDS architecture (BAACK) is introduced to improve the security level of MANETs based on various algorithms and security attributes, namely DSA and RSA. BAACK is designed to deal with three of the six weaknesses of the Watchdog IDS, namely: 1) receiver collision, 2) limited transmission power, and 3) false misbehavior.

Receiver collisions: In the example of receiver collisions shown in Fig. 2, after node X sends Packet 1 to node Y, it tries to overhear whether node Y forwarded this packet to node Z; meanwhile, node F is forwarding Packet 2 to node Z. In such a case, node X overhears that node Y has successfully forwarded Packet 1 to node Z but fails to detect that node Z did not

Intrusion detection is classified, based on audit data, as either network-based or host-based. A network-based IDS captures and analyzes packets from network traffic, while a host-based IDS uses OS and application logs in its analysis. Based on detection

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

47

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

receive this packet due to a collision between Packet 1 and Packet 2 at node Z.

Fig 2: Receiver collisions in MANETs

Limited transmission power: In the example shown in Fig. 3, in order to manage its battery resources, node Y limits its transmission power so that the packet (P1) is strong enough to be overheard by node X but too weak to reach node Z.

ISBN: 378 - 26 - 138420 - 5

Due to the open medium and remote distribution of typical MANETs, attackers can easily capture and compromise one or two nodes to achieve this false misbehavior report attack. As discussed in earlier sections, TWOACK and AACK resolve two of these three problems, namely receiver collision and limited transmission power. However, both of them are vulnerable to the false misbehavior attack. The secure IDS architecture (BAACK) is launched to solve not only limited transmission power and receiver collision but also the false misbehavior problem [1].

Secure IDS description: BAACK consists of three major parts, namely ACK, secure ACK (S-ACK), and misbehavior report authentication (MRA). In order to distinguish the different packet types of the different schemes, a 2-bit packet header is included in BAACK. According to the Internet draft of DSR [7], there are 6 bits reserved in the DSR header. BAACK uses 2 of these 6 bits to flag the different types of packets.
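The 2-bit flagging described above can be sketched as follows; the concrete flag values are an illustrative assumption, since the text only states that 2 of the 6 reserved DSR header bits distinguish the packet types:

```python
# Sketch of flagging BAACK packet types inside the 6 reserved bits of
# the DSR header, using 2 of those bits. The concrete flag values are
# illustrative assumptions; the text only says 2 bits are used.
PACKET_TYPES = {0b00: "ACK", 0b01: "S-ACK", 0b10: "MRA"}

def set_packet_type(reserved_bits: int, type_flag: int) -> int:
    """Write the 2-bit type flag into the low 2 bits of the 6-bit field."""
    assert 0 <= reserved_bits < 64 and type_flag in PACKET_TYPES
    return (reserved_bits & 0b111100) | type_flag

def get_packet_type(reserved_bits: int) -> str:
    return PACKET_TYPES[reserved_bits & 0b11]

header = set_packet_type(0b000000, 0b01)
print(get_packet_type(header))  # S-ACK
```

The remaining 4 reserved bits are left untouched, so the flag can coexist with any other use of the reserved field.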

Fig 5: EAACK protocol in MANETs

In this secure IDS, it is assumed that the link between each node in the network is bidirectional and that, for each communication process, both the source node and the destination node are not malicious. All acknowledgment packets are required to be digitally signed by their sender and verified by their receiver.

Fig 3: Limited Transmission power in MANET

False misbehavior: In the example of false misbehavior in MANETs shown in Fig. 4, even though node Y forwarded Packet 1 to node Z successfully, node X still reports node Y as misbehaving.

Fig 4: False Misbehavior in MANET

ACK (Acknowledgment): ACK is fundamentally an end-to-end acknowledgment scheme. It acts as a part of the hybrid IDS in BAACK, aiming to reduce network overhead when no misbehavior is detected in the network. Consider the scenario in which source node S first sends out an ACK data packet to the destination node D. If all the intermediate nodes along the route between S and D are cooperative and node D successfully receives the packet, node D is required to send back an ACK acknowledgment packet along the same route but in reverse order. If node S receives this acknowledgment within a predefined time span, the transmission from node S to node D is successful. Otherwise, node S switches to S-ACK mode by sending out an S-ACK data packet to identify the malicious nodes in the route.
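The ACK-mode fallback just described can be sketched as follows, with stub callbacks standing in for the real network send and acknowledgment wait (a simplification, not the authors' implementation):

```python
# Sketch (not the authors' implementation) of the ACK-mode logic: the
# source sends a packet and, if no end-to-end acknowledgment arrives
# within a predefined time span, switches to S-ACK mode. The callbacks
# are stubs standing in for the real network.
def ack_mode(send_packet, wait_for_ack, timeout_s: float) -> str:
    """Return the resulting state after one transmission attempt."""
    send_packet()
    if wait_for_ack(timeout_s):   # ACK returned along the reverse route
        return "DONE"             # transmission S -> D succeeded
    return "S-ACK"                # fall back to S-ACK detection mode

print(ack_mode(lambda: None, lambda t: True, 1.0))   # DONE
print(ack_mode(lambda: None, lambda t: False, 1.0))  # S-ACK
```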

S-ACK (Secure Acknowledgment): This is an improved version of the TWOACK IDS [6]. The principle of S-ACK is to let every three successive nodes work in a group to detect misbehaving nodes. For every three consecutive nodes in the route, the third node is required to send an S-ACK acknowledgment packet to the first node. The main intention of introducing S-ACK mode is to detect misbehaving nodes in the presence of receiver collision or limited transmission power.

MRA (Misbehavior Report Authentication): Unlike the TWOACK IDS, where the source node immediately believes the misbehavior report, BAACK requires the source node to switch to MRA mode and verify the misbehavior report. This is a crucial step to detect false misbehavior: the Watchdog fails to detect misbehaving nodes in the presence of false misbehavior reports, and MRA mode is designed to resolve this limitation. Malicious attackers may generate false misbehavior reports to falsely accuse innocent nodes. The core of MRA mode is to verify whether the destination node has received the reported missing packet through a different route. To initiate MRA mode, the source node first searches its local knowledge base for an alternative route to the destination node. If no other route exists, the source node finds another one by using the DSR route request mechanism. Due to the nature of MANETs, it is common to find multiple routes between two nodes. When the destination node receives an MRA packet, it searches its local knowledge base and checks whether the reported packet was already received. If it was, it is safe to conclude that this is a false misbehavior report, and the node that generated the false report is marked as malicious. Otherwise, the misbehavior report is accepted and trusted. With the MRA scheme, BAACK is capable of detecting malicious nodes despite the existence of false misbehavior reports.
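The MRA decision at the destination node can be sketched as a simple lookup against the local knowledge base of received packets (an illustration of the rule above, not the authors' code; the names are hypothetical):

```python
# Sketch of the MRA check (illustrative; names are hypothetical): when
# an MRA packet arrives over an alternative route, the destination looks
# up whether the reported "missing" packet was in fact already received.
def mra_verdict(received_log: set, reported_packet_id: str) -> str:
    if reported_packet_id in received_log:
        # The packet did arrive: the report is false; its generator
        # should be marked as malicious.
        return "false report"
    return "report trusted"

log = {"pkt-1", "pkt-2"}
print(mra_verdict(log, "pkt-1"), "/", mra_verdict(log, "pkt-9"))
```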


Secure IDS in DSA and RSA: The signature size of DSA is much smaller than the signature size of RSA, so the DSA scheme always produces slightly less network overhead than RSA does. However, it is interesting to observe that the routing overhead (RO) difference between the RSA and DSA schemes varies with different numbers of malicious nodes [16]. The more malicious nodes there are, the more RO the RSA scheme produces. We assume that this is due to the fact that more malicious nodes require more acknowledgment packets, thus increasing the ratio of digital signatures in the whole network overhead. With respect to this result, DSA is a more desirable digital signature scheme in MANETs [1], since data transmission in MANETs consumes the most battery power. Although the DSA scheme requires more computational power for verification than RSA, considering the tradeoff between battery power and performance, DSA is still preferable.

IV. CONCLUSION

In this paper, a comparative study of secure intrusion detection systems (SIDS) for discovering malicious nodes and attacks on MANETs is presented. Due to some special characteristics of MANETs, prevention mechanisms alone are not adequate to maintain secure networks. In this case, detection should be added as another layer of defense before an attacker can damage the system. We study a secure IDS, the BAACK protocol, specially designed for MANETs; in future work it should be compared against other popular mechanisms. Security is a major concern in MANETs, and a hybrid cryptography architecture can tackle the issue in an efficient manner; in this way we can better preserve the battery and memory space of mobile nodes.

REFERENCES
[1] E. M. Shakshuki, N. Kang, and T. R. Sheltami, "EAACK - A secure intrusion detection system for MANETs."
[2] M. Kuchaki Rafsan, A. Movaghar, and F. Koroupi, "Investigating intrusion and detection systems in MANET and comparing IDSs for detecting misbehaving nodes," World Academic of Science Engineering and Technology, vol. 44, 2008.
[3] L. Zhou and Z. J. Haas, "Securing ad hoc networks," IEEE Network, Nov./Dec. 1999.
[4] A. Mishra, K. M. Nadkarni, and M. Ilyas, "Chapter 30: Security in wireless ad-hoc networks," in The Handbook of Ad Hoc Wireless Networks. CRC Press, 2009.

Digital Signature: BAACK is an acknowledgment-based IDS scheme. All three modules of BAACK, namely ACK, S-ACK, and MRA, are acknowledgment-based detection schemes: they all rely on ACK packets to detect misbehavior in the network. Thus, it is very important to ensure that all acknowledgment packets in BAACK are untainted and authentic; otherwise, if attackers are able to forge acknowledgment packets, all three schemes become vulnerable. To overcome this problem, a digital signature is incorporated into the secure IDS. In order to guarantee the integrity of the IDS, BAACK requires every acknowledgment packet to be digitally signed before it is sent and verified before it is accepted [1].
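The signature-size tradeoff between DSA and RSA discussed earlier can be illustrated with back-of-envelope arithmetic; the sizes below assume 1024-bit RSA (128-byte signatures) and DSA with a 160-bit subgroup (40-byte signatures), which are our assumptions rather than parameters stated in the paper:

```python
# Back-of-envelope sketch of the DSA-vs-RSA signature overhead. The
# sizes are illustrative assumptions: 1024-bit RSA gives a 128-byte
# signature; DSA with a 160-bit subgroup gives two 160-bit values,
# i.e., 40 bytes. The paper does not state these parameters.
DSA_SIG_BYTES = 40
RSA_SIG_BYTES = 128

def signature_overhead(n_ack_packets: int, sig_bytes: int) -> int:
    """Total bytes added by digitally signing every acknowledgment packet."""
    return n_ack_packets * sig_bytes

# More malicious nodes trigger more ACK/S-ACK/MRA packets, so the gap
# between the two schemes grows with the number of malicious nodes.
for n in (100, 1000):
    print(n, signature_overhead(n, DSA_SIG_BYTES),
          signature_overhead(n, RSA_SIG_BYTES))
```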


[4] S. Marti, T. J. Giuli, K. Lai, and M. Baker, "Mitigating routing misbehavior in mobile ad hoc networks," in Proc. 6th Annu. Int. Conf. Mobile Comput. Netw., Boston, MA, 2000, pp. 255-265.
[5] H. L. Nguyen and U. T. Nguyen, "A study of different types of attacks on multicast in mobile ad hoc networks," Elsevier Ad Hoc Networks, 2008, pp. 32-46.
[6] D. Johnson and D. Maltz, "Dynamic source routing in ad hoc wireless networks," in Mobile Computing. Norwell, MA: Kluwer, 1996, ch. 5, pp. 153-181.
[7] T. Sheltami, A. Al-Roubaiey, E. Shakshuki, and A. Mahmoud, "Video transmission enhancement in presence of misbehaving nodes in MANETs," Int. J. Multimedia Syst., vol. 15, no. 5, pp. 273-282, Oct. 2009.
[8] K. Stanoevska-Slabeva and M. Heitmann, "Impact of mobile ad-hoc networks on the mobile value system," in Proc. 2nd Conf. m-Bus., Vienna, Austria, Jun. 2010.
[9] A. Tabesh and L. G. Frechette, "A low-power stand-alone adaptive circuit for harvesting energy from a piezoelectric micropower generator," IEEE Trans. Ind. Electron., vol. 57, no. 3, pp. 840-849, Mar. 2010.
[10] A. M. Abdulla, I. A. Saroit, A. Kotb, and A. H. Afsari, "Misbehavior nodes detection and isolation for MANETs OLSR protocol," Elsevier, 2010.


PERFORMANCE ANALYSIS OF MULTICARRIER DS-CDMA SYSTEM USING BPSK MODULATION

Prof. P. Subbarao¹, MSc (Engg), FIETE, MISTE, Department of ECE, S.R.K.R Engineering College, A.P, India
Veeravalli Balaji², B.Tech (M.Tech), Department of ECE, S.R.K.R Engineering College, Bhimavaram, A.P, India

Abstract: In this paper we apply a multicarrier signalling technique to a direct-sequence CDMA system, where a data sequence multiplied by a spreading sequence modulates multiple carriers, rather than a single carrier. The receiver provides a correlator for each carrier, and the outputs of the correlators are combined with a maximal-ratio combiner. This type of signalling has the desirable properties of exhibiting a narrowband interference suppression effect, along with robustness to fading, without requiring the use of either an explicit RAKE structure or an interference suppression filter. We use band-limited spreading waveforms to prevent self-interference, and we evaluate system performance over a frequency-selective Rayleigh channel in the presence of partial-band interference. There is no interference from the CDMA signals to the existing microwave systems; thus, there is no need for either a narrowband suppression filter at the receiver or a notch filter at the transmitter. This paper specifically analyses the BER performance of multicarrier DS-CDMA under Rayleigh fading channel conditions in the presence of AWGN (Additive White Gaussian Noise) using BPSK modulation, for different numbers of subcarriers and different numbers of users, using a MATLAB program.

Keywords: CDMA, multicarrier DS-CDMA, AWGN, BER, Rayleigh fading channel

1. INTRODUCTION

Direct sequence spread spectrum (DS-SS) techniques are applied to multiple-access communications [1]. This is partly due to their multiple-access capability, robustness against fading, and anti-interference capability. In direct sequence spread spectrum, the stream of information to be transmitted is divided into small pieces, each of which is allocated to a frequency channel across the spectrum. A data signal at the point of transmission is combined with a higher-data-rate bit sequence (also known as a chipping code) that divides the data according to a spreading ratio. The redundant chipping code helps the signal resist interference and also enables the original data to be recovered if data bits are damaged during transmission.

1.1 Multicarrier DS-CDMA

In this paper, we propose a multicarrier DS SS system [2][3] in which a data sequence multiplied by a spreading sequence modulates M carriers, rather than a single carrier. The receiver provides a correlator for each carrier, and the outputs of the correlators are combined with a maximal-ratio combiner. This type of system has the following advantages. First, a multicarrier DS SS system is robust to multipath fading. Second, a multicarrier system has a narrowband interference suppression effect. Finally, a lower chip rate is required, since, in a multicarrier DS system with M carriers, the entire bandwidth of the system is divided into M (not necessarily contiguous) equi-width frequency bands, and thus each carrier frequency is modulated by a spreading sequence with a chip duration which is M times as long as that of a single-carrier system. In other words, a multicarrier system requires a lower-speed, parallel type of signal processing, in contrast to the fast, serial type of signal processing in a single-carrier RAKE receiver [6]. This, in turn, might be helpful for use with a low-power-consumption device.

In fact, multicarrier DS systems have already been proposed, and these proposed techniques can be categorized into two types: a combination of orthogonal frequency division multiplexing (OFDM) and CDMA, or a parallel transmission scheme [8] of narrowband DS waveforms in the frequency domain. In the former system, a spreading sequence is serial-to-parallel converted, and each chip modulates a different carrier frequency. This implies that the number of carriers should be equal to the processing gain, and each carrier conveys a narrowband waveform, rather than a DS waveform. In other words, the resulting signal has a PN-coded structure in the frequency


domain. In the latter system, the available frequency spectrum is divided into M equi-width frequency bands, where M is the number of carriers, typically much less than the processing gain, and each frequency band is used to transmit a narrowband DS waveform. In fact, both systems show a similar fading mitigation effect over a frequency selective channel. However, the latter system requires only M adaptive gain amplifiers in the maximal ratio combiner, which may simplify the receiver. The system described in this paper belongs to the second group.

multipath. A multicarrier system can be considered as one realization of such a wideband DS system.

In a multicarrier system [9], carrier frequencies are usually chosen to be orthogonal to each other, i.e., the carrier frequencies satisfy the following condition:

∫₀^Tc cos(ωᵢt + θᵢ) cos(ωⱼt + θⱼ) dt = 0, for i ≠ j ......(1)

where Tc is the chip duration, ωᵢ and ωⱼ are, respectively, the i-th and j-th carrier frequencies, and θᵢ and θⱼ are arbitrary carrier phases. This is done so that a signal in the j-th frequency band does not cause interference in the correlation receiver for the i-th frequency band. However, in an asynchronous CDMA system [5], signals from other users are no longer orthogonal to the desired signal, even if (1) is satisfied. In addition, orthogonality might be lost because of multipath propagation or Doppler frequency shift, even for a synchronous system. This implies that co-channel interference [4] in one frequency band causes interference not only at the output of the correlator for that frequency band, but also in the signals out of all the other correlators. In this paper, we use band-limited multicarrier DS waveforms to minimize such unnecessary self-interference, and so orthogonality among carriers is not required. This signalling scheme also prevents narrowband waveforms from causing interference to all frequency bands.

Fig 1. (a) PSD of single-carrier DS waveform. (b) PSD of multicarrier DS waveform.

Fig. 1(a) shows a band-limited single-carrier wideband DS waveform in the frequency domain, where the bandwidth, BW₁, is given by

BW₁ = (1 + α)/Tc ......(2)

In (2), 0 ≤ α ≤ 1, and Tc is the chip duration of the single-carrier system. In a multicarrier system, we divide BW₁ into M equi-width frequency bands as shown in Fig. 1(b), where all bands are disjoint. Then the bandwidth of each frequency band, BW_M, is given by

BW_M = BW₁/M = (1 + α)/(M·Tc) ......(3)
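The orthogonality condition in (1) can be checked numerically; the carrier indices, phases, and the simple midpoint-rule integration below are illustrative choices:

```python
import math

# Numerical check of the orthogonality condition (1): carriers spaced at
# integer multiples of 2*pi/Tc integrate to ~0 over one chip when i != j.
# The indices (3, 5) and phases are arbitrary illustrative values.
def corr(wi, wj, phi_i, phi_j, Tc=1.0, n=200_000):
    dt = Tc / n
    return sum(math.cos(wi * (k + 0.5) * dt + phi_i) *
               math.cos(wj * (k + 0.5) * dt + phi_j)
               for k in range(n)) * dt

Tc = 1.0
w = lambda m: 2 * math.pi * m / Tc    # an orthogonal carrier set
cross = corr(w(3), w(5), 0.4, 1.1)    # i != j: integral is ~0
auto  = corr(w(3), w(3), 0.4, 0.4)    # i == j: integral is ~Tc/2
print(round(cross, 6), round(auto, 3))
```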

2.1 TRANSMITTER

The transmitter takes as input a random binary sequence representing data, and pseudo-random spreading signature sequences are applied at the multiplier. We assume that there are N chips per symbol, and that each user has a different signature sequence. The sequence modulates an impulse train, where the energy per chip is Ec. After passing through a chip wave-shaping filter, the signal out of the filter modulates the multiple carrier signals and is transmitted. Note that the transmitter and receiver block diagrams for the proposed multicarrier

2. SYSTEM MODEL

In recent years, several wideband CDMA systems have been proposed, either to realize an overlay system [5], where DS CDMA waveforms are overlaid onto existing narrowband signals to enhance the overall capacity, or to combat


system can be effectively implemented by using a DFT technique.
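The spreading and despreading steps described in this section can be sketched as follows; the 8-chip signature and the data symbols are arbitrary illustrative values:

```python
# Sketch of direct-sequence spreading: each data symbol is multiplied by
# an N-chip signature sequence, and the receiver despreads by correlating
# each N-chip block with the same signature. Values are illustrative.
N = 8
signature = [1, -1, 1, 1, -1, -1, 1, -1]   # one user's +/-1 chip sequence
data = [1, -1, 1]                           # BPSK data symbols

chips = [d * c for d in data for c in signature]          # spreading
recovered = [1 if sum(chips[i * N + j] * signature[j]     # despreading
                      for j in range(N)) > 0 else -1
             for i in range(len(data))]
print(recovered)  # [1, -1, 1]
```

The correlation of each block with its own signature equals ±N, so the data symbols are recovered exactly in the noiseless case.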


Hence the spectral density of such a waveform is a sinc function squared, with first zeros at ±1/Tc.

Fig 2. Block Diagram of transmitter

2.2 Pseudo-Noise Sequences:

PN sequences are periodic sequences that have a noise-like behaviour. They are generated using shift registers, modulo-2 adders (XOR gates), and feedback loops, as the following diagram illustrates:

Fig 4. Generation of a PN sequence

The maximum length of a PN sequence is determined by the length of the register and the configuration of the feedback network. An N-bit register can take up to 2^N different combinations of zeros and ones. Since the feedback network performs linear operations, if all the inputs (i.e., the contents of the flip-flops) are zero, the output of the feedback network will also be zero. Therefore, the all-zero combination will always give zero output for all subsequent clock cycles, so we do not include it in the sequence. Thus, the maximum length of any PN sequence is 2^N − 1, and sequences of that length are called Maximum-Length Sequences or m-sequences.
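The m-sequence construction can be sketched with a 5-stage Fibonacci LFSR; the feedback taps (stages 5 and 3) are one known maximal-length configuration, chosen here for illustration:

```python
# Sketch of a maximum-length PN generator: a 5-stage Fibonacci LFSR
# (shift register + modulo-2 feedback), as described above. The taps
# (stages 5 and 3) are an assumed known maximal-length configuration.
def m_sequence(length=31):
    reg = [1, 0, 0, 0, 0]          # 5-stage register, any nonzero seed
    out = []
    for _ in range(length):
        out.append(reg[-1])         # output the last stage
        fb = reg[-1] ^ reg[2]       # modulo-2 add (XOR) of tap stages
        reg = [fb] + reg[:-1]       # shift, feeding fb back in
    return out

seq = m_sequence()
print(len(seq), sum(seq))  # period 31; 16 ones vs 15 zeros (balanced)
```

One period contains 2^5 − 1 = 31 chips with 16 ones and 15 zeros, illustrating the balance property stated above.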

Impulse modulator: It is used to modulate the impulse train coming from the multiplier, where the energy per chip is Ec.

Wave-shaping filter: This filter is used to modify the shape of the waveform, i.e., it adjusts the energy levels of the waveform before the sequence modulates the multiple carriers.

So far we haven't discussed what properties we would want the spreading signal to have. This depends on the type of system we want to implement. Let's first consider a system where we want to use spread spectrum to avoid jamming or narrowband interference. If we want the signal to overcome narrowband interference, the spreading function needs to behave like noise. Random binary sequences are such functions. They have the following important properties:

- Balanced: they have an equal number of 1's and 0's
- Single-peak auto-correlation function

In fact, the auto-correlation function of a random binary sequence is a triangular waveform as in the following figure, where Tc is the period of one chip:

3. CHANNEL MODEL

3.1 RAYLEIGH CHANNEL MODEL

Rayleigh fading is a statistical model [9] for the effect of a propagation environment on a radio signal, such as that used by wireless devices. Rayleigh fading models assume that the magnitude of a signal that has passed through such a transmission medium will vary randomly, or fade, according to a Rayleigh distribution: the radial

Fig 3. Auto Correlation function
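The single-peak auto-correlation property can be verified numerically on a short m-sequence; the length-7 sequence 1110010 used below is a standard example chosen by us, not taken from the paper:

```python
# Numerical check of the single-peak property on the length-7 m-sequence
# 1110010 (a standard example, chosen for illustration), with chips
# mapped 0 -> -1 and 1 -> +1: the periodic auto-correlation is N at zero
# shift and -1 at every other shift.
seq = [1, 1, 1, 0, 0, 1, 0]
b = [2 * c - 1 for c in seq]
N = len(b)

def R(tau):
    return sum(b[i] * b[(i + tau) % N] for i in range(N))

print([R(t) for t in range(N)])  # [7, -1, -1, -1, -1, -1, -1]
```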


component of the sum of two uncorrelated Gaussian random variables. Rayleigh fading is viewed as a reasonable model for tropospheric and ionospheric signal propagation as well as for the effect of heavily built-up urban environments on radio signals. Rayleigh fading is most applicable when there is no dominant propagation along a line of sight between the transmitter and receiver. It is a reasonable model when there are many objects in the environment that scatter the radio signal before it arrives at the receiver; if there is sufficient scatter, the channel impulse response will be well modelled as a Gaussian process irrespective of the distribution of the individual components. If there is no dominant component to the scatter, then such a process will have zero mean and phase evenly distributed between 0 and 2π radians. The envelope of the channel response will therefore be Rayleigh distributed.
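The Rayleigh envelope described above can be generated directly from its definition as the magnitude of two uncorrelated zero-mean Gaussian components; the sample size and seed are arbitrary:

```python
import math, random

# Sketch of the Rayleigh envelope: the magnitude (radial component) of
# two uncorrelated zero-mean Gaussian components. Sample count and seed
# are arbitrary; the sample mean should approach sigma*sqrt(pi/2).
random.seed(1)
sigma = 1.0
env = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
       for _ in range(200_000)]

mean = sum(env) / len(env)
print(round(mean, 3), round(sigma * math.sqrt(math.pi / 2), 3))
```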


Serial-to-parallel converter: At the transmitter end, N bits are sent simultaneously over N subcarriers. Each subcarrier transmits a different symbol with a spreading code in the time domain. This converter is used to separate the different symbols of each subscriber.

Maximal ratio combiner: Maximal Ratio Combining is defined as all paths co-phased and summed with optimal weighting to maximize the combiner output SNR. MRC is the optimum linear combining technique for coherent reception with independent fading. Its main advantage is the reduction of the probability of deep fades.
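Maximal-ratio combining can be sketched for real-valued branch gains as follows; the gains and noise variance are illustrative assumptions:

```python
import math, random

# Sketch of maximal-ratio combining over three paths: each branch is
# weighted by its own (assumed known) real gain and summed, which
# maximizes the combiner output SNR. Gains and noise variance are
# illustrative assumptions.
random.seed(7)
gains = [0.3, 1.2, 0.7]        # per-path channel gains
symbol = 1.0                    # transmitted BPSK symbol
noise_var = 0.5

received = [g * symbol + random.gauss(0, math.sqrt(noise_var)) for g in gains]

combined = sum(g * r for g, r in zip(gains, received))   # MRC weighting
decision = 1.0 if combined >= 0 else -1.0

# Output SNR of MRC equals the sum of the branch SNRs.
snr_out = sum(g * g for g in gains) / noise_var
print(decision, round(snr_out, 2))
```

Weighting each branch by its own gain gives strong branches more influence, which is exactly why the probability of a deep fade at the combiner output drops.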

3.2 AWGN CHANNEL MODEL

In the Additive White Gaussian Noise channel model [9], as the name indicates, Gaussian noise is added directly to the signal; scattering and fading of the information are not considered in this model. Additive white Gaussian noise (AWGN) is a channel model in which the only impairment to communication is a linear addition of wideband or white noise with a constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian distribution of amplitude. The model does not account for fading, frequency selectivity, interference, etc. However, it produces simple and tractable mathematical models which are useful for gaining insight into the underlying behaviour of a system before these other phenomena are considered.

Fig 6. Maximal ratio combiner

The chip wave-shaping filter H(f) is normalized so that

∫ |H(f)|² df = 1 ......(4)

Also, we assume that H(f) is band-limited to W′, where

W′ = BW_M/2 = (fᵢ₊₁ − fᵢ)/2 ......(6)

and fᵢ is the i-th carrier frequency. This implies that the DS waveforms do not overlap.

3.3 RECEIVER

Fig 5. Block diagram of the receiver

4. SIMULATION RESULTS

Bit error rate (BER) of a communication system is defined as the ratio of the number of error bits to the total number of bits transmitted during a specific period. It is the likelihood that a single error bit will occur within the received bits, independent of the rate of transmission. There are many ways of reducing BER. In our case, we have considered the most commonly used channel: the Additive White Gaussian Noise (AWGN) channel, where the noise is spread over the whole spectrum of frequencies.

BER has been measured by comparing the transmitted signal with the received signal and


computing the error count over the total number of bits. For BPSK modulation, the BER is normally expressed in terms of the signal-to-noise ratio (SNR).
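The BER measurement procedure described here can be sketched as a small Monte-Carlo simulation of BPSK over AWGN, compared against the closed-form BER 0.5·erfc(√(Eb/N0)); the bit count and seed are arbitrary:

```python
import math, random

# Monte-Carlo sketch of the BER measurement described here: transmit
# random BPSK bits, add white Gaussian noise for a given Eb/N0, count
# bit errors, and compare with the closed form 0.5*erfc(sqrt(Eb/N0)).
# Bit count and seed are arbitrary.
def bpsk_ber_awgn(ebn0_db, nbits=200_000, seed=3):
    random.seed(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))   # noise std for unit-energy symbols
    errors = 0
    for _ in range(nbits):
        bit = random.getrandbits(1)
        r = (1.0 if bit else -1.0) + random.gauss(0, sigma)
        errors += (r >= 0) != bool(bit)
    return errors / nbits

theory = lambda db: 0.5 * math.erfc(math.sqrt(10 ** (db / 10)))
sim, th = bpsk_ber_awgn(4), theory(4)
print(round(sim, 4), round(th, 4))  # simulated vs. theoretical, both near 0.013
```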


The figure shows the simulated graph between BER and SNR for a 2-user system under Rayleigh fading channel conditions of multicarrier DS-CDMA in the presence of AWGN (Additive White Gaussian Noise) using BPSK modulation.

Fig 7. BER for 2-user system and single-user system
Fig 8. BER for 4-user system and single-user system
Fig 9. BER for 8-user system and 6-user system
Fig 10. BER for 12-user system and 10-user system
Fig 11. BER for 14-user system and 3-user system
(Each figure plots Bit Error Rate versus SNR, dB, for BPSK modulation with spread-spectrum techniques.)

Computer simulations are done to obtain the SNR vs. BER performance of multicarrier DS-CDMA for different channel noise conditions.

CONCLUSION

In this paper the performance of multicarrier DS-CDMA in an AWGN channel and a Rayleigh channel using BPSK modulation is considered. It is evident that as Eb/N0 increases, the BER decreases. BER vs. SNR graphs for different numbers of users under a Rayleigh fading channel in the presence of AWGN are plotted and analysed using MATLAB programming.

FUTURE SCOPE

Multicarrier DS-CDMA technology is useful for 3G and 4G mobile generations. There is very wide scope for future scholars to explore this area of research in the field of multicarrier DS-CDMA.

Further work can be carried out:


1. To evaluate the effect of timing jitter in a MC DS-CDMA system in the presence of fading.
2. To evaluate the performance of a MC DS-CDMA system with a RAKE receiver to overcome the effect of fading.
3. To find the performance limitations due to non-orthogonality among the subcarriers (due to imperfect carrier synchronization).
4. To evaluate the performance improvement with forward error correction coding such as convolutional coding and Turbo coding.

REFERENCES
[1] G. L. Turin, "Introduction to spread-spectrum antimultipath techniques and their application to urban digital radio," Proc. IEEE, vol. 68.
[2] R. E. Ziemer and R. L. Peterson, Digital Communications and Spread Spectrum Systems. New York: Macmillan.
[3] R. L. Pickholtz, D. L. Schilling, and L. B. Milstein, "Theory of spread-spectrum communications - A tutorial," IEEE Trans. Commun., vol. COM-30, no. 5.
[4] S. Kondo and L. B. Milstein, "Multicarrier CDMA system with co-channel interference cancellation," in Proc. VTC '94, Stockholm, Sweden, pp. 1640-1644.
[5] "Multicarrier DS CDMA systems in the presence of partial band interference," in Proc. MILCOM, Fort Monmouth, NJ.
[6] J. Proakis, Digital Communications. McGraw-Hill.
[7] R. E. Ziemer and R. L. Peterson, Digital Communications and Spread Spectrum Systems. Macmillan.
[8] M. Schwartz, W. R. Bennett, and S. Stein, Communication Systems and Techniques. McGraw-Hill.
[9] G. Brindha, "Performance analysis of MC-CDMA system using BPSK modulation," International Journal of Research in Engineering & Technology (IJRET), vol. 1, issue 1, pp. 45-52.


Iterative MMSE-PIC Detection Algorithm for MIMO OFDM Systems

Gorantla Rohini Devi

K.V.S.N.Raju

Department of ECE SRKR Engineering College Bhimavaram, AP, India grohini09@gmail.com

Head of ECE Department SRKR Engineering College Bhimavaram, AP, India kvsn45@yahoo.co.uk

Buddaraju Revathi Asst. Professor, Department of ECE,

SRKR Engineering College Bhimavaram, AP, India buddaraju.revathi@gmail.com

the space dimension to improve wireless system capacity, range, and reliability. A MIMO system can be employed to transmit several data streams in parallel at the same time and on the same frequency, but on different transmit antennas. MIMO systems arise in many modern communication channels, such as multiple-user communication and multiple-antenna channels. It is well known that the use of multiple transmit and receive antennas promises substantial performance gains compared to single-antenna systems. The combination of MIMO and OFDM is very natural and beneficial, since OFDM enables support of more antennas and larger bandwidth and simplifies equalization in MIMO systems. A MIMO-OFDM system offers high spectral efficiency and good diversity gain against multipath fading channels [2][3]. The performance of a MIMO system depends on the detection technique used at the MIMO receiver. The detector that minimizes the bit error rate (BER) is the maximum likelihood (ML) detector, but the ML detector is impractical as its computational complexity is exponential. On the other hand, linear detectors, such as zero-forcing (ZF) and minimum mean square error (MMSE) receivers, have low decoding complexity, but their detection performance decreases in proportion to the number of transmit antennas. Therefore, there has been study of a low-complexity nonlinear receiver, namely the parallel interference cancellation (PIC) receiver, which decodes data streams in parallel through nulling and cancelling. The PIC algorithm [4] relies on a parallel detection of the received block: at each step, all symbols are detected and their estimates are subtracted from the received block. PIC detection is used to reduce complexity and prevent error propagation. PIC detection uses the reconstructed signal to improve detection performance through an iteration process. The iterative MMSE-PIC detection algorithm [5][6] gives the best detection performance among the nonlinear receivers considered.

To improve the performance of the overall system, the output of the detector is fed back as input to the PIC detection stage. By exchanging information between the MIMO detector and the decoder, the performance of the receiver may be greatly enhanced, and the bit error rate (BER) performance improves as the number of iterations increases. PIC detects symbols in parallel, which reduces the interference and therefore increases the reliability of the decision process. The channel is taken as a flat-fading Rayleigh multipath channel and the modulation as BPSK. MIMO-OFDM technology has been investigated as the infrastructure for next-generation wireless networks.

Abstract- Wireless communication systems are required to provide high data rates, which is essential for many services such as video, high quality audio and mobile integrated services. When data transmission is affected by fading and interference effects the information will be altered. Multiple Input Multiple Output (MIMO) technique is used to reduce the multipath fading. Orthogonal Frequency Division Multiplexing (OFDM) is one of the promising technologies to mitigate the ISI. The combination of MIMO-OFDM systems offers high spectrum efficiency and diversity gain against multipath fading channels. Different types of detectors such as ZF, MMSE and PIC, Iterative PIC. These detectors improved the quality of received signal in high interference environment. Implementations of these detectors verified the improvement of the BER v/s SNR performance. Iterative PIC technique give best performance in noise environment compared to ZF, MMSE and PIC. Keywords: Orthogonal Frequency Division Multiplexing (OFDM), Multiple Input Multiple Output (MIMO), Zero Forcing (ZF), Minimum Mean Square Error (MMSE), Parallel Interference Cancellation (PIC), Bit Error Rate (BER), Signal to Noise Ratio (SNR), Inter Symbol Interference (ISI), Binary Phase Shift Keying (BPSK). I. INTRODUCTION

In wireless communication the signal from a transmitter will be transmitted to a receiver along with a number of different paths, collectively referred as multipath. These paths may causes interference from one another and result in the original data being altered. This is known as “Multipath fading�. Furthermore wireless channel suffer from co-channel interference (CCI) from other cells that share the same frequency channel, leading to distortion of the desired signal and also low system performance. Therefore, wireless system must be designed to mitigate fading and interference to guarantee a reliable communication. High data rate wireless systems with very small symbol periods usually face unacceptable Inter Symbol Interference (ISI) originated from multi-path propagation and their inherent delay spread. Orthogonal Frequency Division Multiplexing (OFDM) has emerged as one of the most practical techniques for data communication over frequency-selective fading channels into flat selective channels. OFDM is one of the promising technologies to mitigate the ISI. On the other hand, to increase the spectral efficiency of wireless link, MultipleInput Multiple-Output (MIMO) systems [1]. It is an antenna technology that is used both in transmitter and receiver equipment for wireless radio communication. MIMO exploit

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

57

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

II. SYSTEM MODEL

A. Spatial Multiplexing

Consider a MIMO-OFDM system with L transmit and P receive antennas. When the MIMO technique of spatial multiplexing is applied, encoding can be done either jointly over the multiple transmitter branches or separately on each branch.


The transmission of multiple data streams over more than one antenna is called spatial multiplexing. It yields a linear capacity increase (in the minimum of the number of transmit and receive antennas) compared to systems with a single antenna at one or both sides of the wireless link, at no additional power or bandwidth expenditure. The corresponding gain is available if the propagation channel exhibits rich scattering, and it can be realized by the simultaneous transmission of independent data streams in the same frequency band. The receiver exploits differences in the spatial signatures induced by the MIMO channel onto the multiplexed data streams to separate the different signals, thereby realizing a capacity gain.
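The linear capacity scaling described above can be illustrated numerically with the standard ergodic MIMO capacity expression C = E[log2 det(I + (SNR/N_T) H H^H)] from the information-theoretic literature cited as [1]; the antenna counts, SNR, and trial count below are illustrative assumptions, not part of this paper's simulations.

```python
import numpy as np

def mimo_capacity(n_t, n_r, snr_linear, trials=2000, seed=0):
    """Average capacity (bit/s/Hz) of an i.i.d. Rayleigh N_R x N_T MIMO channel."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        # Unit-variance complex Gaussian channel entries (rich scattering).
        h = (rng.standard_normal((n_r, n_t))
             + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
        m = np.eye(n_r) + (snr_linear / n_t) * h @ h.conj().T
        total += np.log2(np.linalg.det(m).real)
    return total / trials

snr = 10 ** (20 / 10)           # 20 dB, illustrative
c1 = mimo_capacity(1, 1, snr)
c2 = mimo_capacity(2, 2, snr)
c4 = mimo_capacity(4, 4, snr)
print(c1, c2, c4)               # grows roughly linearly with min(N_T, N_R)
```

The printed capacities grow approximately in proportion to the number of antenna pairs, matching the "linear capacity increase" statement above.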

B. Diversity Schemes

In diversity schemes, two or more copies of a signal are sent over different paths by using multiple antennas at the transmitting and receiving sides. The antenna spacing is chosen in such a way that interference between the signals is avoided. Diversity schemes are used to improve link reliability: spatial diversity improves the signal quality and achieves a higher signal-to-noise ratio at the receiver side. Diversity gain is obtained by transmitting the data signal over multiple independently fading dimensions in time, frequency, and space, and by performing proper combining in the receiver. Spatial diversity is particularly attractive compared to time or frequency diversity, as it does not incur expenditure in transmission time or bandwidth. Diversity provides the receiver with several (ideally independent) replicas of the transmitted signal and is therefore a powerful means to combat fading and interference and thereby improve link reliability.
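As an illustration of combining independent replicas (a sketch, not an experiment from this paper), a maximal-ratio-combining example over independent Rayleigh branches shows the BER benefit of spatial diversity; the branch count, noise level, and symbol count are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_branches, n_sym = 4, 20000
symbols = rng.choice([-1.0, 1.0], size=n_sym)      # BPSK data

# Each branch sees an independent Rayleigh fade plus complex noise:
# several (ideally independent) replicas of the same signal.
h = (rng.standard_normal((n_branches, n_sym))
     + 1j * rng.standard_normal((n_branches, n_sym))) / np.sqrt(2)
n = (rng.standard_normal((n_branches, n_sym))
     + 1j * rng.standard_normal((n_branches, n_sym))) / np.sqrt(2)
y = h * symbols + n

# Maximal-ratio combining: weight each replica by its conjugate channel gain.
combined = np.sum(np.conj(h) * y, axis=0).real
ber_mrc = np.mean(np.sign(combined) != symbols)

# Single-antenna reference: detect from branch 0 only.
single = (np.conj(h[0]) * y[0]).real
ber_single = np.mean(np.sign(single) != symbols)
print(ber_single, ber_mrc)      # combining yields far fewer errors
```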


Fig. 1. Schematic of PIC detection for the MIMO-OFDM system

The block diagram in Figure 1 consists of two users: one user acts as the source while the other acts as the destination, and the two users exchange their information at different instants of time. In the MIMO channel model, L transmit antennas carry the data simultaneously, while the receiver has P antennas. The binary data are converted into a digitally modulated signal using the BPSK modulation technique and then passed through a serial-to-parallel converter. The modulated symbols are applied to the IFFT block, which produces the time-domain OFDM signal at its output. After that, a Cyclic Prefix (CP) is added to mitigate the ISI effect. This information is sent to a parallel-to-serial converter, and the symbols are transmitted over the MIMO channel, with AWGN added at the receiver side. At the receiver, serial-to-parallel conversion occurs first and the cyclic prefix is removed. The received signal samples are sent to a fast Fourier transform (FFT) block to demultiplex the multi-carrier signals, and ZF / MMSE / PIC / Iterative PIC detectors are used to separate the user signals at each element of the receiver antenna array. Finally, the outputs are demodulated and the resulting data combined to obtain the binary output data.
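The transmit/receive chain described above can be sketched for a single antenna branch (BPSK → IFFT → cyclic prefix → multipath channel → CP removal → FFT → per-subcarrier equalization). The FFT size, CP length, and 3-tap channel are illustrative assumptions, and noise is omitted for clarity.

```python
import numpy as np

rng = np.random.default_rng(2)
n_fft, cp_len = 64, 16
bits = rng.integers(0, 2, n_fft)
sym = 2.0 * bits - 1.0                       # BPSK mapping: 0 -> -1, 1 -> +1

tx = np.fft.ifft(sym)                        # IFFT: frequency -> time domain
tx_cp = np.concatenate([tx[-cp_len:], tx])   # add cyclic prefix

h = np.array([0.8, 0.5, 0.3])                # illustrative 3-tap multipath channel
rx = np.convolve(tx_cp, h)[:len(tx_cp)]      # CP turns this into circular convolution

rx_no_cp = rx[cp_len:cp_len + n_fft]         # remove cyclic prefix
Y = np.fft.fft(rx_no_cp)                     # back to frequency domain
H = np.fft.fft(h, n_fft)                     # channel is one complex tap per subcarrier
est = Y / H                                  # one-tap equalization per subcarrier
bits_hat = (est.real > 0).astype(int)
print(np.array_equal(bits_hat, bits))        # True: noiseless round trip recovers the bits
```

Because the CP is longer than the channel delay spread, each subcarrier sees a single complex gain, which is exactly why OFDM reduces a frequency-selective channel to flat sub-channels.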

Two kinds of spatial diversity are considered: transmitter diversity and receiver diversity. There are two well-known space-time coding schemes: the space-time block code (STBC) and the space-time trellis code (STTC).

III. PROPOSED DETECTION ALGORITHM FOR MIMO-OFDM SYSTEMS

Co-channel interference is one of the major limitations in cellular telephone networks. In cellular networks such as 3G or beyond 3G (4G), the co-channel interference is caused by frequency reuse. Our main idea is to reject the co-channel interference in MIMO-OFDM cellular systems. To eliminate inter-symbol interference (ISI), different types of equalization techniques for highly interfered channels are used. MIMO-OFDM detection methods consist of linear and nonlinear detection methods. The linear equalizers are ZF [7] and MMSE [8]; the nonlinear equalizers are PIC and Iterative PIC.

1. Zero Forcing (ZF) Equalizer:

The Zero Forcing equalizer is a linear equalization algorithm used in communication systems; it applies the inverse of the channel's frequency response. The output of the equalizer has an overall response equal to one for the symbol being detected and an overall response of zero for the other symbols. If possible, this results in the removal of the interference from all other symbols in the absence of noise. Zero Forcing is a linear method that does not consider the effects of noise; in fact, the noise may be enhanced in the process of eliminating the interference.
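A minimal sketch of the ZF idea (channel inversion via the pseudo-inverse), assuming an illustrative, well-conditioned 2x2 channel and a small noise term:

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative 2x2 flat-fading channel (not from the paper's simulations).
H = np.array([[1.0 + 0.2j, 0.3 - 0.1j],
              [0.2 + 0.1j, 0.9 - 0.3j]])
x = np.array([1.0, -1.0])                    # transmitted BPSK symbols
n = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + n

# Zero forcing: W = (H^H H)^{-1} H^H, the pseudo-inverse of H.
W = np.linalg.inv(H.conj().T @ H) @ H.conj().T
x_hat = np.sign((W @ y).real)
print(x_hat)                                 # recovers the signs [1, -1]
```

Note that W also multiplies the noise n, which is the noise-enhancement effect mentioned above.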

MIMO Techniques: Current MIMO systems include MISO and SIMO configurations. MIMO techniques for improving the performance of a wireless system can be divided into two kinds. One is spatial multiplexing, which provides a linear capacity gain in relation to the number of transmit antennas; the other is spatial diversity, which can reduce the BER and improve the reliability of the wireless link.


Consider a 2x2 MIMO system. The received signal on the first antenna is given by:

y_1 = h_{1,1} x_1 + h_{1,2} x_2 + n_1 = [h_{1,1}  h_{1,2}] [x_1; x_2] + n_1   (1)

The received signal on the second antenna is given by:

y_2 = h_{2,1} x_1 + h_{2,2} x_2 + n_2 = [h_{2,1}  h_{2,2}] [x_1; x_2] + n_2   (2)

where y_1 and y_2 are the received symbols on the first and second antennas, h_{i,j} is the channel gain from the j-th transmit antenna to the i-th receive antenna, x_1 and x_2 are the transmitted symbols, and n_1 and n_2 are the noise terms on the first and second receive antennas, respectively. The sampled baseband representation of the signal is given by:

y = Hx + n   (3)

where y is the received symbol vector, H is the channel matrix, x is the transmitted symbol vector, and n is the noise vector. For a system with N_T transmit antennas and N_R receive antennas, the MIMO channel at a given time instant may be represented as an N_R x N_T matrix:

H = [H_{1,1}  H_{1,2}  ...  H_{1,N_T};
     H_{2,1}  H_{2,2}  ...  H_{2,N_T};
     ...
     H_{N_R,1}  H_{N_R,2}  ...  H_{N_R,N_T}]

For the 2x2 case this can be written as:

[y_1; y_2] = [h_{1,1}  h_{1,2}; h_{2,1}  h_{2,2}] [x_1; x_2] + [n_1; n_2]   (4)

To solve for x, we find a matrix W which satisfies WH = I. The Zero Forcing (ZF) detector meeting this constraint is given by:

W = (H^H H)^{-1} H^H   (5)

where W is the equalization matrix and H is the channel matrix. This matrix is known as the pseudo-inverse for a general m x n matrix, where

H^H H = [h*_{1,1}  h*_{2,1}; h*_{1,2}  h*_{2,2}] [h_{1,1}  h_{1,2}; h_{2,1}  h_{2,2}]   (6)

It is clear from this expression that the noise power may be amplified by the factor (H^H H)^{-1}. Using the ZF equalization approach, the receiver can obtain estimates of the two transmitted symbols x_1 and x_2, i.e.

[x̂_1; x̂_2] = (H^H H)^{-1} H^H [y_1; y_2]   (7)

2. Minimum Mean Square Error (MMSE) Equalizer:

An MMSE estimator minimizes the mean square error (MSE), which is a universal measure of estimator quality. The most important characteristic of the MMSE equalizer is that it does not usually eliminate ISI completely but instead minimizes the total power of the noise and ISI components in the output. If the mean square error between the transmitted symbols and the outputs of the detector, or equivalently the received SNR, is taken as the performance criterion, the MMSE detector [9] is the optimal detector that balances cancellation of the interference against reduction of noise enhancement.

As before, the received signals on the first and second antennas are

y_1 = h_{1,1} x_1 + h_{1,2} x_2 + n_1   (8)

y_2 = h_{2,1} x_1 + h_{2,2} x_2 + n_2   (9)

or equivalently, in matrix notation,

y = Hx + n   (10)

To solve for x, we again need a matrix W which satisfies WH = I. The Minimum Mean Square Error (MMSE) linear detector meeting this constraint is given by:

W = [H^H H + N_0 I]^{-1} H^H   (11)

Using MMSE equalization, the receiver can obtain estimates of the two transmitted symbols x_1, x_2, i.e.

[x̂_1; x̂_2] = (H^H H + N_0 I)^{-1} H^H [y_1; y_2]   (12)

3. Parallel Interference Cancellation (PIC):

Here the users' symbols are estimated in a parallel manner. PIC detects all layers simultaneously by subtracting the interference from other layers, regenerated from the ZF or MMSE estimates. PIC detection is used to reduce the complexity and prevent error propagation. The parallel MMSE detector consists of two or more stages: the first stage gives a rough estimate of the substreams, and the second stage refines the estimate. The output can also be further iterated to improve the performance. The first stage is implemented using either the ZF or the MMSE detection algorithm. The MMSE detector minimizes


the mean square error between the actually transmitted symbols and the output of the linear detector:

W = [H^H H + N_0 I]^{-1} H^H   (13)

Using the MMSE detector, the output of the first stage is

d = Dec(W · y)   (14)

where W is the equalization matrix, which is assumed to be known, and Dec(·) is the decision operation. In each step a vector symbol is nulled; this can be written as

S = I · d   (15)

where I is the identity matrix and d contains the rough symbol estimates from the MMSE stage. The PIC detection algorithm can then be expressed as

R = y − H · S   (16)

where S contains the estimated symbols of the MMSE equalizer. The symbols are then re-estimated using the detection scheme with the appropriate column of the channel matrix:

Z = Dec(W · R)   (17)

where R is the output of the PIC equalizer, W is the MMSE equalization matrix, and Z contains the estimated symbols of the PIC equalizer.

4. Iterative PIC detection:

Here, the signal estimated by the decoder is used to reconstruct the transmitted coded signal, and the PIC detection uses the reconstructed signal to improve the detection performance through an iterative process. PIC estimates and subtracts all the interference for each user in parallel in order to reduce the time delay. In the iteration process, the output of the PIC detector is fed back as its input. Combining MMSE detection with PIC cancellation directly impacts the global performance of the system and also the associated complexity, which is directly linked to the number of detection iterations. The Iterative PIC detection scheme for the MIMO system is given by:

For i = 1 : n_T
    c = y − Σ_{j≠i} H(:, j) · Z_j   (18)
    E_i = Dec(W_i · c)

where the sum runs over the other n_T − 1 layers, E is the estimate of the transmitted symbols from the iterative PIC detector, W is the MMSE equalization matrix, c is the interference-cancelled output of the iterative PIC detector, and n_T is the number of transmit antennas.

IV. SIMULATION RESULTS

All simulation results are obtained using the four equalizers (ZF, MMSE, PIC and Iterative PIC) in a MIMO-OFDM system. A Rayleigh fading channel is taken and the BPSK modulation scheme is used. Channel estimation as well as synchronization is assumed to be ideal. We analyze the BER performance of the data transmission in Matlab.

Fig. 2. BER for BPSK modulation with ZF and MMSE equalizers in 2x2 MIMO-OFDM system.

From the plot it is clear that the 2x2 MIMO-OFDM system with the MMSE equalizer outperforms the ZF equalizer for the case of pure equalization. The modulation scheme employed here is BPSK.

Fig. 3. Performance comparison of PIC and Iterative PIC equalizers in 2x2 MIMO-OFDM system.

From the plot it is clear that the 2x2 MIMO-OFDM system with the Iterative PIC equalizer outperforms the PIC equalizer for the case of pure equalization. The coded BER of the proposed scheme is produced after iteration; as the number of iterations increases, the BER is significantly improved. From the simulation results, the proposed Iterative PIC scheme is quite effective compared to PIC. The modulation scheme employed here is BPSK.
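The two-stage MMSE-PIC flow of Equations (13)-(17) can be sketched as follows; the 2x2 channel values and the noiseless received vector are illustrative assumptions, with N0 entering only as the MMSE filter design parameter.

```python
import numpy as np

def dec(v):
    """BPSK decision operation Dec(.)."""
    return np.sign(v.real)

H = np.array([[1.0 + 0.2j, 0.4 - 0.3j],
              [0.3 + 0.1j, 0.9 - 0.2j]])   # illustrative 2x2 channel
x = np.array([1.0, -1.0])                  # transmitted BPSK symbols
N0 = 0.1                                   # MMSE design parameter
y = H @ x                                  # noiseless sketch for clarity

# Stage 1 (Eqs. 13-14): MMSE filter and rough decisions d.
W = np.linalg.inv(H.conj().T @ H + N0 * np.eye(2)) @ H.conj().T
d = dec(W @ y)

# Stage 2 (Eqs. 15-17): for each layer, subtract the regenerated
# interference of all OTHER layers in parallel, then re-detect.
z = np.empty_like(d)
for k in range(len(d)):
    others = d.copy()
    others[k] = 0                # keep only the interfering layers
    r = y - H @ others           # Eq. (16): cancel their contribution
    z[k] = dec(W[k] @ r)         # Eq. (17): re-detect layer k
print(z)                         # recovers the transmitted symbols [1, -1]
```

Feeding z back in place of d and repeating stage 2 gives the iterative PIC of Equation (18).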


Fig. 4. Performance comparison of ZF, PIC and Iterative PIC equalizers in 2x2 MIMO-OFDM system.

From the plot it is clear that the 2x2 MIMO-OFDM system with the Iterative PIC equalizer outperforms the ZF, MMSE, and PIC equalizers for the case of pure equalization. The coded BER of the proposed scheme is produced after iteration; as the number of iterations increases, the BER is significantly improved. The Zero Forcing equalizer removes all ISI and is ideal only when the channel is noiseless. From the simulation results, the proposed Iterative PIC scheme is quite effective compared to ZF and PIC. The modulation scheme employed here is BPSK.

Fig. 5. Performance comparison of ZF, MMSE, PIC and Iterative PIC equalizers in 2x2 MIMO-OFDM system.

From the plot it is clear that the 2x2 MIMO-OFDM system with the Iterative PIC equalizer outperforms the ZF, MMSE, and PIC equalizers for the case of pure equalization. The coded BER of the proposed Iterative PIC scheme is produced after iteration; as the number of iterations increases, the BER is significantly improved. From the simulation results, the proposed scheme is quite effective in all simulation configurations. Moreover, the Iterative PIC detection scheme achieves better diversity gain, since the interference coming from the other layers is completely cancelled. The modulation scheme employed here is BPSK.

V. CONCLUSION

The combination of MIMO and OFDM is used to improve the spectral efficiency and link reliability of wireless communication systems. An Iterative PIC scheme for MIMO-OFDM transmission has been presented, including the use of prior information on the transmit sequence from the MMSE stage for compensation. The performance of the Iterative PIC detection technique is better than that of ZF, MMSE and PIC with the BPSK modulation scheme in a high-interference environment. The simulation results show that the performance of the proposed scheme is greatly improved compared to the other detection receivers for MIMO-OFDM systems.

VI. FUTURE SCOPE

Other modulation techniques, such as QPSK or QAM, may be employed, and the channel encoding part may be integrated.

REFERENCES

[1] I. E. Telatar, "Capacity of multiple-antenna Gaussian channels," Eur. Trans. Telecommun., vol. 10, no. 6, pp. 585-595, Nov./Dec. 1999.
[2] G. J. Foschini and M. J. Gans, "On limits of wireless communications in a fading environment when using multiple antennas," Wirel. Pers. Commun., vol. 6, no. 3, pp. 311-335, Mar. 1998.
[3] A. Paulraj, R. Nabar, and D. Gore, Introduction to Space-Time Wireless Communications, 1st ed. Cambridge, U.K.: Cambridge Univ. Press, 2003.
[4] Junishi Liu, Zhendong Luo, and Yuan'an Liu, "MMSE-PIC MUD for CDMA-based MIMO-OFDM system," IEEE Trans. Commun., vol. 1, Oct. 2005.
[5] Hayashi and H. Sakai, "Parallel interference canceller with adaptive MMSE equalization for MIMO-OFDM transmission," France Telecom R&D Tokyo.
[6] Z. Wang, "Iterative detection and decoding with PIC algorithm for MIMO-OFDM system," Int. J. Communications, Network and System Sciences, Aug. 2009.
[7] V. Jagan Naveen, K. Murali Krishna, and K. Raja Rajeswari, "Performance analysis of equalization techniques for MIMO systems in wireless communication," International Journal of Smart Home, vol. 4, no. 4, Oct. 2010.
[8] Dhruv Malik and Deepak Batra, "Comparison of various detection algorithms in a MIMO wireless communication receiver," International Journal of Electronics and Computer Science Engineering, vol. 1, no. 3, pp. 1678-1685.
[9] J. P. Coon and M. A. Beach, "An investigation of MIMO single-carrier frequency-domain MMSE equalization," in Proc. London Communications Symposium, 2002, pp. 237-240.


Computational Performances of OFDM using Different Pruned Radix FFT Algorithms

Alekhya Chundru
M.Tech Student, Department of Electronics and Communications, SRKR Engineering College, Andhra Pradesh, India. Email: alekhya.chundru@gmail.com

P. Krishna Kanth Varma
Assistant Professor, Department of Electronics and Communications, SRKR Engineering College, Andhra Pradesh, India. Email: Varmakk.Pen3@gmail.com

Abstract- The Fast Fourier Transform (FFT) and its inverse (IFFT) are very important algorithms in signal processing, software-defined radio, and the most promising modulation technique, Orthogonal Frequency Division Multiplexing (OFDM). From the standard structure of OFDM we can see that the IFFT/FFT modules play the vital role in any OFDM-based transceiver. When zero-valued inputs/outputs outnumber the nonzero inputs/outputs, the general IFFT/FFT algorithm for OFDM is no longer efficient in terms of execution time. It is possible to reduce the execution time by "pruning" the FFT. In this paper we have implemented a novel and efficient input zero traced radix FFT pruning algorithm (based on radix-2 DIF FFT, radix-4 DIF FFT, and radix-8 DIF FFT). An intuitive comparison of the computational complexity of the OFDM system has been made in terms of the complex calculations required, using different radix Fast Fourier Transform techniques with and without pruning. The transform techniques considered are radix-2 FFT, radix-4 FFT, radix-8 FFT, mixed-radix 4/2, mixed-radix 8/2 and split-radix 2/4. Through mathematical analysis, it is shown that with the reduced complexity offered by pruning, OFDM performance can be greatly improved in terms of the calculations needed.

Index terms- OFDM (Orthogonal frequency division multiplexing), Fast Fourier Transform (FFT), Pruning Techniques, MATLAB.

I. INTRODUCTION

Orthogonal Frequency Division Multiplexing (OFDM) is a modulation scheme that allows digital data to be efficiently and reliably transmitted over a radio channel, even in multipath environments [1]. In an OFDM system, Discrete Fourier Transforms (DFT)/Fast Fourier Transforms (FFT) are used instead of banks of modulators. The FFT is an efficient tool in the fields of signal processing and linear system analysis; the DFT was not widely utilized until the FFT was proposed. However, the inherent contradiction between the FFT's spectrum resolution and its computational time consumption limits its application. To match the order or requirement of a system, the common method is to extend the input data sequence x(n) by padding a number of zeros at the end of it, which is responsible for an increased computational time, yet calculation on undesired frequencies is unnecessary. OFDM-based cognitive radio [2] has the capability to nullify individual subcarriers to avoid interference with the licensed user, so there can be a large number of zero-valued inputs/outputs compared to nonzero terms. The conventional radix FFT algorithms are then no longer efficient in terms of complexity, execution time and hardware architecture. Several researchers have proposed different ways to make the FFT faster by "pruning" the conventional radix FFT algorithms. In this paper we propose an input zero traced radix DIF FFT pruning algorithm for different radix FFT algorithms, suitable for an OFDM-based transceiver. The computational complexity of implementing radix-2, radix-4, radix-8, mixed-radix and split-radix Fast Fourier Transforms with and without pruning has been calculated for an OFDM system and their performance compared. Results show that the input zero traced FFT pruning (IZTFFTP) versions of the radix algorithms are more efficient than the algorithms without pruning.

II. OFDM SYSTEM MODEL

OFDM is a kind of FDM (Frequency Division Multiplexing) technique in which we divide a data stream into a number of bit streams that are transmitted through sub-channels [3]. The characteristic of these sub-channels is that they are orthogonal to each other. Since the data transmitted through a sub-channel at a particular time are only a portion of the data transmitted through the whole channel, the bit rate in a sub-channel can be kept much lower. After splitting the data into N parallel data streams, each stream is mapped to a tone at a unique frequency and combined together using the Inverse


Fast Fourier Transform (IFFT) to yield the time-domain waveform to be transmitted [4]. After the IFFT is done, the time-domain signals are converted to serial data and a cyclic extension is added to the signal. Then the signal is transmitted. At the receiving side we perform the reverse process to get the original data from the received one [4,5]. In the case of a deep fade, several symbols in a single-carrier system are damaged seriously, but in parallel transmission each of the N symbols is only slightly affected. So even though the channel is frequency selective, each sub-channel is flat or only slightly frequency selective; this is why OFDM provides good protection against fading [6]. In an OFDM system there are N sub-channels. If N is high, it is very complex to design a system with N modulators and demodulators. Fortunately, the system can be implemented alternatively using the DFT/FFT to reduce this high complexity. A detailed system model for the OFDM system is shown in Figure 1 [5,6].
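The equivalence between a bank of N subcarrier modulators and a single IFFT, which motivates the DFT/FFT implementation described above, can be verified directly (the sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 8
X = rng.choice([-1.0, 1.0], size=N)    # one BPSK symbol per subcarrier

# Bank of N "modulators": subcarrier k is the tone exp(+j*2*pi*k*n/N).
n = np.arange(N)
bank = sum(X[k] * np.exp(2j * np.pi * k * n / N) for k in range(N)) / N

# A single IFFT produces the same time-domain waveform in one shot.
ifft_out = np.fft.ifft(X)
print(np.allclose(bank, ifft_out))     # True
```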

Figure 1: OFDM System Model

III. FOURIER TRANSFORM ALGORITHM

The computational complexity of the Discrete Fourier Transform (DFT) is so high that it causes long computation times and large power dissipation in implementation. Cooley and Tukey provided ways to reduce this computational complexity, and many fast DFT algorithms have since been developed; these fast DFT algorithms are named fast Fourier transform (FFT) algorithms. Decomposition plays an important role in the FFT algorithms. There are two decomposition types of the FFT algorithm: decimation-in-time (DIT) and decimation-in-frequency (DIF). There is no difference in computational complexity between these two types. The different radix DIF algorithms we use are described below.

A. Radix-2 DIF FFT Algorithm

Decomposing the output frequency sequence X[k] into the even-numbered points and the odd-numbered points is the key component of the radix-2 DIF FFT algorithm [6]. We can divide X[k] into X[2r] and X[2r+1], and then obtain the following equations:

X[2r] = Σ_{n=0}^{N−1} x[n] W_N^{2rn},   r = 0, 1, ..., N/2 − 1   (1)

X[2r+1] = Σ_{n=0}^{N−1} x[n] W_N^{(2r+1)n},   r = 0, 1, ..., N/2 − 1   (2)

Because the decompositions of Equation (1) and Equation (2) are the same, we only use Equation (1) to explain, as shown in Equation (3):

X[2r] = Σ_{n=0}^{N/2−1} x[n] W_N^{2rn} + Σ_{n=N/2}^{N−1} x[n] W_N^{2rn}   (3)

Finally, by the periodic property of the twiddle factors, we can get the even frequency samples as

X[2r] = Σ_{n=0}^{N/2−1} (x[n] + x[n + N/2]) W_{N/2}^{rn},   r = 0, 1, ..., N/2 − 1   (4)

Similarly, the odd frequency samples are

X[2r+1] = Σ_{n=0}^{N/2−1} (x[n] − x[n + N/2]) W_N^{n} W_{N/2}^{rn},   r = 0, 1, ..., N/2 − 1   (5)

From Equations (4) and (5), we can find the common components x[n] and x[n + N/2], so we can combine the two equations into one basic butterfly unit, shown in Figure 2. The solid line means that x[n] adds x[n + N/2], and the dotted line means that x[n] subtracts x[n + N/2].
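Equations (4) and (5) translate directly into a recursive radix-2 DIF FFT. The following is a sketch for checking the decomposition numerically, not an optimized implementation:

```python
import numpy as np

def fft_dif_radix2(x):
    """Recursive radix-2 DIF FFT following Equations (4) and (5):
    even outputs from x[n] + x[n+N/2], odd outputs from
    (x[n] - x[n+N/2]) * W_N^n. N must be a power of two."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    half = N // 2
    top, bottom = x[:half], x[half:]
    w = np.exp(-2j * np.pi * np.arange(half) / N)   # twiddle factors W_N^n
    even = fft_dif_radix2(top + bottom)             # Eq. (4): X[2r]
    odd = fft_dif_radix2((top - bottom) * w)        # Eq. (5): X[2r+1]
    out = np.empty(N, dtype=complex)
    out[0::2], out[1::2] = even, odd
    return out

x = np.random.default_rng(6).standard_normal(16)
print(np.allclose(fft_dif_radix2(x), np.fft.fft(x)))  # True
```

Each recursion level performs one stage of N/2 butterflies, which is the structure the pruning technique later exploits.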

Figure 2: The butterfly signal flow graph of radix-2 DIF FFT

We can use the same approach to further decompose the N-point DFT into even smaller DFT blocks. The radix-2 DIF FFT reduces the number of multiplications by about a factor of 2, showing the significance of the radix-2 algorithm for efficient computation. This algorithm can compute an N-point FFT in N/2 cycles.

B. Radix-4 DIF FFT

When the number of data points N is expressed as a power of 4 (N = 4^M), we can employ the radix-4 algorithm [9] instead of the radix-2 algorithm for more efficient computation; the FFT length is 4^M, where M is the number of stages. The radix-4 DIF fast Fourier transform expresses the DFT equation as four summations and then divides it into four equations, each of which computes every fourth output sample. The following equations illustrate radix-4 decimation in frequency:

X(k) = Σ_{n=0}^{N−1} x(n) W_N^{kn}   (6)

X(k) = Σ_{n=0}^{N/4−1} x(n) W_N^{kn} + Σ_{n=N/4}^{N/2−1} x(n) W_N^{kn} + Σ_{n=N/2}^{3N/4−1} x(n) W_N^{kn} + Σ_{n=3N/4}^{N−1} x(n) W_N^{kn}   (7)

Equation (7) can thus be expressed as

X(k) = Σ_{n=0}^{N/4−1} [x(n) + (−j)^k x(n + N/4) + (−1)^k x(n + N/2) + (j)^k x(n + 3N/4)] W_N^{kn}   (8)

So Equation (8) can then be expressed as four N/4-point DFTs. The simplified butterfly signal flow graph of radix-4 DIF FFT is shown in Figure 3.

Figure 3: The simplified butterfly signal flow graph of radix-4 DIF FFT

This algorithm results in (3/8)N log2 N complex multiplications and (3/2)N log2 N complex additions. So the number of multiplications is reduced by 25%, but the number of additions is increased by 50%.

C. Radix-8 DIF FFT

Compared with the conventional radix-2 and radix-4 FFT algorithms, the advantage of developing the radix-8 FFT algorithm is to further decrease the complexity, especially the number of complex multiplications in implementation. We can split Equation (6) and replace the index k with eight parts: 8r, 8r+1, 8r+2, 8r+3, 8r+4, 8r+5, 8r+6, and 8r+7. Hence we can rewrite Equation (6) and obtain Equation (9):

X(8r + m) = Σ_{n=0}^{N/8−1} [ Σ_{l=0}^{7} x(n + lN/8) W_8^{lm} ] W_N^{nm} W_{N/8}^{nr},   m = 0, 1, ..., 7   (9)

The butterfly graph can be simplified as shown in Figure 4.

Figure 4: The simplified butterfly signal flow graph of radix-8 DIF FFT
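The radix-4 decomposition of Equation (8) can be verified numerically against the direct DFT (N = 16 is an illustrative size):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 16
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Eq. (8): combine the four quarter-length segments with the
# factors (-j)^k, (-1)^k, (j)^k, then sum over n = 0..N/4-1.
n = np.arange(N // 4)
X = np.empty(N, dtype=complex)
for k in range(N):
    combined = (x[n]
                + (-1j) ** k * x[n + N // 4]
                + (-1.0) ** k * x[n + N // 2]
                + (1j) ** k * x[n + 3 * N // 4])
    X[k] = np.sum(combined * np.exp(-2j * np.pi * k * n / N))

print(np.allclose(X, np.fft.fft(x)))  # True
```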


D. Mixed radix DIF FFT There are two kinds of mixed-radix DIF FFT algorithms. The first kind refers to a situation arising naturally when a radix-q algorithm, where q = 2m > 2, is applied to an input series consisting of N = 2k × qs equally spaced points, where1 ≤ k < m. In this case, out of necessity, k steps of radix-2 algorithm are applied either at the beginning or at the end of the transform, while the rest of the transform is carried out by s steps of the radix-q algorithm.

ISBN: 378 - 26 - 138420 - 5

complexities. If we use radix-2 DIF FFT algorithm for the even frequency terms and the radix-22 DIF FFT algorithm for the odd parts, we can obtain the split-radix 2/4 algorithm [10,11] as shown in the equation in the Equation (10). [2 ] = ∑

[ ]+

+

(

)

(10)

= 0,1,2, . . ,

For example if N = 22m+1 = 2 × 4m, the mixedradix algorithm [7][8] combines one step of the radix-2 algorithm and m steps of the radix-4 algorithm. The second kind of mixed-radix algorithms in the literature refers to those specialized for a composite N = N0 × N1 × N2 ...× Nk. Different algorithms may be used depending on whether the factors satisfy certain restrictions. Only the 2 × 4m of the first kind of mixed-radix algorithm will be considered here. The mixed-radix 4/2 butterfly unit is shown in Figure5.

2

−1

(4 + 1) = ( ) + (− ) ( + ⁄4) +(−1) ( + ⁄2) + ( ) ( + 3 ⁄4)

(11) (4 + 3) = ( ) + ( ) ( + ⁄4) − ( + ⁄2) + (− ) ( + 3 ⁄4) (12)

Thus the N-point DFT is decomposed into one N/2 point DFT without additional twiddle factors and two N/4 -point DFTs with twiddle factors. The N-point DFT is obtained by successive use of these decompositions up to the last stage. Thus we obtain a DIF split-radix-2/4 algorithm. The signal flow graph of basic butterfly cell of split-radix-2/4 DIF FFT algorithm is shown in Figure 6

Figure 5: The butterfly signal flow graph of mixed-radix 4/2 DIF FFT

Using both the radix-2^2 and the radix-2 algorithms, it can perform fast FFT computations and can process FFT lengths that are not powers of four. The mixed-radix 4/2 unit calculates four butterfly outputs X(0)~X(3). The proposed butterfly unit has three complex multipliers and eight complex adders.

E. Split-Radix FFT Algorithms The split-radix FFT algorithm applies two or more parallel radix decompositions in every decomposition stage to fully exploit the advantages of the different fixed-radix FFT algorithms. As a result, a split-radix FFT algorithm generally has lower adder and multiplier counts than the fixed-radix FFT algorithms, while retaining applicability to all power-of-2 FFT lengths. The odd frequency terms have more computational complexity than the even frequency terms, so we can further decompose the odd terms to reduce the complexity.

Figure 6: The butterfly signal flow graph of split-radix 2/4 DIF FFT

X(0) = x(n) + x(n + N/2)

and we have

X(2) = x(n + N/4) + x(n + 3N/4)




X(1) = x(n) − j·x(n + N/4) − x(n + N/2) + j·x(n + 3N/4)


B. Input Zero Traced Radix-4 DIF FFT Pruning In radix-4, since four inputs are coupled to obtain four outputs, there are 16 combinations of those four inputs at the radix-4 butterfly. For radix-4 pruning there exist only five conditions, based upon the zeros at the input.

X(3) = x(n) + j·x(n + N/4) − x(n + N/2) − j·x(n + 3N/4)    (13)

 No zero at the input: no pruning takes place; butterfly calculations are the same as conventional radix-4.
 Any one input zero: the output is built only from the remaining three inputs; butterfly calculations are reduced compared to conventional radix-4.
 Any two inputs zero: the output is built only from the remaining two inputs; butterfly calculations are reduced compared to radix-4 pruning with one zero at the input.
 Any three inputs zero: the output is only the copied version of the remaining single input; butterfly calculations are reduced compared to radix-4 pruning with two zeros at the input.
 All inputs zero: the output is zero, and no butterfly calculations are required.

As a result, even and odd frequency samples of each basic processing block are not produced in the same stage of the complete signal flow graph. This property causes irregularity of the signal flow graph, which has an "L"-shaped topology.

IV PRUNING TECHNIQUES To increase the efficiency of the FFT, several pruning and other techniques have been proposed by many researchers. In this paper we implement a new pruning technique, IZTFFTP, obtained by simple modifications together with some mathematical shortcuts that reduce the total execution time.

C. Input Zero Traced Radix-8 DIF FFT Pruning In radix-8, since eight inputs are coupled to obtain eight outputs, there are 256 combinations of those eight inputs at the radix-8 butterfly. For radix-8 pruning there exist only seven conditions, based upon the zeros at the input. As in radix-4 pruning, the output is built from the non-zero inputs; more zeros at the input lead to fewer mathematical calculations compared to conventional radix-8.

Zero tracing: in a wideband communication system a large portion of the frequency channel may be unoccupied by the licensed user, so the number of zero-valued inputs is much greater than the number of non-zero-valued inputs in an FFT/IFFT operation at the transceiver. The algorithm then gives the best response in terms of reduced execution time, by reducing the number of complex computations required for twiddle-factor calculation. IZTFFTP has a strong searching condition: an array stores the input and output values after every iteration of the butterfly calculation. Whenever the input search finds a zero at any input, the corresponding calculation is simply omitted, using the conditions appropriate to the radix algorithm in use.

D. Input Zero Traced Mixed Radix DIF FFT Pruning If we consider mixed radix 4/2, it uses the combination of radix-2 pruning and radix-4 pruning. Similarly, mixed radix 8/2 uses the combination of radix-2 pruning and radix-8 pruning.

E. Input Zero Traced Split Radix DIF FFT Pruning If we consider split radix 2/4, it uses the combination of radix-2 pruning and radix-4 pruning.

A. Input Zero Traced Radix-2 DIF FFT Pruning In radix-2, since two inputs are coupled to obtain two outputs, there are 4 combinations of those two inputs at the radix-2 butterfly. There exist only three conditions, based upon the zeros at the input.

V RESULTS In order to compare the computational complexities of the different radix DIF FFT algorithms on OFDM, calculations based on the OFDM block sizes have been performed; they are given in Table 1, with the pruning comparison in Table 2. The speed improvement factors from without to with pruning for the different radix algorithms are given in Table 3.

 No zero at the input: no pruning happens; butterfly calculations are the same as conventional radix-2.
 Any one input zero: the output is only the copied version of the available input; butterfly calculations are reduced compared to conventional radix-2.
 All inputs zero: the output is zero, and no butterfly calculations are required.
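The three radix-2 conditions above can be sketched on a single DIT-style butterfly; this is our illustrative interpretation of the zero-tracing rules, not code from the paper:

```python
def radix2_butterfly_pruned(a, b, w):
    """Radix-2 (DIT-style) butterfly with input-zero tracing.
    The full butterfly computes (a + w*b, a - w*b); the pruned
    version skips arithmetic according to which inputs are zero."""
    if a == 0 and b == 0:
        return 0, 0               # all-zero input: output is zero, no work
    if b == 0:
        return a, a               # copy the available input; no multiply, no adds
    if a == 0:
        t = w * b
        return t, -t              # one multiply, additions skipped
    t = w * b
    return a + t, a - t           # no zeros: conventional radix-2 butterfly
```

All three pruned cases produce exactly the same outputs as the unpruned butterfly, only with less arithmetic.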


OFDM        Radix-2      Radix-4      Radix-8      Mixed Radix-4/2   Mixed Radix-8/2   Split Radix-2/4
Block Size  cm    cadd   cm    cadd   cm    cadd   cm    cadd        cm    cadd        cm    cadd
2           1     2      -     -      -     -      -     -           -     -           -     -
4           4     8      3     8      -     -      -     -           -     -           0     8
8           12    24     -     -      7     24     10    24          -     -           4     24
16          32    64     24    64     -     -      28    64          22    64          12    64
32          80    160    -     -      -     -      64    160         60    160         36    160
64          192   384    144   384    112   384    160   384         152   384         92    384

Table 1: Comparison of complex additions (cadd) and complex multiplications (cm) of different radix algorithms without pruning
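The radix-2 and radix-4 columns of the table follow from the standard operation-count formulas, counting every butterfly twiddle multiplication as the table does; a quick sketch to reproduce them (helper names are ours):

```python
from math import log2

def radix2_counts(N):
    """(complex multiplications, complex additions) of a radix-2 DIF FFT,
    counting all N/2 twiddle multiplications in each of the log2(N) stages."""
    stages = int(log2(N))
    return (N // 2) * stages, N * stages

def radix4_counts(N):
    """(cm, cadd) of a radix-4 DIF FFT: 3N/4 twiddle multiplications in
    each of the log4(N) stages, and N*log2(N) complex additions."""
    stages = int(log2(N)) // 2            # log4(N)
    return (3 * N // 4) * stages, N * int(log2(N))
```

Evaluating these for N = 2 ... 64 reproduces the Radix-2 and Radix-4 columns of Table 1.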

OFDM        Radix-2      Radix-4      Radix-8      Mixed Radix-4/2   Mixed Radix-8/2   Split Radix-2/4
Block Size  cm    cadd   cm    cadd   cm    cadd   cm    cadd        cm    cadd        cm    cadd
2           0     2      -     -      -     -      -     -           -     -           -     -
4           0     8      3     8      -     -      -     -           -     -           0     8
8           12    24     -     -      7     24     8     24          -     -           4     24
16          31    64     24    64     -     -      26    64          22    64          12    64
32          76    160    -     -      -     -      64    160         60    160         36    160
64          179   384    141   384    112   384    157   384         152   384         90    384

Table 2: Comparison of complex additions (cadd) and complex multiplications (cm) of different radix algorithms with pruning

FFT Size   Radix-2   Radix-4   Radix-8   Mixed Radix-4/2   Mixed Radix-8/2   Split Radix-2/4
8          1         -         1         1.25              -                 1
16         1.03      1         -         1.07              1                 1
32         1.05      -         -         1                 1                 1
64         1.07      1.02      1         1.01              1                 1

Table 3: Speed improvement factor from without to with pruning in terms of multiplications

The output shows a significant reduction of computational complexity: the total number of complex operations, both multiplications and additions, is reduced compared to the ordinary radix FFT operations. The complex multiplications and additions are compared for the different radix and pruned algorithms. The comparison of complex multiplications for the different radix DIF FFT algorithms is shown in Figure 7, and for the different input zero traced radix DIF FFT pruned algorithms in Figure 8.

Figure 7: Comparison of complex multiplications for different radix DIF FFT

Figure 8: Comparison of complex multiplications for different Radix DIF FFT pruned algorithms


VI CONCLUSION The computational performance of an OFDM system depends on the FFT, since the FFT works as the modulator in an OFDM system. If the complexity decreases, the speed of the OFDM system increases. The results show that the input zero traced radix DIF FFT pruned algorithms are much more efficient than the plain radix DIF FFT algorithms, as they take much less time to compute when the number of zero-valued inputs/outputs is greater than the number of non-zero terms, while maintaining a good trade-off between time and space complexity; the method is also independent of the input data set.

VII REFERENCES
[1] B. E. E. P. Lawrey, "Adaptive Techniques for Multi-User OFDM," Ph.D. Thesis, James Cook University, Townsville, 2001, pp. 33-34.
[2] J. Mitola, III, "Cognitive Radio: An Integrated Agent Architecture for Software Defined Radio," Ph.D. Thesis, Dept. of Teleinformatics, Royal Institute of Technology (KTH), Stockholm, Sweden, May 2000.
[3] S. Chen, "Fast Fourier Transform," Lecture Note, Radio Communications Networks and Systems, 2005.
[4] "OFDM for Mobile Data Communications," The International Engineering Consortium WEB ProForum Tutorial, 2006. http://www.iec.org.
[5] Andrea Goldsmith, Wireless Communications, Cambridge University Press, 2005, ISBN: 978052170416.
[6] J. G. Proakis and D. G. Manolakis, Digital Signal Processing: Principles, Algorithms and Applications, 2002, pp. 448-475.
[7] E. Chu and A. George, Inside the FFT Black Box: Serial & Parallel Fast Fourier Transform Algorithms. CRC Press LLC, 2000.
[8] B. G. Jo and M. H. Sunwoo, "New Continuous-Flow Mixed-Radix (CFMR) FFT Processor Using Novel In-Place Strategy," Electron. Letters, vol. 52, no. 5, May 2005.
[9] Charles Wu, "Implementing the Radix-4 Decimation in Frequency (DIF) Fast Fourier Transform (FFT) Algorithm Using a TMS320C80 DSP," Digital Signal Processing Solutions, January 1998.
[10] P. Duhamel and H. Hollmann, "Split-radix FFT Algorithm," Electron. Letters, vol. 20, pp. 14-16, Jan. 1984.
[11] H. V. Sorensen, M. T. Heideman and C. S. Burrus, "On Computing the Split-radix FFT," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 152-156, Feb. 1986.


Improvement of Dynamic and Steady State Responses in Combined Model of LFC and AVR Loops of One-Area Power System using PSS

1Anil Kumar Sappa, 2Prof. Shyam Mohan S. Palli
1,2 Dept. of EEE, Sir C R Reddy College of Engg., Andhra Pradesh, India

Abstract—This paper describes the improvement in stability obtained by reducing the damping oscillations in a one-area power system using a power system stabilizer. A power system model which includes both LFC and AVR loops is considered. Low-frequency oscillation studies are made, and a PSS is designed to improve the stability of the system. The simulation results obtained indicate that adding a PSS to a combined model of LFC and AVR improves the dynamic stability by reducing the low-frequency oscillations.

LFC for many years [7]-[8], but this research has given little attention to AVR effects on the results. In fact, in the LFC power system control literature there is a lack of stability analysis of AVR effects or of the mutual effects between these loops. Usually these studies are based on the assumption that there is no interaction between the power/frequency and the reactive-power/voltage control loops. But in practical systems some interactions between these control channels do exist during dynamic perturbations [9]. Also, by neglecting the effect of voltage deviation on load demand, an important interaction in LFC systems is ignored. A combined model with LFC and AVR loops and their mutual effects is considered here. In this paper the power system is designed by adding a PSS to a combined model of LFC and AVR loops in order to improve the dynamic stability. The interaction and coupling effects between the LFC and AVR loops are shown [3], and the performance of the proposed model is demonstrated with simulations. The results of the proposed method with PSS are compared with a combined model without PSS, and also with separate models of the LFC and AVR loops without any interaction between those loops. It is observed that the proposed model can improve the dynamic stability of a complete power system by damping the low-frequency oscillations.

Index Terms—Automatic voltage regulator, power system, load frequency control, power system stabilizer, voltage deviations, stability.

I. INTRODUCTION A change in the operating conditions of a power system leads to low-frequency oscillations of small magnitude that may persist for long periods of time. In some cases these oscillations limit the amount of power transmitted through interconnecting lines. A power system stabilizer is therefore designed to provide an additional input signal to the excitation system in order to damp these power system oscillations [1]. The interconnected power system model for low-frequency oscillation studies should be composed of mechanical and electrical loops. These oscillations can be damped by varying the exciter and speed-governor control parameters [2]. Furthermore, it has been shown that the load-voltage characteristic of the power system has a significant effect on its dynamic responses, and suggestions have been made for the proper representation of these characteristics in simulation studies [3]-[5]. For economic and reliable operation of a power system, two main control loops are required: the Load Frequency Control (LFC) loop and the Automatic Voltage Regulator (AVR) loop, as shown in Figure 1. The turbine is fed through the speed governor, whose steam rate can be controlled by varying its internal parameters. The Automatic Generation Control method deals with frequency through the LFC loop and with voltage through the AVR loop. The main purpose of these two loops is to maintain frequency and voltage at permissible levels [6]. A lot of studies have been made on

Fig 1. Automatic generation control with LFC and AVR loops.


Fig 2. Block diagram model of Load Frequency Control (LFC).

comprises the excitation control mechanism, and its main aim is to control the field current of the synchronous machine. The field current is controlled in order to regulate the voltage generated by the machine. The maximum permissible limit for the voltage deviation is about 5-6%.

II. LOAD FREQUENCY CONTROL The Load Frequency Control (LFC) loop is the basic control mechanism in the operation of a power system. As the active power demand changes continuously with load, the steam input to turbo-generators (water input to hydro-generators) must be continuously regulated to match the active power demand; failing this, the machine speed changes with a consequent change in frequency, which may be highly undesirable. The maximum permissible change in power frequency is about ± 0.5 Hz. Hence continuous regulation of the generator turbine must be provided by the LFC system. At the time of a load change, the deviation from the nominal frequency is referred to as the frequency error (∆f); it indicates that there is a mismatch, and it can be used to send the appropriate command to change the generation through the LFC system [6]. From Figure 2 it is seen that there are two control loops: a primary control loop and a secondary control loop. With the primary control loop alone, any change in system load results in a steady-state frequency deviation, which depends on the governor speed regulation. To nullify the frequency deviation, a reset action must be provided. The reset action is accomplished by the secondary control loop, which introduces an integral controller acting on the load reference setting to move the speed to the desired operating point. The integral controller increases the system type by 1, forcing the final frequency deviation to zero. Thus, by using integral control action in the LFC system, a zero steady-state frequency error and a fast dynamic response are achieved. The gain of the integral controller should be adjusted such that the transient response is satisfactory. The negative sign on the integral controller gain shown in Figure 2 provides a negative (reduced) command signal for a positive frequency error.
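The effect of the secondary (reset) loop can be sketched with a simple Euler simulation of the linearized one-area LFC model, using first-order governor, turbine and power-system blocks with the parameter values of Table I; the integral gain Ki below is our illustrative choice, not a value from the paper:

```python
def simulate_lfc(Ki, dPL=0.1, dt=0.005, T=80.0,
                 Kp=102.0, Tp=20.0, Tt=0.32, Tg=0.06, R=1.7):
    """Linearized one-area LFC: return the final frequency deviation
    df (pu) after a step load change dPL.  Ki = 0 gives primary
    (droop) control only; Ki > 0 adds the secondary integral loop."""
    df = dPm = dPg = integral = 0.0
    for _ in range(int(T / dt)):
        dPref = -Ki * integral                 # secondary reset action
        ddf  = (Kp * (dPm - dPL) - df) / Tp    # power-system block Kp/(1+sTp)
        ddPm = (dPg - dPm) / Tt                # turbine block 1/(1+sTt)
        ddPg = (dPref - df / R - dPg) / Tg     # governor with droop 1/R
        df  += dt * ddf
        dPm += dt * ddPm
        dPg += dt * ddPg
        integral += dt * df
    return df
```

With Ki = 0 the steady-state deviation is −Kp·∆PL/(1 + Kp/R) ≈ −0.167 pu for a 10% load change; with the integral action it is driven to zero, as the text describes.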

Fig 3. Block diagram model of AVR system.

The amplifier and exciter block shown in Figure 3 regulates and amplifies the input control signals to a level appropriate for providing DC power to the generator field winding. This block can be expanded if the excitation system has a rotating exciter and voltage regulator [2]. Depending upon how the DC supply is given to the generator field winding, excitation systems are classified as DC, AC, and static excitation systems [1]. IV. MATHEMATICAL MODELLING OF POWER SYSTEM In order to improve the dynamic stability of the overall system, modelling of the major components of the power system is required. The study of low-frequency oscillations is based on a single machine connected to an infinite bus [2]. A single machine connected to an infinite bus through a transmission line with a local load is shown in Figure 4. Here Z is the series impedance of the transmission line and Y is the shunt admittance representing the local load.

III. AUTOMATIC VOLTAGE REGULATOR The basic function of the Automatic Voltage Regulator (AVR) is to regulate the system voltage magnitude. The AVR


Iq = Vd / Xq    (5)

Eq′ = Vq + Xd′ Id    (6)

The values of V0 and δ0 are determined as follows. From the definition of the torque angle (δ0), which is the angle between the infinite-bus voltage (V0) and the internal voltage (Eq′) [2], we have

Fig 4. Single machine connected to an infinite bus model of a power system.

A. Combined model of LFC and AVR of one-area power system A combined model of LFC and AVR of a one-area power system with a PI controller is shown in Figure 5. The PI controller produces an output signal combining proportional and integral action; its transfer function consists of a proportional gain (KP) acting on the error signal and a term with integral time constant (Ti) acting on the integral of the error signal [10]:

Transfer function of PI controller = Kp (1 + 1/(Ti s))    (1)

The block diagram shown in Figure 5 shows the coupling effects between the LFC and AVR loops [9]. Here the gain constants K1, K2, K3, K4, K5 and K6 are calculated using equations (2)-(27) [2], where K1 is the change in electrical power for a change in rotor angle with constant flux linkages in the d-axis, K2 is the change in electrical power for a change in the direct-axis flux linkages with constant rotor angle, K3 is an impedance factor, K4 is the demagnetizing effect of a change in rotor angle, K5 is the change in terminal voltage with change in rotor angle for constant Eq′, and K6 is the change in terminal voltage with change in Eq′ for constant rotor angle.

Vd = Xq Pe Vt / [ (Pe Xq)² + (Vt² + Qe Xq)² ]^(1/2)    (2)

V0d = C1 Vd − C2 Vq − Rx Id + X Iq    (7)

V0q = C2 Vd + C1 Vq − X Id − Rx Iq    (8)

δ0 = tan⁻¹ (V0d / V0q)    (9)

V0 = (V0d² + V0q²)^(1/2)    (10)

R1 = Rx − C2 Xd′    (11)

R2 = Rx − C2 Xq    (12)

X1 = X + C1 Xq    (13)

X2 = X + C1 Xd′    (14)

Ze² = R1 R2 + X1 X2    (15)

C1 = 1 + Rx G − X B    (16)

C2 = X G + Rx B    (17)

Yd = (C1 X1 − C2 R2) / Ze²    (18)

Yq = (C1 R1 + C2 X2) / Ze²    (19)

Fd = V0 (X1 sin δ0 − R2 cos δ0) / Ze²    (20)

Fq = V0 (X2 cos δ0 + R1 sin δ0) / Ze²    (21)

Vq = (Vt² − Vd²)^(1/2)    (3)

Id = (Pe − Iq Vq) / Vd    (4)
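Equations (2)-(6) can be checked numerically at the operating point quoted in Section V (Pe = 0.9, Qe = 0.61, Vt = 0.91 pu) with the reactances of Table II; this sketch uses our own helper names:

```python
from math import sqrt

def operating_point(Pe, Qe, Vt, Xq, Xd_p):
    """Initial d-q quantities of the single machine from
    equations (2)-(6) (Xd_p is the transient reactance Xd')."""
    Vd = Xq * Pe * Vt / sqrt((Pe * Xq)**2 + (Vt**2 + Qe * Xq)**2)  # (2)
    Vq = sqrt(Vt**2 - Vd**2)                                       # (3)
    Iq = Vd / Xq                                                   # (5)
    Id = (Pe - Iq * Vq) / Vd                                       # (4)
    Eq_p = Vq + Xd_p * Id                                          # (6)
    return Vd, Vq, Id, Iq, Eq_p
```

A consistency check is that the computed components must reproduce the assumed powers, Pe = Vd·Id + Vq·Iq and Qe = Vq·Id − Vd·Iq, and the terminal voltage magnitude Vt.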


Fig 5. PSS with combined LFC and AVR.

Finally, the constants K1 to K6 can be represented as

K1 = Fd (Xq − Xd′) Iq + Fq [Eq′ + (Xq − Xd′) Id]    (22)

K2 = Iq + Yd (Xq − Xd′) Iq + Yq [Eq′ + (Xq − Xd′) Id]    (23)

K3 = 1 / [1 + (Xd − Xd′) Yd]    (24)

K4 = (Xd − Xd′) Fd    (25)

K5 = −Fd (Xd′ Vq / Vt) + Fq (Xq Vd / Vt)    (26)

K6 = Vq / Vt − (Xd′ Vq / Vt) Yd + (Xq Vd / Vt) Yq    (27)

B. Combined LFC-AVR model with PSS

The proposed model of combined LFC and AVR with PSS is shown in Figure 5 for dynamic improvement of the overall system response. The basic function of a power system stabilizer (PSS) is to provide an additional input signal to the regulator to damp the power system oscillations. This can be achieved by modulating the generator excitation so as to produce a component of electrical torque in phase with the rotor speed deviations. Some commonly used input signals are the rotor speed deviation, accelerating power, and frequency deviation. A block diagram model of the PSS is shown in Figure 6.

Fig 6. Block diagram model of PSS.


The frequency deviation (∆f) is used as the input signal for the PSS. Ideally the PSS would generate a pure damping torque at all frequencies, i.e. the phase characteristic of the PSS would balance the phase characteristic of GEP(s) at all frequencies. As this is not practicable, the time constants of the PSS are adjusted to produce the phase compensation characteristic that gives the best performance [11]. The PSS model shown in Figure 6 consists of three blocks: a phase compensation block, a washout filter block and a gain block. The phase compensation block provides a suitable phase-lead characteristic to compensate the phase lag between the exciter input and the machine electric torque. The transfer function GEP(s) of the block shown with dashed lines in Figure 5, which represents the characteristics of the generator excitation and power system, is given by [11]

GEP(s) = Ka K3 K2 / [ (1 + sTa)(1 + sK3 Td0′) + Ka K3 K6 ]    (28)


From the simulation results shown in Figures 7-10 it can be observed that using the PSS reduces the settling time and gives better dynamic stability. Hence the desired damping of the low-frequency oscillations in the overall single-area power system is achieved.


The washout block acts as a high-pass filter. For the local mode of oscillations, the washout time constant should be between 1 and 2 s for the desired operating point [12]. The PSS gain is chosen as a fraction of the gain corresponding to instability [11]. The complete PSS transfer function is given by

PSS(s) = KS (sTW / (1 + sTW)) ((1 + sT1) / (1 + sT2))    (29)

Fig 7. Frequency deviations in a single-area power system in pu.


where GEP(s) is the plant transfer function through which the stabilizer must provide compensation, TW is the washout time constant, KS is the stabilizer gain, and T1 and T2 are the time constants of the phase compensator. An optimal stabilizer is obtained by proper selection of the time constants and gain of the PSS. V. SIMULATION RESULTS In this paper a power system stabilizer is designed to show the improvement in dynamic response of the combined model of LFC and AVR. The performance of the proposed PSS with combined LFC and AVR is compared with the combined model of LFC and AVR alone, and also with the classic load frequency control model with the AVR loop (i.e. the excitation system) separated. The simulations are carried out on the MATLAB platform, assuming a real power of 0.9 pu, a reactive power of 0.61 pu and a machine terminal voltage of 0.91 pu.
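The roles of the washout and lead-lag blocks in equation (29) can be checked numerically; the parameter values below are illustrative choices of ours, not the paper's tuned values:

```python
import cmath

def pss_freq_response(w, Ks=10.0, Tw=1.5, T1=0.5, T2=0.05):
    """Evaluate PSS(jw) = Ks * (s*Tw/(1+s*Tw)) * ((1+s*T1)/(1+s*T2))
    from equation (29) at angular frequency w (rad/s)."""
    s = 1j * w
    return Ks * (s * Tw / (1 + s * Tw)) * ((1 + s * T1) / (1 + s * T2))
```

The washout blocks DC (|PSS(j0)| = 0), so the stabilizer does not offset the steady-state voltage, while choosing T1 > T2 gives the positive phase lead needed around the local-mode frequencies.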

Fig 8. Turbine output power deviations in a single-area power system in pu.

Turbine and governor system parameters are given in Table I, one machine-infinite bus system parameters in Table II, and AVR and local load parameters in Table III. The gain constants calculated for the AVR system, for a 10% load change in real power for LFC, are given in Table IV.


TABLE III. AVR AND LOCAL LOAD PARAMETERS IN PER UNIT VALUE.

Ka = 20, Ta = 0.05, Kr = 1, Tr = 0.05, G = 0.89, B = 0.862

TABLE IV. CALCULATED GAIN PARAMETER CONSTANTS IN AVR SYSTEM IN PER UNIT VALUE.

K1 = 0.40, K2 = 1.85, K3 = 0.38, K4 = 0.22, K5 = 0.07, K6 = 1.02

VI. CONCLUSION In this paper the Load Frequency Control loop and the Automatic Voltage Regulator loop are combined to show the mutual effects between these two loops in a one-area power system. A power system stabilizer is designed to improve the dynamic and steady-state responses of the one-area power system. It is observed that better dynamic stability is achieved by adding a PSS to the combined LFC-AVR model.

Fig 9. Internal electrical power deviations in a single-area power system in pu.

NOMENCLATURE

Fig 10. Terminal voltage deviations in a single-area power system in pu.

TABLE I. TURBINE AND GOVERNOR SYSTEM PARAMETERS FOR LFC IN PER UNIT VALUE.

KP = 102, TP = 20, TT = 0.32, TG = 0.06, R = 1.7

TABLE II. ONE MACHINE-INFINITE BUS SYSTEM PARAMETERS IN PER UNIT VALUE.

Xd = 1.973, Xq = 0.82, Xd′ = 0.1, Td0′ = 7.76, Rx = 0.004, X = 0.74

Ka — overall gain of the excitation system
Ta — overall time constant of the excitation system
∆Pm — turbine output power deviation
∆PL — load disturbance
KP — equivalent overall gain of the power system
R — speed regulation due to governor action
TP — equivalent time constant of the power system
Kr — equivalent gain of the sensor
Tr — equivalent time constant of the sensor
TT — equivalent time constant of the turbine
TG — equivalent time constant of the governor
∆Eq′ — deviation of internal voltage
∆δ — deviation of torque angle
Td0′ — transient time constant of the generator field
Xd — synchronous reactance of d-axis
Xd′ — transient reactance of d-axis
Vref — reference input voltage
V0 — infinite bus voltage
Vt — terminal voltage
VS — stabilizer output


REFERENCES
[1] P. Kundur, Power System Stability and Control, McGraw-Hill Inc., 1994, pp. 766-770.
[2] Yau-Nan Yu, Electric Power System Dynamics, London: Academic Press, 1983, pp. 66-72.
[3] E. Rakhshani, K. Rouzehi, S. Sadeh, "A New Combined Model for Simulation of Mutual Effects between LFC and AVR Loops," Proceedings of the Asia-Pacific Power and Energy Engineering Conference, Wuhan, China, 2009.
[4] S. C. Tripathy, N. D. Rao, and L. Roy, "Optimization of exciter and speed governor control parameters in stabilizing intersystem oscillations with voltage dependent load characteristics," Electric Power and Energy Systems, vol. 3, pp. 127-133, July 1981.
[5] K. Yamashita and H. Miyagi, "Multivariable self-tuning regulator for load frequency control system with interaction of voltage on load demand," IEE Proceedings-D, vol. 138, no. 2, March 1991.
[6] D. P. Kothari, I. J. Nagrath, Modern Power System Analysis, Third Edition, pp. 290-300.
[7] E. Rakhshani and J. Sadeh, "A Reduced-Order control with prescribed degree of stability for Two-Area LFC System in a deregulated environment," Proceedings of the 2009 IEEE PES Power Systems Conference and Exhibition (PSCE09).
[8] J. Sadeh and E. Rakhshani, "Multi-area load frequency control in a deregulated power system using optimal output feedback method," International Conference on the European Electricity Market, pp. 1-6, May 2008.
[9] Hadi Saadat, Power System Analysis, Tata McGraw-Hill Edition, 2002, p. 528.
[10] K. Ogata, Modern Control Systems, 5th edition, Prentice Hall Publications, 2002, pp. 669-674.
[11] E. V. Larsen and D. A. Swann, "Applying power system stabilizers, parts I, II and III," IEEE Trans. on Power Apparatus and Systems, vol. PAS-100, June 1981, pp. 3017-3046.
[12] P. Kundur, M. Klein, G. J. Rogers and M. S. Zywno, "Application of power system stabilizers for enhancement of overall system stability," IEEE Trans. on Power Systems, vol. 4, no. 2, May 1989, pp. 614-626.

Authors Biography –

Anil Kumar Sappa received his B.Tech degree in Electrical and Electronics Engineering from JNTUK, Andhra Pradesh in 2011. He is currently pursuing the M.E degree in Power Systems and Automation at Sir C R Reddy College of Engineering, affiliated to A.U. Visakhapatnam, Andhra Pradesh. His areas of interest include power systems.

Prof. Shyam Mohan S. Palli received his B.Tech degree in Electrical and Electronics Engineering from JNTU Kakinada in 1978, and his M.Tech degree in Power Systems from the same institute in 1980. He joined the teaching profession in 1981 and has 33 years of teaching experience as Lecturer, Assistant Professor and Professor. Presently he is working as Professor and HOD, EEE, of Sir C.R. Reddy College of Engineering, Andhra Pradesh. He is the author of Circuits and Networks and Electrical Machines, published by TMG, New Delhi. He has published many papers in refereed journals and conferences. His areas of interest include machines and power systems.


Enhanced Cryptography Algorithm for Providing Data Security Dr. K. Kiran Kumar, K. Kusuma Kavya, K. R. L. Sindhura, Department of Information Technology, Bapatla Engineering College, Bapatla. Abstract: Information security is the process of protecting information. Due to the enormous development of internet technology, the security of data has now become a very important challenge in data communication networks. One cannot send confidential data in raw form from one machine to another, as any hacker can intercept the confidential message. In this paper we have developed a new cryptography algorithm based on the block cipher concept. In this algorithm we use logical operations like XOR and shift operations to increase security.

must consider many factors such as security, time and space complexity.

A Simplified Model of Conventional Encryption

This paper is further divided into four sections. In Section 2 we present a detailed description of information security using cryptography and various algorithms. In Section 3 we present our proposed algorithm. In Section 4 we explain the proposed algorithm with an example of 128 bits (16 characters), and Section 5 concludes.

Keywords: Information security, encryption, decryption, cryptography.

1. Introduction
The main feature of the encryption/decryption program implementation is the generation of the encryption key. Nowadays, cryptography has many commercial applications. The main purpose of cryptography is not only to provide confidentiality, but also to provide solutions for other problems like:
 data integrity,
 authentication,
 non-repudiation.
Cryptography is the method that allows information to be sent in a secure form, in such a way that only the intended receiver is able to retrieve the information. However, it is very difficult to pick out one specific algorithm, because we already know that they

2. Information security using cryptography
Here a previously developed technique, "A new Symmetric key Cryptography Algorithm using extended MSA method: DJSA symmetric key algorithm", is discussed. This is a symmetric key method in which a random key generator produces the initial key, and the key is used for encrypting the given source file. A substitution method is used: 4 characters are taken from the input file, the corresponding characters are searched in the random key matrix file, and after obtaining the

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

76

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

encrypted characters, the encrypted data is stored in another file. To decrypt any file, one has to know the exact key, and finding the random matrix theoretically is intractable. The authors applied the method to all common file types — executable files, Microsoft Word files, Excel files, Access databases, FoxPro files, text files, image files, PDF files, video files, audio files and Oracle databases — and found that in all cases it gave a 100% correct result when encrypting and decrypting a file. Another newly developed technique, "Effect of Security Increment to Symmetric Data Encryption through AES Methodology," is also discussed. It describes a symmetric cipher algorithm that is much like Rijndael; the difference is that the Rijndael algorithm starts with a 128-bit block size and then increases the block size by appending columns [10], whereas this algorithm starts with 200 bits.

ISBN: 378 - 26 - 138420 - 5

Encryption Approach Used
We use a symmetric encryption approach. Symmetric encryption is divided into two types: 1. block cipher symmetric cryptography, and 2. stream cipher symmetric cryptography. Here we choose the block cipher type because of its efficiency and security. In the proposed technique there is a common key between sender and receiver, known as the private key. The private key concept is the symmetric key concept, in which plain text is converted into encrypted text, known as cipher text, and is decrypted back into plain text with the same private key. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain private information.

Proposed Key Generation Steps

3. Proposed Algorithm
1. Create a private key of 16 characters. The size may vary from 16 to 64 characters.
2. Each character may be any of the 256 characters with ASCII codes 0 to 255.
3. The key is 16 × 8 bits, i.e. 128 bits in length.
4. Divide the 16 bytes into 4 blocks: KB1, KB2, KB3 and KB4.
5. Apply the XOR operation between KB1 and KB3. The result is stored in a new block KB13.
6. Apply the XOR operation between KB2 and KB13. The result is stored in a new block KB213.
7. Apply the XOR operation between KB213 and KB4. The result is stored in a new block KB4213.
(KB: KEY BLOCK)
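The key-schedule steps 4–7 above can be sketched in Python. This is an illustrative sketch only, not the authors' implementation; the helper name `derive_key_blocks` and the treatment of the key as raw ASCII bytes are assumptions:

```python
def derive_key_blocks(key: bytes):
    """Derive the chained key blocks of steps 4-7: split the 16-byte
    private key into KB1..KB4, then XOR them together pairwise
    (a sketch, not the paper's code)."""
    assert len(key) == 16, "the example key size is 16 bytes (128 bits)"
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    # Step 4: divide the 16 bytes into 4 blocks of 4 bytes each.
    kb1, kb2, kb3, kb4 = (key[i:i + 4] for i in range(0, 16, 4))
    kb13 = xor(kb1, kb3)      # step 5
    kb213 = xor(kb2, kb13)    # step 6
    kb4213 = xor(kb213, kb4)  # step 7: the working key block
    return kb1, kb2, kb3, kb4, kb4213
```

For a 16-character key such as `BAPATLAENGINEERS` this yields the four raw blocks plus the derived block KB4213 that the encryption steps use.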

In this section we present a new block-based symmetric cryptography algorithm that uses a random number to generate the initial key; this key is then used to encrypt the given source file with the proposed encryption algorithm. The technique uses a block-based substitution method and allows a message to be encrypted multiple times. The proposed key blocks contain all possible words comprising characters whose ASCII codes range from 0 to 255, in random order. The pattern of the key blocks depends on the text key entered by the user. To decrypt any file, one has to know exactly what the key blocks are; finding the random blocks theoretically requires about 2^256 trial runs, which is intractable.

Steps for the proposed Algorithm
1. Initially, select a plain text block of 16 bytes.


2. Apply the XOR operation between the key (KB4213) and the plain text block. The result is stored in CB1.
3. Apply a right circular shift of 3 bits. The result is stored in CB2.
4. Apply the XOR operation between CB2 and KB2. The result is stored in a new block CB3.
5. Apply the XOR operation between CB3 and KB4. The result is stored in CB4.
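The encryption steps above, and the inverse decryption, can be sketched as a pair of Python helpers. This is an illustrative sketch under the assumption that each operation works on equal-length byte blocks and that the circular shift rotates the whole block as one bit string; helper names such as `rotr` are not from the paper:

```python
def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def rotr(block: bytes, n: int) -> bytes:
    """Right circular shift of the whole block by n bits (step 3)."""
    bits = len(block) * 8
    v = int.from_bytes(block, "big")
    v = ((v >> n) | (v << (bits - n))) & ((1 << bits) - 1)
    return v.to_bytes(len(block), "big")

def rotl(block: bytes, n: int) -> bytes:
    """Left circular shift: the inverse of rotr."""
    return rotr(block, len(block) * 8 - n)

def encrypt_block(p: bytes, kb2: bytes, kb4: bytes, kb4213: bytes) -> bytes:
    cb1 = xor(kb4213, p)   # step 2: key XOR plain text block
    cb2 = rotr(cb1, 3)     # step 3: right circular shift by 3 bits
    cb3 = xor(cb2, kb2)    # step 4
    return xor(cb3, kb4)   # step 5: final cipher block CB4

def decrypt_block(cb4: bytes, kb2: bytes, kb4: bytes, kb4213: bytes) -> bytes:
    """Undo the encryption steps in reverse order."""
    cb3 = xor(cb4, kb4)
    cb2 = xor(cb3, kb2)
    cb1 = rotl(cb2, 3)
    return xor(kb4213, cb1)
```

Because every step is an XOR or a rotation, decryption is simply the same operations applied in reverse order.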

(CB: CIPHER BLOCK)

4. Implementation Example of the Proposed Algorithm

Key Generation:

Step 1:
1. Take the 16-character plain text.
2. Plain text: BAPATLAENGINEERS
3. Divide the key into 4 blocks:
BAPA → KB1, TLAE → KB2, NGIN → KB3, EERS → KB4
(In this example each character is written as the binary-coded-decimal form of its decimal ASCII code; e.g. B = 66 → 01100110.)

Step 2: Apply the EX-OR operation between KB1 and KB3.
BAPA: 66 65 80 65, NGIN: 78 71 73 78
KB1:  01100110 01100101 10000000 01100101
KB3:  01111000 01110001 01110011 01111000
KB13: 00011110 00010100 11110011 00011101

Step 3: Apply the EX-OR operation between KB2 and KB13.
KB2:   10000100 01110110 01100101 01101001
KB13:  00011110 00010100 11110011 00011101
KB213: 10011010 01100010 10010110 01110100

Step 4: Apply the EX-OR operation between KB4 and KB213.
KB213:  10011010 01100010 10010110 01110100
KB4:    01101001 01101001 10000010 10000011
KB4213: 11110011 00001011 00010100 11110111

The obtained key is: 11110011 00001011 00010100 11110111

Encryption Method:

Step 1: Apply the EX-OR operation between KB4213 and KB1 (plain text block P1):
KB4213: 11110011 00001011 00010100 11110111
KB1:    01100110 01100101 10000000 01100101
CB1:    10010101 01101110 10010100 10010010

Step 2: Apply a right circular shift of 3 bits to CB1. The result is stored in CB2.
CB1: 10010101 01101110 10010100 10010010
CB2: 01010010 10101101 11010010 10010010

Step 3: Apply the EX-OR operation between CB2 and KB2:
CB2: 01010010 10101101 11010010 10010010
KB2: 10000100 01110110 01100101 01101001
CB3: 11010110 11011011 10110111 11111011

Step 4: Apply the EX-OR operation between CB3 and KB4:
CB3: 11010110 11011011 10110111 11111011
KB4: 01101001 01101001 10000010 10000011
CB4: 10111111 10110010 00110101 01111000

Now CB1, CB2, CB3 and CB4 are the cipher blocks corresponding to key blocks KB1, KB2, KB3 and KB4.

Decryption Method:

Step 1: Apply the EX-OR operation between KB4213 and CB1:
KB4213: 11110011 00001011 00010100 11110111
CB1:    10010101 01101110 10010100 10010010
P1:     01100110 01100101 10000000 01100101

Step 2: Apply the EX-OR operation between CB2 and CB3:
CB2: 01010010 10101101 11010010 10010010
CB3: 11010110 11011011 10110111 11111011
P2:  10000100 01110110 01100101 01101001

Step 3: Apply the EX-OR operation between CB3 and CB4:
CB3: 11010110 11011011 10110111 11111011
CB4: 10111111 10110010 00110101 01111000
P4:  01101001 01101001 10000010 10000011

Step 4: Apply the EX-OR operation between KB4213 and P4:
KB4213: 11110011 00001011 00010100 11110111
P4:     01101001 01101001 10000010 10000011
X1:     10011010 01100010 10010110 01110100

Step 5: Apply the EX-OR operation between X1 and P2:
X1: 10011010 01100010 10010110 01110100
P2: 10000100 01110110 01100101 01101001
X2: 00011110 00010100 11110011 00011101

Step 6: Apply the EX-OR operation between X2 and P1:
X2: 00011110 00010100 11110011 00011101
P1: 01100110 01100101 10000000 01100101
P3: 01111000 01110001 01110011 01111000

Here P1, P2, P3 and P4 are the plain text blocks with respect to key blocks KB1, KB2, KB3 and KB4. To recover P3, we use KB4213, P4, P2 and P1.

Decrypted text: BAPATLAENGINEERS

5. Conclusion
In this paper we have specified a new algorithm based on the block cipher principle. The algorithm uses logical operations such as XOR and shift operations to increase security, and it is explained clearly with the help of an example. Our method is essentially a block cipher method, and it takes less time even when the file size is large. The important property of the proposed method is that it is almost impossible to break the encryption without knowing the exact key value. We propose that this encryption method can be applied for data encryption and decryption in any type of public application for sending confidential data.

6. References
[1] Dripto Chatterjee, Joyshree Nath, Suvadeep Dasgupta, Asoke Nath, "A new Symmetric key Cryptography Algorithm using extended MSA method: DJSA symmetric key algorithm," 2011 International Conference on Communication Systems and Network Technologies, 978-0-7695-4437-3/11, © 2011 IEEE.
[2] Yan Wang and Ming Hu, "Timing evaluation of the known cryptographic algorithms," 2009 International Conference on Computational Intelligence and Security, 978-0-7695-3931-7/09, © 2009 IEEE, DOI 10.1109/CIS.2009.81.
[3] A. Nath, S. Ghosh, M. A. Mallik, "Symmetric key cryptography using random key generator," Proceedings of the International Conference on SAM-2010, Las Vegas (USA), 12-15 July 2010, Vol. 2, pp. 239-244.
[4] A. Nath, S. Das, A. Chakrabarti, "Data Hiding and Retrieval," Proceedings of the IEEE International Conference on Computer Intelligence and Computer Network, Bhopal, 26-28 Nov. 2010.
[5] Neal Koblitz, "A Course in Number Theory and Cryptography," Second Edition, Springer-Verlag.
[6] T. Morkel, J. H. P. Eloff, "Encryption Techniques: A Timeline Approach," Information and Computer Security Architecture (ICSA) Research Group proceedings.
[7] William Stallings, "Data and Computer Communications," 6th Edition, 2005.
[8] Joan Daemen and Vincent Rijmen, AES submission document on Rijndael, Version 2, September 1999.


Sinter coolers

Ramireddy. Pavankalyan Reddy

Telukutla. Harika Sivani

Dept. of Electrical and Electronics Engineering, Lakkireddy Balireddy College of Engineering, Mylavaram, Krishna district, Andhra Pradesh, India. pawankalyanreddy@gmail.com

Dept. of Electrical and Electronics Engineering, Lakkireddy Balireddy College of Engineering, Mylavaram, Krishna district, Andhra Pradesh, India. harika6rajasekhar@gmail.com

Abstract— At present, distributed generation (DG) is a research focus all over the world. As a kind of DG system, a cogeneration system utilizing waste heat from the sintering-cooling process plays an important role in modern iron and steel enterprises.

I. INTRODUCTION

The most frequently discussed and worrying issue nowadays is global warming: the increase of the earth's average temperature due to greenhouse gases, which trap heat that would otherwise escape from the earth. Recent studies indicate that the waste heat produced by industry (large-scale industries such as steel-making plants and oil refineries) is now deteriorating the environment even more quickly than the greenhouse gases mentioned above, so we convert the waste heat produced by steel-making industries into electricity in order to reduce that heat, even if only by a small amount. Most steel plants now use sinter plants and sinter coolers to convert iron into steel, and these produce exhaust steam in large quantities.

II. SINTER PLANT

Sintering is an agglomeration process in which fine mineral particles fuse into a porous mass by incipient fusion, caused by heat produced by combustion within the mass itself. Iron ore fines, coke breeze, limestone and dolomite, along with recycled metallurgical wastes, are converted into an agglomerated mass at the sinter plant, which forms 70-80% of the iron-bearing charge in the blast furnace. The vertical speed of sintering depends on the suction created under the grate. At VSP, two exhausters are provided for each machine to create a suction of 1500 mm water column under the grate. There are several types of sintering machines based on their construction and working: a) belt type, b) stepping type, c) air-draft type, d) box type, and so on. Smelting is the metallurgical term for extracting metal, and blast furnaces are used for smelting; such furnaces are named differently in different contexts — bloomeries for iron, blowing houses for tin, smelt mills for lead, and sinter plants feeding base metals such as steel, copper and iron. Iron ore cannot be charged directly into a blast furnace. In the early 20th century, sinter technology was developed for converting ore fines into lumpy material chargeable in blast furnaces. Although it took time to gain acceptance in the iron-making domain, it now plays an important role in steel production and allows metallurgical waste generated in steel plants to be recycled to enhance blast furnace operation.

III. WASTE HEAT RECOVERY IN SINTER PLANT

In a sinter plant, sensible heat can be recovered both from the exhaust gases of the sinter machine and from the off-air of the sinter cooler. Heat recovery can take different forms:
 Hot air streams from both the sinter machine and the sinter cooler can be used to generate steam by installing recovery boilers. This steam can be used to generate power or as process steam. For increased heat-recovery efficiency, a high-temperature exhaust section should be separated from the low-temperature exhaust section, and heat should be recovered only from the high-temperature section.
 Sinter machine exhaust can be recirculated to the sinter machine, either after passing through a heat recovery boiler or without it.


 Heat recovered from the sinter cooler can be recirculated to the sinter machine, or used to preheat the combustion air in the ignition hood and the raw mix fed to the sinter machine. It can also be used to produce hot water for district heating.

IV. SINTER PLANT COOLER WASTE HEAT RECOVERY SYSTEM

Fig. 1 Block diagram of sinter cooler plant

This is a system for recovering the sinter cooler's high-temperature exhaust gas as steam, which can be used for power generation. Furthermore, reuse of the exhaust heat as the thermal source for sintered-ore production improves the productivity of the sinter machines. The main principle of the system is converting heat into steam and then using the normal generation process, in which the turbine provides mechanical energy as input to the generator in order to obtain electricity as output. The system recovers sensible heat from hot air emitted by the cooling process of two sinter coolers located downstream of two sinter machines. The heat is captured by heat recovery hoods and then routed to a heat recovery boiler to generate superheated steam, which is converted to electricity by a turbine connected to a generator.

A. Features
 Waste gas heat of a sintering plant is recovered as steam or electric energy. The heat-recovery efficiency is 60% for waste gas from the cooler and 34% for waste gas from the sintering machine proper.
 Waste gas heat recovery from the sintering machine proper also reduces coke consumption.
 The system is applicable whether the cooler is of a circular or a linear type.
 CO2 emissions can be reduced, opening the possibility of employing this system in a CDM project.

V. RECOVERY OF WASTE HEAT EMITTED BY THE COOLING PROCESS INTO STEAM

This system works like a heat-recovery ventilator: whenever the heat recovery hoods take heat from the sinter cooler, it is fed directly to the boiler. The water-tube boiler, whose tubes carry water heated by this hot recovered air, converts the water into steam. The steam drives the turbine, producing mechanical energy that is input to the generator, and the generator delivers electricity.

Fig. 2 Recovery of waste heat emitted

VI. ADVANTAGES AND DISADVANTAGES

A. Advantages
1) Reduction in pollution: A number of toxic combustible wastes such as carbon monoxide gas, sour gas, carbon-black off-gases, oil sludge, acrylonitrile and other plastic chemicals, which would otherwise be released to the atmosphere, serve a dual purpose when burnt in incinerators: the heat is recovered and environmental pollution levels are reduced.
2) Reduction in equipment sizes: Waste heat recovery reduces fuel consumption, which reduces the flue gas produced. This allows smaller flue-gas handling equipment such as fans, stacks, ducts and burners.
3) Reduction in auxiliary energy consumption: Smaller equipment gives additional benefits in the form of reduced auxiliary energy consumption, such as electricity for fans and pumps. Recovery of waste heat has a direct effect on the efficiency of the process, reflected in lower utility consumption, utility costs and process cost.

B. Disadvantages
1) Capital cost: The capital cost of implementing a waste heat recovery system may outweigh the benefit gained from the heat recovered. It is necessary to put a cost on the heat being offset.
2) Quality of heat: Waste heat is often of low quality (temperature), and it can be difficult to utilize the quantity of low-quality heat contained in a waste-heat medium efficiently. Heat exchangers tend to be larger when recovering significant quantities, which increases capital cost.
3) Maintenance of equipment: Additional equipment requires additional maintenance cost.
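As a rough illustration of the recovery figures quoted above (60% heat-recovery efficiency for the cooler off-gas), the recoverable electric power can be estimated from the sensible heat of the hot-air stream. All numeric inputs below except the 60% figure (mass flow, temperatures, specific heat, steam-cycle efficiency) are assumed example values, not data from any plant:

```python
# Back-of-envelope estimate of power from sinter-cooler waste heat.
# Assumed example values only -- not measurements from a real plant.
mass_flow_kg_s = 150.0           # cooler off-air mass flow (assumed)
cp_air_kj_kgk = 1.005            # specific heat of air at constant pressure
t_hot_c, t_out_c = 350.0, 120.0  # assumed hood inlet / stack temperatures
recovery_eff = 0.60              # heat-recovery efficiency quoted for the cooler
cycle_eff = 0.25                 # assumed steam-cycle (turbine + generator) efficiency

# Sensible heat rate of the hot-air stream: m * cp * dT (kW, since cp is kJ/kg/K)
sensible_kw = mass_flow_kg_s * cp_air_kj_kgk * (t_hot_c - t_out_c)
steam_kw = sensible_kw * recovery_eff  # heat actually captured as steam
electric_kw = steam_kw * cycle_eff     # electricity after the steam cycle

print(f"sensible heat: {sensible_kw:.0f} kW")
print(f"recovered as steam: {steam_kw:.0f} kW")
print(f"electric output: {electric_kw:.0f} kW")
```

Under these assumptions a single cooler stream yields a few megawatts of electricity, which is consistent with the paper's point that the recovered power is worthwhile but sensitive to the quality (temperature) of the waste heat.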


CONCLUSION

To meet the increasing world demand for energy, the rate of depletion of non-renewable energy sources must be reduced while alternative renewable sources are developed. This can be achieved by increasing the overall thermal efficiency of conventional power plants, and one way to do so is waste heat recovery. Most of the techniques currently available recover waste heat as thermal energy, which is then converted to electricity in a conventional steam power plant. Another approach, which has received little attention so far, is direct conversion of thermal waste energy into electricity. In this article, a configuration of a waste heat recovery system is described: we studied the composition and characteristics of waste heat resources and set out a typical process of energy recovery, conversion and utilization.

Waste heat accounts for almost 20% of the contribution to global warming, and the largest share of this heat comes from large-scale industries and power plants. Although about 70% of our steel plants contain sinter plants, their waste heat has so far been emitted only to the atmosphere, with no circulation system. The system described here is expected to promote energy efficiency by utilizing waste heat, thereby reducing CO2 emissions. It also improves environmental performance, because the cooling air is used in a closed cycle without emitting a high concentration of dust into the atmosphere, and power shortages can be alleviated.



Chaos CDSK Communication System Arathi. C M.Tech Student, Department of ECE, SRKR Engineering College, Bhimavaram, India arathidhaveji.c@gmail.com

Abstract: In recent years, chaotic communication systems have emerged as an alternative to conventional spread-spectrum systems. The chaotic carrier used in this kind of modulation-demodulation scheme has unique properties that make it suited for secure and multi-user communications. The security of a chaos communication system is superior to other digital communication systems because of characteristics such as non-periodicity, wide bandwidth, non-predictability, easy implementation and sensitivity to initial conditions. In this paper, a new approach for communication using chaotic signals is presented.

Keywords — Chaos Communication System, CDSK

I. INTRODUCTION

Earlier digital communication technology relied on linear systems. However, as that technology reached its basic limits, efforts turned to improving the performance of nonlinear communication systems by applying chaos communication systems to them [1]. Chaos communication systems have characteristics such as non-periodicity, wide bandwidth, non-predictability and easy implementation. A chaos communication system is determined by the initial conditions of its governing equation and is highly sensitive to them: the chaos signal changes into a completely different signal when the initial condition is changed [2]. A chaos signal is a randomly and non-linearly generated signal. If the initial conditions of the chaos signal are not known exactly, users of the system cannot predict its value, because of its sensitive dependence on initial conditions [1][3]. Owing to these characteristics, the security of a chaos communication system is superior to other digital communication systems. Due to security and other advantages, chaos communication systems are being studied continuously. In existing research, to address the disadvantage that the bit error rate (BER) performance of such systems is poor, BER performance has been evaluated according to the chaos map used, seeking the map with the best BER performance [4]; BER performance has also been evaluated according to the chaos modulation system [5][6], and new chaos maps with better BER performance have been proposed. In this paper, the BER performance of the chaotic CDSK system is evaluated in AWGN and Rayleigh fading channels. In an earlier study we proposed a novel chaos map to improve BER performance [7], which we named the "Boss map".

II. CHAOTIC SYSTEM

A chaotic dynamical system is an unpredictable, deterministic and uncorrelated system that exhibits noise-like behavior through its sensitive dependence on initial conditions, and it generates sequences similar to PN sequences. Chaotic dynamics have been successfully employed in various engineering applications such as automatic control, signal processing and watermarking. Since the signals generated by chaotic dynamical systems are noise-like, supersensitive to initial conditions, and have a spread, flat spectrum in the frequency domain, it is advantageous to carry messages with this kind of wide-band signal, which offers high communication security. Numerous engineering applications of secure communication with chaos have been developed [8].

III. CHAOTIC SIGNALS

A chaotic sequence is a non-converging, non-periodic sequence that exhibits noise-like behavior through its sensitive dependence on its initial condition [1]. A large number of uncorrelated, random-like, yet deterministic and reproducible signals can be generated by changing the initial value. The sequences so generated by chaotic systems are called chaotic sequences [8]. Chaotic sequences have proven easy to generate and store. Merely a chaotic map and an


initial condition are needed for their generation, which means that there is no need to store long sequences. Moreover, a large number of different sequences can be generated by simply changing the initial condition. More importantly, chaotic sequences can be the basis for very secure communication. The secrecy of transmission is important in many applications, and chaotic sequences help achieve security from unwanted reception in several ways. First of all, they make the transmitted signal look like noise, so it does not attract the attention of an unfriendly receiver; an eavesdropper would have a much larger set of possibilities to search through in order to obtain the code sequences [4][8]. Chaotic sequences are created using discrete chaotic maps. Although the sequences so generated are completely deterministic and sensitive to the initial value, they have characteristics similar to those of random noise. Surprisingly, the maps can generate large numbers of these noise-like sequences with low cross-correlations. The noise-like feature of the chaotic spreading code is very desirable in a communication system, as it greatly enhances the LPI (low probability of intercept) performance of the system [4]. These chaotic maps are utilized to generate infinite sequences with different initial parameters to carry different user paths, meaning that different user paths spread the spectrum based on different initial conditions [8].

IV. SYSTEM OVERVIEW

A. Correlation delay shift keying system
The CDSK system has an adder in the transmitter. Modulation systems existing before CDSK used a switch in the transmitter, and problems of power waste and eavesdropping occurred because each chip was transmitted twice. CDSK was proposed to overcome these problems: the transmitted signal does not repeat, because the switch in the transmitter is replaced with an adder [9].

Figure 1: Transmitter of CDSK system

The CDSK transmitter is an adder in which the delayed chaotic signal, multiplied by the information bit d ∈ {+1, −1}, is added to the chaotic signal generated by the chaotic signal generator. The information bit, spread by the spreading factor, is multiplied by the chaotic signal delayed by L samples:

s_k = x_k + d · x_{k−L}   (1)

Equation (1) gives the signal transmitted from the transmitter.

Figure 2: Receiver of CDSK system

The CDSK receiver is a correlator-based receiver that recovers the symbol as follows. The received signal and the delayed received signal are multiplied, and the products are accumulated over the spreading factor. The accumulated value then passes through a threshold, and the information signal is recovered by decoding. Information bits can only be recovered when the receiver uses exactly the same delay time and spreading factor as the transmitter.

B. Chaos maps
In this paper, the chaos maps used are the Tent map and the Boss map. The Boss map is a novel chaos map that we proposed in an earlier study for BER performance improvement [8].

Figure 3: Trajectory of tent map
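The transmitter and receiver just described can be sketched numerically in Python. This is an illustrative model, not the authors' simulation: the delay `L`, the spreading factor `BETA`, the use of a standard tent map as the chaotic generator, and the noiseless channel are all assumptions:

```python
L = 8       # delay between reference and data chips (assumed)
BETA = 256  # spreading factor: chaotic samples per information bit (assumed)

def tent_sequence(n, x0=0.1, alpha=1.9999):
    """Zero-mean chaotic sequence from a standard tent map
    x_{n+1} = alpha * min(x_n, 1 - x_n); the paper's exact
    parameterization of equation (2) may differ."""
    seq, x = [], x0
    for _ in range(n):
        x = alpha * min(x, 1.0 - x)
        seq.append(x - 0.5)  # centre around zero for correlation
    return seq

def cdsk_transmit(bits, c):
    """Equation (1): s_k = x_k + d * x_{k-L}, implemented with
    x_k = c[k + L] so the delayed chip c[k] always exists."""
    s = []
    for i, b in enumerate(bits):
        d = 1 if b else -1
        for k in range(i * BETA, (i + 1) * BETA):
            s.append(c[k + L] + d * c[k])
    return s

def cdsk_receive(r):
    """Correlator receiver: accumulate r_k * r_{k-L} over each
    symbol, then threshold at zero (the first L edge samples of
    the first symbol are skipped)."""
    bits = []
    for i in range(len(r) // BETA):
        start = max(i * BETA, L)
        corr = sum(r[k] * r[k - L] for k in range(start, (i + 1) * BETA))
        bits.append(1 if corr > 0 else 0)
    return bits
```

In a noiseless run the correlator recovers the bits because the product r_k · r_{k−L} contains the term d · x²_{k−L}, whose sum over a symbol dominates the chaotic cross terms; adding Gaussian noise to `s` before `cdsk_receive` would turn this into the AWGN experiment described in section V.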


Figure (3) shows the trajectory of the Tent map. The x-axis and y-axis represent x_n and x_{n+1}, and the Tent map traces a triangular trajectory.

x_{n+1} = α − b|x_n − c| ≡ F(x_n)   (2)

Equation (2) expresses the Tent map. It uses each output value as the next input value; the trajectory shown in figure (3) uses initial value 0.1 and parameter α = 1.9999.

Figure 4: Trajectory of boss map

Figure (4) shows the trajectory of the Boss map, the novel map proposed to improve BER performance. Unlike the Tent map, the x-axis and y-axis of the Boss map represent x_n and y_n, and the map traces a pyramid-like trajectory.

x_{n+1} = 0.45|0.503 − y_n|,  y_{n+1} = x_n − 0.3   (3)

Equation (3) expresses the Boss map. Its form is similar to the Tent map, because the Boss map was obtained by transforming the Tent map. The trajectory shown in figure (4) uses initial value 0.1 and parameter α = 2.5.

V. PERFORMANCE EVALUATION

In this paper, the BER performance of the chaotic CDSK system is evaluated for the Tent map and the Boss map in an AWGN (additive white Gaussian noise) channel and a Rayleigh fading channel. Figure (5) shows the BER performance of the chaotic CDSK system in the AWGN channel for both maps. The BER performance with the Boss map is better than with the Tent map: at low SNR the BER is the same for both maps, but as SNR increases the BER with the Boss map falls below that of the Tent map.

Figure 5: BER analysis in AWGN channel

Figure (6) shows the BER performance of the chaotic CDSK system in the Rayleigh fading channel, again evaluated for both the Tent map and the Boss map. The BER is the same for both maps at low SNR, but as SNR increases the Boss map achieves better BER performance than the Tent map.

Figure 6: BER performance in Rayleigh fading channel

VI. CONCLUSION

In this paper, a new type of communication system using chaos is proposed. Chaos sequences are non-periodic sequences which are sensitive to their


initial conditions. Chaos sequences are generated using a chaos map. A CDSK system using chaos has many advantages over other systems, but the BER performance of chaos communication systems is poor. To improve this, we proposed a new chaos map that has better BER performance than the existing map. The chaotic CDSK system was evaluated in AWGN and Rayleigh fading channels, and we observed that the BER performance of the chaos system with the Boss map is better than with the Tent map, improving the BER of the CDSK communication system.

VII. FUTURE SCOPE

A chaos communication system increases the number of transmitted symbols by spreading and transmitting information bits according to the characteristics of chaos maps, so research that improves the data transmission speed is necessary for chaos communication systems. If multiple antennas are applied to a chaos communication system, the data capacity is proportional to the number of antennas, so applying multiple-input multiple-output (MIMO) to the chaos communication system is a promising direction.

VIII. REFERENCES

1. M. Sushchik, L. S. Tsimring and A. R. Volkovskii, "Performance analysis of correlation-based communication schemes utilizing chaos," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 47, no. 12, pp. 1684-1691, Dec. 2000.
2. Q. Ding and J. N. Wang, "Design of frequency-modulated correlation delay shift keying chaotic communication system," IET Communications, vol. 5, no. 7, pp. 901-905, May 2011.
3. Chen Yi Ping, Shi Ying and Zhang Dianlun, "Performance of differential chaos-shift-keying digital communication systems over several common channels," 2010 2nd International Conference on Future Computer and Communication (ICFCC), vol. 2, pp. 755-759, May 2010.
4. Suwa Kim, Junyeong Bok and Heung-Gyoon Ryu, "Performance evaluation of DCSK system with chaotic maps," 2013 International Conference on Information Networking (ICOIN), pp. 556-559, Jan. 2013.
5. S. Arai and Y. Nishio, "Noncoherent correlation-based communication systems choosing different chaotic maps," Proc. IEEE Int. Symp. on Circuits and Systems, New Orleans, USA, pp. 1433-1436, June 2007.
6. Jun-Hyun Lee and Heung-Gyoon Ryu, "New Chaos Map for CDSK Based Chaotic Communication System," The 28th International Technical Conference on Circuit/System, Computers and Communication (ITC-CSCC 2013), Yeosu, Korea, pp. 775-778, July 2013.
7. M. A. Ben Farah, A. Kachouri and M. Samet, "Design of secure digital communication systems using DCSK chaotic modulation," Design and Test of Integrated Systems in Nano-scale Technology (DTIS 2006), pp. 200-204, Sept. 2006.
8. Ned J. Corron and Daniel W. Hahs, "A new approach to communication using chaotic signals," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 44, no. 5, May 1997.
9. Wai M. Tam, Francis C. M. Lau and Chi K. Tse, "Generalized Correlation-Delay-Shift-Keying Scheme for Noncoherent Chaos-Based Communication Systems," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 53, no. 3, March 2006.


A Review on Development of Smart Grid Technology in India and its Future Perspectives

Abstract- India is struggling to meet the electric power demands of a fast expanding economy. Restructuring of the power industry has only increased the challenges for power system engineers. The proposed vision of introducing a viable Smart Grid (SG) at various levels of the Indian power system recommends that an advanced automation mechanism be adopted. Smart Grids are introduced to make grid operation smarter and more intelligent. Smart grid operations, upon appropriate deployment, can open up new avenues and opportunities with significant financial implications. This paper presents various Smart Grid initiatives and implications in the context of power market evolution in India. Various examples of existing automation structures in India are employed to underscore some of the views presented. The paper also reviews the progress made in Smart Grid research and development since its inception, and attempts to highlight the current and future issues involved in developing Smart Grid technology for future demands from an Indian perspective.

Index Terms-- Smart Grid; Indian Electricity Act 2003; Availability Based Tariff (ABT); Demand Side Management (DSM); Renewable Energy; Rural Electrification (RE); Micro Grid.

I. INTRODUCTION

The economic growth of developing countries like India depends heavily on the reliability and quality of its electric power supply. The Indian economy was anticipated to grow at 8 to 9% in the 2010-2011 fiscal year, and in the coming years this is set to reach double-digit growth (10%+) [1]. But India suffers from a serious power shortage which is likely to worsen over the next few decades. The Indian power sector is characterized by deficient generation and high distribution losses. In addition, adverse geological and environmental factors have enlarged the country's carbon footprint through CO2 emissions and the greenhouse effect, compounded by the adverse effects of globalization [2]. This may cause instability in the power system, and problems like brownouts and blackouts might arise. To prevent such instability, it is essential to upgrade the prevailing power systems. One such emerging technology, the Smart Grid (SG), plays a vital role in achieving key technical benefits like power loss reduction, improved quality of supply, peak reduction, and economic load dispatch. Smart Grid technology has been a high-priority topic of research and development in many developing as well as developed countries, and has a dynamic role in remodeling the energy scenario of the global market. Factors like policies, regulation, market efficiency, costs, benefits, and services shape the marketing strategy of Smart Grid technology. Other concerns like secure communication, standard protocols, advanced database management, and efficient architectures with ethical data exchange add to its essentials [3]. Such technology has the potential to complement other technologies like the Flexible AC Transmission System (FACTS)


and Wide Area Monitoring (WAM) to redefine the capabilities of power system engineering and unite the needs of the rural, suburban and urban regions across the globe under a single roof [4]. In addition, the technology enables the reduction of carbon footprints and slows greenhouse gas emissions. This paper describes the Smart Grid initiatives along with various examples of existing automation structures in India. It also reviews the advancement of Smart Grid technology in R&D, initiated by various public and private sector organizations supported by prominent institutions across the globe. The current and future issues involved in developing Smart Grid technology for future demands are also discussed. The organization of the paper is as follows: In section II, an overview of the Indian power market along with its current power system strategy is presented. Section III describes India's vision of Smart Grid (SG) technology, with section IV describing the prevailing units and their future enactments. Section V reveals some of the required focus areas and the advent of enhanced smart grid technologies. Section VI is dedicated to the general conclusion, followed by references.


II. OVERVIEW OF THE INDIAN POWER MARKET AND ITS STRATEGY

The re-evaluation of the Indian Electricity Supply Act, 1948 and the Indian Electricity Act, 1910 led to the Electricity Act 2003, which has enabled government and many non-government organizations to participate in meeting the electricity demand. The act redefines the power market economy, the protection of consumers' interests, and the provision of power to urban, suburban and rural regions across the country. The act recommends the provision of a national policy, Rural Electrification (RE), open access in transmission, phased open access in distribution, mandatory State Electricity Regulatory Commissions (SERCs), license-free generation and distribution, power trading, mandatory metering, and stringent penalties for theft of electricity. In addition to these guidelines, a concept called Availability Based Tariff (ABT) has been implemented to bring effective day-ahead scheduling and frequency-sensitive charges for deviation from the schedule, for efficient real-time balancing and grid discipline. Exclusive terms like fixed cost and variable cost, and the unscheduled interchange (UI) mechanism in ABT, act as a balancing market in which the real-time price of electricity is determined by availability and the capacity to deliver GWs on a day-to-day basis, based on scheduled energy production and system frequency. The Indian power system has an installed capacity of around 164 GW and meets a peak demand of 103 GW. According to the current five-year plan (2007-2012), by the year 2012 the installed capacity is estimated to be over 220 GW and the peak demand is expected to be around 157 GW, and it is projected to reach about 800 GW within the next two decades. However, certain complexities are envisaged in integrating IPPs into the grid, such as demarcation, scheduling, settlement and gaming; these issues are being addressed by proper technical and regulatory initiatives. In addition, the transmission sector has progressed at a substantial rate, with a current installed capacity of 325,000 MVA at the 765, 400 and 220 kV voltage levels and 242,400 circuit kilometers (ckt-km) of HVAC and HVDC transmission network, including a 765 kV transmission system of 3,810 ckt-km. In the distribution sector, the Ministry of Power has also maneuvered to leverage digital technology to transform


and reshape the power sector in India into an open and flexible architecture, so as to meet the core challenges and burning issues and obtain the highest return on investment for the technology. The Electricity Act 2003 created a liberal and competitive environment, facilitating investment by removing energy barriers and redefining the role of system operation of the national grids. New transmission pricing, loss allocation schemes, the ULDC scheme and Short Term Open Access (STOA) schemes have been introduced based on distance and direction, so that power can be traded from any utility to any utility across the nation on a nondiscriminatory basis. Currently, the Indian transmission grid is operated by a pyramid of 1 NLDC, 5 RLDCs and 31 SLDCs, monitoring round the clock with a SCADA system offering both fish-eye and bird's-eye views, along with advanced wideband speech and data communication infrastructure. In addition, other key features like smart energy metering, the Common Information Model (CIM), the Component Interface Specification (CIS), synchrophasor technology, Wide Area Monitoring (WAM) using phasor measurements, enhanced visualization and self-healing functions are being exclusively employed.

III. VISION OF INDIA ON SMART GRID TECHNOLOGY

As a consequence of cutting-edge technology, buzzwords like energy conservation and emission reduction, green sustainable development, safety factors, reduction of T&D losses and optimal utilization of assets have become the core of discussion. As India is struggling to meet its electricity demands, both in terms of energy and peak load, Smart Grids can help better manage the shortage of power and optimize the state of the power grid in the country. A "Smart Grid" is a conception of remodeling the scenario of the nation's electric power grid through the convergence of information and operational technology applied to the electrical grid, allowing sustainable options to customers and upgraded security, reliability and efficiency to utilities. The elite vision of Smart Grid (SG) technology allows energy to be generated, transmitted, distributed and utilized more effectively and efficiently. Demand Side Management (DSM) is an essential practice for optimized and effective use of electricity, particularly in developing countries like India where the demand is in excess of the available generation. Such non-technical losses can be overcome by an electricity grid which focuses on advanced control and communication protocols integrated with the utility, providing a complete package for the requirements of a "Smart Grid". With the introduction of the Indian Electricity Act 2003, the APDRP was transformed into the restructured APDRP (R-APDRP), which has improved operation and control, and has attempted a seamless integration of generation (including distributed energy resources (DER)), transmission and distribution systems through the use of intervening information technology (IT) that employs high-speed computers and advanced communication networks; employing open standards with vendor neutrality is deemed a cornerstone for embracing the up-and-coming conceptualization of the Smart Grid in the Indian scenario. A vivid study of the power scenario is illustrated, classified according to the timeline, in brief. Beginning with power strategy management in the past, the whole system was monitored and controlled using the telephonic medium, which was purely a blue-collar job, and the system was solely dependent on a single generation unit or the interconnected substations. With further progress in science and technology, the


system is monitored round the clock using advanced data communication protocols, and the substation has an islanding facility with immediate power backups to keep the grid stable. In a developing country such as India, the power system scenario changes on an exponential basis; moreover, the system is expected to become more reliable and flexible with advancements in data communication and data analysis facilities. Fig. 1 illustrates this advancement and its immediate results during future implementation. The conclusive approach for the Indian Smart Grid would accordingly be visualized with the latest technological advancements and extensive features, as shown in Fig. 2.

IV. SMART GRID INITIATIVES IN INDIA

As acknowledged earlier, Smart Grid technology has a widespread goal of transforming the Indian power grid from a technology-based standard to a performance-based standard. The Ministry of Power (MoP) participated in the SMART 2020 event with "The Climate Group" and the Global e-Sustainability Initiative (GeSI) in October 2008, which aimed to highlight the reports relevant to key stakeholders in India. Unfortunately, the possible "way forward" has not yet been drilled out and is still a question mark for the Government. But to facilitate demand side management, distribution networks have been fully augmented and upgraded for IT enablement, which has enhanced the grid network with improved customer service. Table-1 provides a brief analysis of some of the initiatives which have been taken under the supervision of many government and private bodies and allies. In view of the multitude of benefits that could be accrued, it is suggested that there should be ample government regulatory support and policy initiatives to move towards Smart Grids. India is in the nascent stage of implementing various other control and monitoring technologies, one of which is ADA. Further research is being carried out in some of the elite institutes in the country in collaboration with various multinational companies and power sector organizations across the nation.

(Figure: Hierarchy of the super grid.)


V. ENHANCED SMART GRID TECHNOLOGY

Due to the advent of advanced information and communication technology (ICT) and the proliferation of green energy, Smart Grid technology is likely to transform into a more superior and advanced form. Some newly innovated prospects, like renewable energy integration, rural electrification and the micro grid, are to be featured in it [25].

A. Renewable Energy Integration

Present-day environmental awareness, resulting from coal-fired power stations, has fortified interest in the development of modern smart grid technology and its integration with green and sustainable energy. Table-2 provides a brief analysis of the renewable energy development in India, which has been planned according to the Five Year Plans by the Indian Government and the Ministry of New and Renewable Energy (MNRE).

With the penetration of renewable energy, the energy transition converges on reduced carbon footprints, a cleaner environment, plug-in EVs and decentralized power, which increases the quality of living standards and enhances power system quality along with the stability of the grid network. On the contrary, it also poses some potential power quality challenges, such as voltage regulation, power system transients and harmonics, reactive power compensation, grid synchronization, energy storage, load management and poor switching action [27]. These problems arise mainly for major renewable energy sources like wind and solar energy; other sources like biomass, hydro and geothermal pose no such significant problem on grid integration.

Integration of renewables with the Smart Grid makes the system more reliable and flexible in economic load dispatch, not only in a specified location but over a wide area, even between nations. The Nordic countries have practiced such grid integration among neighboring nations, and further implementations are being pursued [28]. Forecasting approaches, design algorithms and other models are being developed by many research teams and are to be deployed in many regions nationwide. Fig. 4 represents a brief analysis of the application of renewables in smart grid technology across the whole power system network.

(Table-1: Smart Grid initiatives in India by various organizations.)

The volatility of fossil fuels has opened the ground for new and renewable energy sources. Given their inherent unpredictability, implementations of renewables need


to be backed by motivating government policies and well-established standards. Proper financial support is the governing factor for a generation-deficient and developing country like India. Wind and photovoltaic generation should be supported by upcoming technologies like the Micro Grid and ICT [27]. Such emerging technologies will play a major role in a sustainable standard of living with economic benefits.


B. Rural Electrification

Technologies are advancing day by day. Smart distribution technologies allowing for increased levels of distributed generation have a high potential to address rural electrification needs and to minimize the erection costs, transmission losses and maintenance costs associated with large transmission grids. Rural Electrification Corporation Limited (REC) is a leading public infrastructure finance company in India's power sector which finances and promotes rural electrification projects across the nation, operating through a network of 13 project offices and 5 zonal offices. Along with this, the Government of India has launched various programs and schemes for the successful promotion and implementation of rural electrification. One such major scheme is the Rajiv Gandhi Grameen Vidyutikaran Yojana (RGGVY). Other schemes include the Pradhan Mantri Gramodaya Yojana (PMGY), three-phase feeders with single phasing and smart metering, the Kutir Jyoti Program (KJP), the Accelerated Rural Electrification Program (AREP), the Rural Electricity Supply Technology Mission (REST), the Accelerated Electrification of one hundred villages and 10 million households, the Remote Village Renewable Energy Programme (RVREP) and the Grid-connected Village Renewable Energy Programme (GVREP) [5], [29-30]. Some of them have achieved remarkable success, but some got trapped by various non-technical issues [31], [32]. Some of the key features of such projects are: achieving 100% electrification of all villages and habitations in India, providing electricity access to all households, free-of-cost electricity to BPL households, DG systems, smart metering, promoting funding, financing and facilitating alternative approaches to rural electrification, and single-light solar lighting systems for remote villages and their hamlets. The present rural electrification scenario in the nation is still uncertain, and is yet to be further explored and verified by the Ministry of Power (MoP) and the Ministry of New and Renewable Energy (MNRE). Over 500,000 of India's 600,000 villages are deemed to be electrified [33]. In such a case, the Indian Government and the Indian business sector would need to invest in more such projects and schemes, in low-footprint technologies, renewable sources of energy, smart metering and resource-efficient infrastructure.

Suggestions for Future Work

As this report has only addressed the grid connection requirements for wind power generation, the study is planned


to extend to the study of photovoltaics (PV) and their grid connection planning in the Indian scenario. Further work related to micro grids and hybrid energy with energy storage systems is planned for completion in the near future. Upon finalization of the entire study, the resulting research perspective would act as an advocate to establish the standing and strategy of the nation's development in power and energy with respect to current and future energy demand.


VI. CONCLUSIONS

This paper has presented a discussion of the Indian power strategy along with its pitfalls in various technical and non-technical themes, with an organized approach to evolving the conceptualization of the Smart Grid. An overview of the Indian power market along with a brief analysis of the power system units has been described. The power market in India is generally characterized by poor demand side management and response, for lack of proper infrastructure and awareness; Smart Grid technology can intuitively overcome these issues. In addition, it can enable reduction of line losses to overcome prevailing power shortages, improve the reliability of supply, improve and manage power quality, safeguard revenues, prevent theft, etc. Model architectures as well as India's Smart Grid initiatives, taken by the government and many private bodies, have been presented. Further, various prospects of sustainable energy and off-grid solutions, Rural Electrification (RE) and the evolution of the Micro Grid, along with various policies and regulatory affairs of India, have also been presented. In this connection, the paper should act as an advocate to bring forth the significance and fortification of the Smart Grid philosophy and its implantation on the basis of the proposed ideology in the Indian subcontinent.

REFERENCES

[1] Sinha, A.; Neogi, S.; Lahiri, R.N.; Chowdhury, S.; Chowdhury, S.P.; Chakraborty, N., "Smart grid initiative for power distribution utility in India," IEEE Power and Energy Society General Meeting, 2011, pp. 1-8, 24-29 July 2011.
[2] "The Green Grid: Energy Savings and Carbon Emission Reductions Enabled by a Smart Grid," EPRI, Palo Alto, CA: 2008.
[3] V.S.K. Murthy Balijepalli, S.A. Khaparde, R.P. Gupta, Yemula Pradeep, "Smart Grid Initiatives and Power Market in India," Proc. of IEEE Power and Energy Society General Meeting, pp. 1-7, Jul. 2010.
[4] Bossart, S.J.; Bean, J.E., "Metrics and benefits analysis and challenges for Smart Grid field projects," Energytech, 2011 IEEE, pp. 1-5, 25-26 May 2011.
[5] "Electricity Act 2003," Govt. of India, New Delhi, 2003.
[6] Central Electricity Authority, 2010. [Online] Available: http://www.cea.nic.in/reports/electricity_act2003.pdf
[7] Ministry of Power, Government of India website. [Online] Available: http://powermin.nic.in, Nov. 2009.
[8] Pradeep, Y.; Thomas, J.; Sabari, C.L.; Balijepalli, V.S.K.M.; Narasimhan, S.R.; Khaparde, S.A., "Towards usage of CIM in Indian Power Sector," IEEE Power and Energy Society General Meeting, 2011, pp. 1-7, 24-29 July 2011.
[9] Central Electricity Authority, 2010. [Online] Available: http://www.cea.nic.in/reports/yearly/energy_generation10_11.pdf
[10] Raoot, M.G.; Pentayya, P.; Khaparde, S.A.; Usha, S., "Complexities in integrating IPPs in Indian power system," IEEE Power and Energy Society General Meeting, 2010, pp. 1-9, 25-29 July 2010.
[11] Central Electricity Authority, 2010. [Online] Available: http://www.cea.nic.in/power sec reports/executivesummary/2010 08/index.htm
[12] Power Grid Corporation of India Limited, "Unified Load Despatch & Communications," pp. 1-7, 25-29 July 2010.


Design and Analysis of Water Hammer Effect in a Network of Pipelines

V. Sai Pavan Rajesh*

*Department of Control Systems, St. Mary's Group of Institutions, Jawaharlal Nehru Technological University Hyderabad, Main Road, Kukatpally Housing Board Colony, Kukatpally, Hyderabad, Telangana, India.

*pavanlbce@gmail.com

Abstract- A hydraulic system risks damage due to transients if it is not provided with adequate protection devices. Generally, a transient takes place when the parameters of normal flow are disturbed with respect to time. Rapid closing of a valve in a pipe network results in a hydraulic transient known as water hammer, which occurs due to a sudden change in the pressure and velocity of the flow with respect to time. Due to this impulsive action, pressure surges are induced in the system and travel along the pipe network with rapid fluid acceleration, leading to dramatic effects like pipeline failure and damage to the system. Considering the importance of hydraulic transient analysis, we design a system capable of verifying a pipe network containing fluid flow. This paper demonstrates the design of different pipe structures in a pipeline network and the analysis of various parameters, like excess pressure distribution, velocity variations and water hammer amplitude with respect to time, using COMSOL Multiphysics v4.3. The magnitude of water transients in the pipeline network at different pressure points is discussed in detail.

Keywords- COMSOL, Pressure distribution, Velocity variation, Water Hammer.

Corresponding Author E-mail*: pavanlbce@gmail.com


I. INTRODUCTION

The key to the conservation of water is good water measurement practice. As fluid runs through a water distribution system, flow control depends on the opening or closing of valves and the starting and stopping of pumps as required. When these operations are performed very quickly, they convert the kinetic energy carried by the fluid into strain energy in the pipe walls, causing hydraulic transient [1] phenomena to come into existence in the water distribution system, i.e., a pulse wave of abnormal pressure is generated which travels through the pipe network. The pressure surges or fluid transients formed in pipelines are referred to as water hammer. This oscillatory form of unsteady flow, generated by sudden changes, results in system damage or failure if the transients are not minimized. The steady-state flow conditions are altered by this effect [2], disturbing the initial flow conditions of the system, which then tends to attain a static flow rate through the introduction of a new steady-state condition. The intensity of water hammer effects depends upon the rate of change of velocity, or momentum. Conventional water hammer analyses provide information, under operational conditions, on two unknown parameters, i.e., pressure and velocity within a pipe system. Effects such as unsteady friction, acoustic radiation to the surroundings, and fluid-structure interaction are generally not taken into account in the standard theory of water hammer, but have been considered in a general approach [3]. Mechanisms acting along the entire pipe section, such as axial stresses in the pipe, and at specific points in the pipe system, such as unrestrained valves, fall under the fluid-structure interaction extension of the conventional water hammer method.

Figure 1. Pipe connected to a control valve at the end, with water inlet from a reservoir.

In the past three decades, since a large number of water hammer events occurred in light-water-reactor power plants [4], a number of comprehensive studies on the phenomena associated with water hammer events have been performed. Water hammer can occur in any thermal-hydraulic system, and it is extremely dangerous for such a system since, if the induced pressure exceeds the pressure range of a pipe given by the manufacturer, it can lead to failure of the pipeline's integrity. Water hammers occurring at power plants are due to rapid valve operation [5], void-induced operation, and condensation-induced water hammer [6]. In existing nuclear power plants, water hammer can occur in the case of an inflow of sub-cooled water into pipes or other parts of the equipment which are filled with steam or a steam-water mixture [7].

Water hammer theory has been proposed to account for a number of effects in biofluids under mechanical stress, such as the origin of Korotkoff sounds during blood pressure measurement [8, 9], or the development of a fluid-filled cavity within the spinal cord [10]. In the voice production system, the human vocal folds act as a valve [11] which induces pressure waves at a specific 'point' in the airways (the glottis), through successive compressing and decompressing actions (the glottis opens and closes repeatedly). Ishizaka was probably the first to advocate, in 1976, the application of water hammer theory, when discussing the input acoustic impedance looking into the trachea [12]. More recently, water hammer theory was invoked in the context of tracheal wall motion detection [13]. Water utilities, industrial pipeline systems, hydropower plants, chemical industries, and the food and pharmaceutical industries all face this water transient problem.


The present work reports the design of different pipe channels and the analysis of the pressure distribution and velocity variation produced along the pipe flow network, observed at a chosen pressure measurement point. Various parameters, like the inlet pressure, wall thickness and measurement point, are varied for the analysis.

II. USE OF COMSOL MULTIPHYSICS

The software package selected to model and simulate the pipe flow module was COMSOL Multiphysics Version 4.3, a powerful interactive environment for modelling. COMSOL Multiphysics was selected because there was previous experience and expertise regarding its use, as well as confidence in its capabilities. This finite element method based commercial software package is used to produce a model and study the flow of liquid in different channels. The software provides the flexibility to select the required module from the model library, which includes COMSOL Multiphysics, the MEMS module, the microfluidics module, the particle tracing module, etc., along with live links to MATLAB. Using tools like parameterized geometry, interactive meshing, and custom solver sequences, a model can quickly be adapted to changing requirements. This software can solve almost all problems in multiphysics systems, and it recreates real-world multiphysics systems without altering their material properties. The operation of the software is easy to understand and to implement in various respects for designers, in the form of a finite element analysis system.

Figure 2. Multiphysics modelling and simulation software: COMSOL.

In this model, as the valve is assumed to close instantaneously, the generated water hammer pulse has a step-function-like shape. Correctly solving this problem requires well-posed numerics. The length of the pipe is meshed with N elements, giving a mesh size dx = L/N. For the transient solver to be well behaved, changes occurring in a time step dt must take place on lengths smaller than the mesh size. This gives the CFL number condition

CFL = c·dt/dx = 0.2    (1)



meaning that changes during the time dt maximally move 20% of the mesh length dx. Thus, increasing the mesh resolution

also requires decreasing the time stepping. This advanced version of the software helps in designing the required geometry freehand, and the model can be analysed from multiple angles, as the software provides rotation flexibility while working with it.
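The CFL relation above fixes the admissible time step once the mesh is chosen. A minimal numerical sketch follows; the pipe length, element count and wave speed are illustrative values, not taken from the paper's model:

```python
# Sketch of the CFL-limited time step for the transient water hammer
# solver. All numerical values below are assumed for illustration.
L = 20.0      # pipe length (m), assumed
N = 200       # number of mesh elements, assumed
c = 1482.0    # speed of sound in water (m/s), typical value
CFL = 0.2     # CFL number used in the paper

dx = L / N            # mesh size
dt = CFL * dx / c     # time step satisfying the CFL condition (1)

print(f"dx = {dx:.4f} m, dt = {dt:.3e} s")
```

Refining the mesh (larger N) shrinks dx and therefore forces a proportionally smaller dt, which is exactly the trade-off noted in the text.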

III. THEORETICAL BACKGROUND

Water hammer theory dates to the 19th century, when several authors contributed work analyzing this effect. Among them, Joukowsky [14] conducted a systematic study of the water distribution system in Moscow and derived a formula that bears his name, relating pressure changes, ∆p, to velocity changes, ∆v, according to the equation

∆p = ρc∆v    (2)

where ρ is the fluid mass density and c is the speed of sound. This relation is commonly known as the "Joukowsky equation", but it is sometimes referred to as either the "Joukowsky-Frizell" or the "Allievi" equation. For a compressible fluid in an elastic tube, c depends on the bulk elastic modulus of the fluid K, on the elastic modulus of the pipe E, on the inner diameter of the pipe D, and on its wall thickness. The water hammer equations are a version of the compressible fluid flow equations, and the choice of version is problem-dependent: basic water hammer neglects friction and damping mechanisms, classic water hammer takes fluid wall friction into account, and extended water hammer allows for pipe motion and dynamic fluid-structure interaction [15, 16].
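As a worked illustration of equation (2), the sketch below combines the Joukowsky estimate with the standard thin-walled (Korteweg) expression for the wave speed in an elastic pipe; all numerical values are assumed for illustration and are not taken from the paper:

```python
import math

# Sketch: Joukowsky pressure rise dp = rho * c * dv, with the wave
# speed c reduced by pipe elasticity via the standard thin-walled
# (Korteweg) correction. Values are illustrative assumptions.
rho = 1000.0     # water density (kg/m^3)
K = 2.1e9        # bulk modulus of water (Pa)
E = 200e9        # elastic modulus of a steel pipe (Pa)
D = 0.3          # inner pipe diameter (m), assumed
e = 0.01         # wall thickness (m), assumed

# Wave speed in the elastic pipe (Korteweg formula)
c = math.sqrt((K / rho) / (1.0 + (K * D) / (E * e)))

# Joukowsky pressure rise for an instantaneous velocity change of 2 m/s
dv = 2.0
dp = rho * c * dv
print(f"c = {c:.0f} m/s, dp = {dp / 1e5:.1f} bar")
```

Note that the elastic pipe wall lowers c below the free-water value of roughly 1480 m/s, and the resulting surge of tens of bar shows why even modest velocity changes can threaten pipeline integrity.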

In water hammer, a pressure wave is a disturbance that propagates energy and momentum from one point to another through a medium without significant displacement of the particles of that medium. A transient pressure wave subjects system piping and other facilities to oscillating high and low pressures. These cyclic loads and pressures can have a number of adverse effects on the hydraulic system. Hydraulic transients can cause hydraulic equipment in a pipe network to fail if the transient pressures are excessively high. If the pressures greatly exceed the pressure ratings of the pipeline, failure through pipe or joint rupture, or through bend or elbow movement, may occur. Conversely, excessively low (negative) pressures can result in buckling, implosion, and leakage at pipe joints during sub-atmospheric phases. Low pressure transients are normally experienced on the downstream side of a closing valve.

When the valve is closed, energy losses are introduced into the system; these are normally prescribed by means of an empirical law in terms of a loss coefficient. This coefficient, ordinarily determined under steady flow conditions, is known as the valve discharge coefficient, especially when the pipeline is terminated by the valve. It makes it possible to quantify the flow response to the valve action through a relationship between flow rate and pressure for each opening position of the valve. The discharge coefficient thus provides the critical piece of missing information for the water hammer analysis. Because the relationship between pressure and flow rate is often of quadratic type, the empirical coefficient is defined in terms of the squared flow rate. A water distribution system comprising only short lengths of pipe (i.e., <2,000 ft [600 m]) will usually be less vulnerable to hydraulic transient problems.
This is because wave reflections (e.g., at tanks, reservoirs, and junctions) tend to limit further changes in pressure and counteract the initial transient effects. An important consideration is dead ends, which may be caused by the closure of check valves that lock pressure waves into the system in cumulative fashion. Wave reflections produce both positive and negative pressures; as a result, the effect of dead ends must be carefully evaluated in transient analysis.

These pressure surge analyses provide the most effective and viable means of identifying weak spots, predicting potentially negative effects of hydraulic transients under a number of worst-case scenarios, and evaluating how they may be avoided and controlled. Basic pressure surge modeling is based on the numerical conservation of mass and linear momentum equations, for which an Arbitrary Lagrangian-Eulerian (ALE) [17] numerical solution helps in providing the exact analytical


solution. Poorly calibrated hydraulic network models, on the other hand, result in poor prediction of pressure surges and thus in more severe hydraulic transients. Especially in more complex systems, the cumulative effect of several types of devices that influence water hammer may be adverse. However, even in simple cases, for example when pumping water into a reservoir, operations very unfavorable with regard to water hammer may take place. For example, after the failure of the pump, the operator may start it again. Much depends on the instant of this restart: if it is done at a time when the entire water hammer effect has died down, it is an operation for which the system must have been designed.

IV. DESIGN PROCEDURE

The design and analysis of the hydraulic transient in a pipe flow includes constructing the geometry, defining the parameters for the required geometry, and providing the mesh and inputs. The 3D model is constructed in the drawing mode of COMSOL Multiphysics. A pipe of length L = 20 m is constructed, assuming that one end is connected to a reservoir while a valve is placed at the other end. The pipe was designed with an inner radius of 398.5 mm, a wall thickness of about 8 mm, and a Young's modulus of 210 GPa. In order to verify the pressure distribution, a pressure sensor measurement point was arranged at a distance of z0 = 11.15 m from the reservoir, and flow was sent into the pipe with an initial flow rate of Q0 = 0.5 m³/s.

Figure 3. Single pipe line with closing valve at the output.

Figure 4. Three pipe line intersection in a network.

After designing the geometry for a flow channel in a pipe, materials are selected from the material browser: water (liquid) and structural steel are taken from its built-in section. Edges are selected for the water flow and steel pipe model sections. Pipe properties are then defined one by one, first selecting the round shape from the shape list of pipe shapes. Initially the reservoir acts as a constant pressure source producing p0, which is equal to 1 atm. As the fluid is allowed to flow from the reservoir tank into the pipe model, it enters the left boundary of the pipe and leaves the right boundary with the valve in the open condition. While the valve is open, water flows at a steady rate; at time t = 0 s the valve on the right-hand side is closed instantaneously, creating a disturbance in the normal flow and a change in discharge at the valve. As a result of the compressibility of the water and the elastic behaviour of the pipe, a sharp pressure pulse is generated travelling upstream of the valve. The water hammer wave speed c is given by the expression

1/ c2 =1/c2s + A

(3)

where cs is the isentropic speed of sound in the bulk fluid (1481 m/s), while the second term represents the contribution of pipe flexibility; ρ is the water density and βA the pipe cross-sectional compressibility. This results in an effective wave speed of 1037 m/s. The instantaneous closure of the valve results in a water hammer pulse of amplitude P given by Joukowsky's fundamental equation [18]

P = ρcu0        (4)

where u0 is the average fluid velocity before the valve was closed. An exact solution can only be obtained based on verification at the pipe system and the valve point [19]. The study was extended to different pipe line intersection models based on the reference



model [20]. To design a three-axis pipe line intersection, three polygons are chosen from the geometry's More Primitives, where the first polygon's x coordinates are (1, z0, L) while y and z remain (0, 0, 0); for the second polygon the y coordinates are (1, z0, L) and the rest are (0, 0, 0); in a similar way the third polygon's z coordinates are (1, z0, L). The resulting geometry is depicted in Figure 4.
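Equations (3) and (4) can be cross-checked numerically with the parameters of Section IV. This is a sketch under stated assumptions: the water density of 1000 kg/m³ and the thin-walled-pipe expression βA = D/(E·e) are our assumptions, and the bulk modulus is inferred from the stated cs = 1481 m/s rather than quoted from the paper.

```python
import math

# Parameters from Section IV (water in a steel pipe).
rho = 1000.0           # water density [kg/m^3] (assumed)
c_s = 1481.0           # isentropic sound speed in bulk water [m/s]
K   = rho * c_s**2     # bulk modulus of water [Pa], inferred from c_s
D   = 2 * 0.3985       # inner diameter [m] (inner radius 398.5 mm)
e   = 0.008            # wall thickness [m]
E   = 210e9            # Young's modulus of the pipe [Pa]
Q0  = 0.5              # initial flow rate [m^3/s]

# Equation (3): 1/c^2 = 1/c_s^2 + rho * beta_A, with the assumed
# thin-walled cross-sectional compressibility beta_A = D / (E * e).
beta_A = D / (E * e)
c = 1.0 / math.sqrt(1.0 / c_s**2 + rho * beta_A)

# Equation (4): water hammer amplitude P = rho * c * u0, where u0 is
# the mean velocity before closure, Q0 / (pi * r^2).
u0 = Q0 / (math.pi * (D / 2)**2)
P = rho * c * u0

print(round(c))   # ~1037 m/s, matching the stated effective wave speed
print(P)          # ~1.04e6 Pa pulse amplitude
```

The computed wave speed reproduces the 1037 m/s quoted in the text, and the resulting amplitude is of the same order as the maximum pressures reported in Table 1.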

V. RESULTS AND DISCUSSION

Using COMSOL Multiphysics modelling and simulation software, three different kinds of studies were carried out. Study 1 is time dependent and is based on the fluid flow and its interaction with the pipe, which yields the pressure distribution along the pipe line together with the velocity variation. Study 2 is based not only on the excess pressure measurement point but also on the valve point. Study 3 corresponds to the pressure profile in the pipe line at different times. As it is a time dependent study, the time range is edited in the study settings section from range(0, 0.005, 0.75) to range(0, 1e-3, 0.24). From Study 1, right-click and select Show Default Solver; then expand the solver window, click the Time Stepping section, mark the Maximum Step check box, and in the associated edit field type dt. Right-clicking Study 1 and computing then gives the pressure along the pipe line at t = 0.24 s, along with the velocity variation. Results are analysed by transforming the pipe line into different geometries. From the analysis of the results we conclude that when more pipe lines are interconnected there is a greater chance for a water transient to take place easily and cause damage to the system.

VI. MESHING

Meshing provides the required outputs anywhere on the proposed structure for the given input. Numerical ripples are visible on the point graphs of the excess pressure history at the pressure sensor and of the water hammer amplitude. As the closure of the valve is instantaneous, the pressure profile has a step-like nature, which is difficult to resolve numerically. The ripples can be reduced by increasing the mesh resolution parameter N; the number of mesh points selected in this model is N = 400. In this model, meshing is done for the edges of the pipe, where the maximum element size is defined as L/N m (L = 20 m and N = 400) and the minimum element size is 1 mm.

VII. SIMULATION

In this study, the simulations are performed using the fundamental equation of water hammer theory that relates pressure changes, ∆p, to velocity changes, ∆v, according to equation (2). The simulation comprises the application of different initial inlet pressures for different pipe network sections. Pressure measurement points are changed along the pipe length L and computed for the time interval from T = 0 to T = 0.24 s. Both the velocity and the pressure are measured over this interval. Other parameters, such as the water hammer amplitude and the maximum and minimum pressures for the two different geometries, along with the velocity variations, are listed in Table 1.

Table 1. Pressure distribution and velocity variation values for single pipe line & three pipe lines geometry.



Parameters                                                  | Single pipe                 | Three pipes
Min. pressure at T = 0.07 s                                 | -9.579×10⁵ Pa               | -1.577×10⁵ Pa
Max. pressure at T = 0.07 s                                 | 4.047×10⁵ Pa                | 1.1617×10⁶ Pa
Velocity variation range at T = 0.23 s                      | 4.257×10⁻⁴ to 1.1632 m/s    | 0.2672 to 1.2742 m/s
Excess pressure distribution along the pipe for t = 0.24 s  | 1.35×10⁶ Pa                 | -0.85×10⁶ Pa

VIII. CONCLUSION

A flow channel was designed in a pipe network, and its reaction to closing the valve was analysed using COMSOL Multiphysics Version 4.3. Simulation of the proposed model was performed by changing the initial flow rates along with the pipe networks, to explore the variation of fluid properties such as pressure distribution and velocity with respect to time. When the inlet mean velocity is increased, the magnitude of the water hammer amplitude remains the same, but the chance of a water transient is higher, which can result in easier breakdown of a pipe section. When multiple pipe lines were connected, the maximum pressure distribution and velocity variation were much lower, even though the water hammer amplitude remained the same across the compared cases. A positive pressure difference exists when multiple pipes are connected, whereas a negative pressure difference exists for the single pipe line geometry, which indicates that a network of pipe lines results in a smaller water transient effect. This study can be extended by observing the changes in the flow when the pipe line is inclined, and by using T- and L-shaped piping geometries. A further extension was made to micro piping systems by changing the dimensions of the geometry from meters to micrometers. This study helps in building micro piping network systems used in biomedical applications and the automobile industry.

IX. ACKNOWLEDGEMENTS

The authors would like to thank NPMASS for the establishment of the National MEMS Design Centre (NMDC) at Lakireddy Bali Reddy Engineering College. The authors would also like to thank the Director and the Management of the college for providing the necessary facilities to carry out this work.

REFERENCES
1. Avallone, E.A., T. Baumeister III, "Marks' Standard Handbook for Engineers," McGraw-Hill, 1987, 9th Edition, pp. 3-71.
2. Moody, F.J., "Introduction to Unsteady Thermofluid Mechanics," John Wiley and Sons, 1990, Chapter 9, p. 405.
3. A.S. Tijsseling, "Fluid-structure interaction in liquid-filled pipe systems: A review," Journal of Fluids and Structures, 1996, (10), pp. 109-146.
4. Algirdas Kaliatka, Eugenijus Uspuras, Mindaugas Vaisnoras, "Analysis of Water Hammer Phenomena in RBMK-1500 Reactor Main Circulation Circuit," International Conference on Nuclear Energy for New Europe 2006, Portorož, Slovenia, September 18-21, 2006.
5. M. Giot, H.M. Prasser, A. Dudlik, G. Ezsol, M. Habip, H. Lemonnier, I. Tiselj, F. Castrillo, W. Van Hove, R. Perezagua, and S. Potapov, "Two-phase flow water hammer transients and induced loads on materials and structures of nuclear power plants (WAHALoads)," FISA-2001 EU Research in Reactor Safety, Luxembourg, November 2001, pp. 12-15.
6. P. Griffith, "Screening Reactor Steam/Water Piping Systems for Water Hammer," report prepared for the U.S. Nuclear Regulatory Commission, NUREG/CR-6519, 1997.
7. M. Giot, J.M. Seynhaeve, "Two-Phase Flow Water Hammer Transients: towards the WAHA code," Proc. Int. Conf. Nuclear Energy for New Europe '03, Portorož, Slovenia, Sept. 8-11, 2003, Paper 202, 8 p.
8. D. Chungcharoen, "Genesis of Korotkoff sounds," Am. J. Physiol., 1964, 207, pp. 190-194.
9. J. Allen, T. Gehrke, J. O'Sullivan, S.T. King, A. Murray, "Characterization of the Korotkoff sounds using joint time-frequency analysis," Physiol. Meas., 2004, (25), pp. 107-117.
10. H.S. Chang, H. Nakagawa, "Hypothesis on the pathophysiology of syringomyelia based on simulation of cerebrospinal fluid dynamics," Journal of Neurology, Neurosurgery and Psychiatry, 2003, (74), pp. 344-347.
11. N.H. Fletcher, "Autonomous vibration of simple pressure-controlled valves in gas flows," J. Acoust. Soc. Am., 1993, 93 (4), pp. 2172-2180.
12. K. Ishizaka, M. Matsudaira, T. Kaneko, "Input acoustic-impedance measurement of the subglottal system," J. Acoust. Soc. Am., 1976, 60 (1), pp. 190-197.
13. G.C. Burnett, "Method and apparatus for voiced speech excitation function determination and non-acoustic assisted feature extraction," U.S. Patent 20020099541 A1, 2002.
14. N. Joukowsky, "Über den hydraulischen Stoss in Wasserleitungsröhren," Mémoires de l'Académie Impériale des Sciences de St.-Pétersbourg, Series 8, 9, 1900.
15. F. D'Souza, R. Oldenburger, "Dynamic response of fluid lines," ASME Journal of Basic Engineering, 1964, (86), pp. 589-598.
16. D.J. Wood, "A study of the response of coupled liquid flow-structural systems subjected to periodic disturbances," ASME Journal of Basic Engineering, 1968, (90), pp. 532-540.
17. J. Donea, Antonio Huerta, J.Ph. Ponthot, A. Rodríguez-Ferran, "Arbitrary Lagrangian-Eulerian Methods," Université de Liège, Liège, Belgium.
18. M.S. Ghidaoui, M. Zhao, D.A. McInnis, D.H. Axworthy, "A Review of Water Hammer Theory and Practice," Applied Mechanics Reviews, ASME, 2005.
19. A.S. Tijsseling, "Exact Solution of Linear Hyperbolic Four-Equation Systems in Axial Liquid-Pipe Vibration," Journal of Fluids and Structures, vol. 18, pp. 179-196, 2003.
20. Model library path: pipe_flow_module/verification_models/water_hammer_verification (http://www.comsol.co.in/showroom/gallery/12683/).

FIGURES AND TABLES

CAPTION FOR FIGURES
FIGURE 1. Pipe connected to control valve at the end with water inlet from reservoir.
FIGURE 2. Multiphysics modelling and simulation software - COMSOL.
FIGURE 3. Single pipe line with closing valve at the output.
FIGURE 4. Three pipe line intersection in a network.




FIGURE 5. Pressure distribution at T=0.07s for single pipe line geometry.
FIGURE 6. Pressure distribution at T=0.07s for three pipe line intersection geometry.
FIGURE 7. Velocity variation at T=0.23s for single pipe.
FIGURE 8. Velocity variation at T=0.23s for three pipe line intersection geometry.
FIGURE 9. Excess pressure history measured at the pressure sensor for single pipe line geometry.
FIGURE 10. Excess pressure history measured at the pressure sensor for three pipe line geometry.
FIGURE 11. Excess pressure at the valve (green line) & predicted water hammer amplitude (blue line) for single pipe line.
FIGURE 12. Excess pressure at the valve (green line) & predicted water hammer amplitude (blue line) for three pipe lines.
FIGURE 13. Excess pressure distribution along the pipe for t = 30 s for single pipeline geometry.
FIGURE 14. Excess pressure distribution along the pipe for t = 30 s for three pipeline geometry.

CAPTION FOR TABLE
Table 1. Pressure distribution and velocity variation values for single pipe line & three pipe lines geometry.

Figure 5: Pressure distribution at T=0.07s for single pipe line geometry.

Figure 6: Pressure distribution at T=0.07s for three pipe line intersection geometry.




Figure 7: Velocity variation at T=0.23s for single pipe line geometry

Figure 8: Velocity variation at T=0.23s for three pipe line intersection geometry.

Figure 9. Excess pressure history measured at the pressure sensor for single pipe line geometry.

Figure 10. Excess pressure history measured at the pressure sensor for three pipe line geometry.




Figure 11. Excess pressure at the valve (green line) & Predicted water hammer amplitude (Blue line) for single pipe.


Figure 12. Excess pressure at the valve (green line) & Predicted water hammer amplitude (Blue line) for three pipe lines.




Figure 13. Excess pressure distribution along the pipe for t= 30 s for single pipeline geometry.

Figure 14. Excess pressure distribution along the pipe for t= 30 s for three pipe line intersection geometry.




A Secured Based Information Sharing Scheme via Smartphones in DTN Routings

T. Mohan Krishna¹, V. Sucharita²
¹PG Scholar, ²M.Tech. Associate Professor
Department of Computer Science and Engineering,
Audisankara College of Engineering & Technology, Gudur, A.P., India
mohan.gkce@gmail.com

Abstract:

With the growing number of smartphone users, peer-to-peer content sharing is expected to occur much more often. Thus, new content sharing mechanisms should be developed, as traditional data delivery schemes are not efficient for content sharing due to the intermittent connectivity between smartphones. To achieve data delivery in such complex environments, researchers have proposed the use of epidemic routing or store-carry-forward protocols, in which a node stores a message and carries it until a forwarding opportunity arises through an encounter with another node. Earlier studies in this field focused on whether two nodes would encounter each other, and on the place and time of the encounter. In this paper, we propose discover-predict-deliver as an efficient content sharing scheme for delay-tolerant smartphone networks. Specifically, our approach employs a mobility learning algorithm to identify places indoors and outdoors, and a hidden Markov model with the Viterbi algorithm is used to predict an individual's future mobility information. Evaluation based on real traces indicates that with the proposed approach, 87 percent of contents can be correctly discovered and delivered within 2 hours when the content is available in only 30 percent of the nodes in the network. In order to decrease energy consumption we use asymmetric multi-core processors, and for the required efficient sensor scheduling we use POMDPs.

Keywords: Delay Tolerant Network, ad hoc networks, store-and-forward networks, peer-to-peer network

I. INTRODUCTION

The number of smartphone users has rapidly increased in recent years. Users can create different sorts of content easily using the friendly interfaces available on modern smartphones.
However, content sharing among smartphones is tedious, as it requires several actions, for example uploading to centralized servers, then searching for and downloading content. One simple route is to rely on an ad hoc method of distributed content sharing. Unfortunately, with the current ad hoc routing protocols, content is not delivered if a network partition exists between the peers when content is shared. Delay Tolerant Network (DTN) routing protocols achieve better performance than conventional ad hoc routing protocols. These protocols do not require a centralized server; the content is stored on the smartphones themselves. Smartphones have several network interfaces, such as Bluetooth and Wi-Fi, so ad hoc networks can easily be constructed with them. Connectivity among smartphones is likely to be intermittent because of the movement patterns of their carriers and signal transmission phenomena. A wide variety of store-carry-forward protocols have been proposed by researchers. Routing in delay-tolerant networking concerns itself with the ability to route data from

a source to a destination, which is an essential ability that all communication networks must have. In these challenging environments, commonly used ad hoc routing protocols fail to establish routes, because such protocols first try to establish a complete route and only then, once the route has been established, forward the actual data. However, when immediate end-to-end paths are difficult or impossible to establish, routing protocols must take a "store and then forward" approach, where data or a message is moved and stored incrementally throughout the network, hop by hop, so that it will finally arrive at its destination. A general technique used to maximize the likelihood of a message being effectively transferred is to duplicate many copies of the message, in the hope that one will succeed in reaching its destination. Our evaluation uses a DTN testbed of forty buses and simulations based on real traces. We concentrate on store-carry-forward networking scenarios, in which the nodes communicate using the DTN bundle architecture. Some smartphones in the network store content that they are willing to share with others. All smartphone users are willing to cooperate and give a restricted amount of their resources, such as bandwidth, storage, and processing power, to assist others. Our goal is to permit users to issue queries for content held on other smartphones anywhere in the network and to assess the chances of obtaining the information needed. We




assume that smartphones can perform searches on their local storage, and that we find the relevant results for a given query to facilitate searching. Delay Tolerant Network (DTN) routing protocols attain better performance than conventional ad hoc routing protocols. Among the proposed DTN routing protocols, Epidemic routing is a basic DTN routing solution. In Epidemic routing by Vahdat et al. [2], messages are forwarded to each encountered node that does not have a replica of the same message. This solution exhibits the best performance in terms of delivery rate and latency, but it consumes abundant resources, such as storage, bandwidth, and energy. This paper focuses mainly on the efficiency of content discovery and its delivery to the targeted destination. Here we propose recommendation-based discover-predict-deliver (DPD) as an efficient and effective content sharing scheme for smartphone-based DTNs. DPD assumes that smartphones can connect when they are in close proximity, that is, where the smartphone users reside for a longer period. Earlier studies have shown that smartphone users stay indoors for long periods, where GPS cannot be accessed. The objective of our work is to find solutions to the problems in content sharing, to minimize energy consumption using a sensor scheduling scheme based on POMDPs, and to improve performance by using asymmetric multi-core processors.
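The HMM/Viterbi mobility-prediction step mentioned in the abstract can be sketched as follows. The place names, transition and emission probabilities, and observed beacon labels here are illustrative assumptions, not values from this paper.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely sequence of hidden places given observations."""
    # V[t][s] = (best probability of being in state s at step t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    best = max(V[-1], key=lambda s: V[-1][s][0])
    path = [best]
    for t in range(len(obs) - 1, 0, -1):
        best = V[t][best][1]
        path.append(best)
    return path[::-1]

# Illustrative two-place model: infer which place the user is in
# from which Wi-Fi access point was overheard.
states = ["Home", "Office"]
start_p = {"Home": 0.6, "Office": 0.4}
trans_p = {"Home": {"Home": 0.7, "Office": 0.3},
           "Office": {"Home": 0.4, "Office": 0.6}}
emit_p = {"Home": {"home_ap": 0.9, "office_ap": 0.1},
          "Office": {"home_ap": 0.2, "office_ap": 0.8}}
print(viterbi(["home_ap", "office_ap", "office_ap"],
              states, start_p, trans_p, emit_p))
# → ['Home', 'Office', 'Office']
```

In a DPD-style scheme, the decoded place sequence stands in for future contact opportunities: the carrier forwards content toward nodes whose predicted places overlap the destination's.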


II. LITERATURE SURVEY

Fig. 1: Finding meaningful places and their simulation area.

A. Optimal Probabilistic Forwarding Protocol in DTN: The optimal probabilistic forwarding (OPF) protocol aims to maximize the expected delivery rate while satisfying a given constraint on the number of forwardings per message. It uses an optimal probabilistic forwarding metric derived by modeling each forwarding as an optimal stopping rule problem, and presents several extensions that allow OPF to use only partial routing information and to work with other probabilistic forwarding schemes such as ticket-based forwarding. OPF and several other protocols were implemented in trace-driven simulations. Simulation results show that the delivery rate of OPF is only 5 percent below Epidemic routing and 20 percent greater than state-of-the-art delegation forwarding, while generating 5 percent more copies and 5 percent longer delay.

B. DTN Routing as a Resource Allocation Problem: Many DTN routing protocols use a range of mechanisms, including discovering the meeting possibilities among nodes, packet replication, and network coding. The primary focus of these mechanisms is to increase the probability of finding a path with limited information, so these approaches have only an incidental effect on such routing metrics as maximum or average delivery latency. RAPID is an intentional DTN routing protocol that can optimize a specific routing metric, such as worst-case delivery latency or the fraction of packets delivered within a deadline. The key insight is to treat DTN routing as a resource allocation problem that translates the routing metric into per-packet utilities that determine how packets should be replicated in the system.

C. Resource Constraints: RAPID (Resource Allocation Protocol for Intentional DTN) routing is a protocol designed to explicitly optimize an administrator-specified routing metric. RAPID "routes" a packet by opportunistically replicating it until a copy reaches the destination. RAPID translates the routing metric into per-packet utilities that determine, at every transfer opportunity, whether the utility of replicating a packet justifies the resources used. RAPID loosely tracks network resources through a control plane to assimilate a local view of global network state. To this end, RAPID uses an in-band control channel to exchange network state information among nodes, using a fraction of the available bandwidth.

D. Epidemic routing: Epidemic routing is a simple solution for DTNs, in which messages are forwarded to every encountered node. Thus, Epidemic routing achieves the best attainable delivery rate and the lowest attainable latency, but it requires vast bandwidth and storage resources. A deterministic wave model for the progress of Epidemic routing has been investigated. Several approaches have been proposed to reduce the overhead and improve the performance of Epidemic routing, examining a number of different ways to suppress redundant transmissions. Gossiping strategies have been proposed, in which a node chooses a random number between zero and one, and the message is forwarded to another


node if the chosen number is higher than a user-predefined probability. These works belong to resource-aware routing protocols. Other protocols are classified into two groups: opportunity-based schemes and prediction-based schemes.
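The forwarding behaviour just described, Epidemic exchange of missing messages, optionally suppressed by a random-number gossip test, might be sketched as below; the function name and buffer layout are illustrative assumptions, not an API from any cited work.

```python
import random

def epidemic_exchange(a, b, gossip_p=None):
    """On an encounter, copy to the peer every message it lacks.
    If gossip_p is given, a message is forwarded only when a random
    draw in [0, 1) is higher than that user-predefined probability."""
    for src, dst in ((a, b), (b, a)):
        for msg_id, msg in list(src.items()):
            if msg_id in dst:
                continue  # peer already holds a replica
            if gossip_p is None or random.random() > gossip_p:
                dst[msg_id] = msg

# Plain Epidemic routing: after the encounter both buffers hold both messages.
n1 = {"m1": "sensor report"}
n2 = {"m2": "road alert"}
epidemic_exchange(n1, n2)
print(sorted(n1), sorted(n2))  # both contain 'm1' and 'm2'
```

Raising `gossip_p` trades delivery probability for reduced bandwidth and storage use, which is exactly the resource-awareness the surveyed suppression schemes aim for.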

III. RELATED WORK

A delay tolerant network (DTN) is a mobile network in which an end-to-end source-destination path may not exist between a pair of nodes, and messages are forwarded in a store-carry-forward routing paradigm [6]. Vahdat et al. [2] proposed Epidemic routing as the fundamental DTN routing protocol, in which a node forwards a message to every encountered node that does not have a replica of the message. This solution shows the best performance in terms of delivery rate and latency, but wastes a large amount of bandwidth. An alternative solution was resource based [3], [4], where systems utilize "data mules" as message carriers that directly deliver the message to the destination. Next, opportunity-based routing protocols make use of the history of encounters to convey a message to the destination [5], [6], [7]. Prediction-based schemes [8], [9] make use of complicated utility functions to decide whether to forward a message to a node.

Yuan et al. precisely predicted encounter opportunities by means of the times of encounters. Pitkanen et al. proposed a state-of-the-art content sharing scheme in DTNs; they focused mainly on restraining search query propagation and proposed several query processing methods. Chang et al. proposed a process for searching for a node or an object in a large network while restraining search query propagation: a class of controlled flooding search strategies in which query/search packets are broadcast and propagated in the network until a preset TTL (time-to-live) value carried in the packet expires. The objective of our work is to address the content sharing problem in smartphone-based DTNs while minimizing energy consumption using sensor scheduling schemes. In general, the works above are built around contact histories; however, when two or more smartphone users are in the same spot, their devices do not always communicate or recognize contact opportunities. Consequently, the contact history gives less exact information on future contact opportunities than mobility information does. This can be seen in the analyses discussed below. Content sharing in DTNs involves the following problems:

A. Content sharing

In this section we examine the problem of content sharing in delay tolerant networks and describe alternative solutions. As specified in the introduction, we focus on mobile opportunistic networking scenarios where the nodes communicate using the DTN bundle protocol. A few devices in the network store content that they are ready to share with others. All nodes are willing to assist and provide a restricted amount of their local system resources (bandwidth, storage, and processing power) to aid other nodes. Our objective is to permit users to issue queries for content that is stored on other nodes anywhere in the network, and to consider the possibility of such a node acquiring the required information. To ease searching, we suppose that nodes are capable of carrying out searches on their local storage and uncovering the appropriate results for a given query. The content sharing process is divided into two phases: the content discovery phase and the content delivery phase. In the content discovery phase, the user requests content in a content sharing application. The application first searches for the content in its own database; if it is not found, the application creates a query that is forwarded based on the user's request. When the content is found, the content delivery phase is initiated, and the content is forwarded to the query originator.

Figure: Processing of incoming query.

B. Content discovery

In content discovery, most systems focus on how to formulate queries, which depends on assumptions about the format or layout of the content to be discovered. A common protocol should support various types of queries and content, but we abstract from the actual matching process in order to focus on discovering content in the network. The simplest strategy to discover and deliver the contents is Epidemic routing [2]. However, due to resource limitations, Epidemic routing is often too expensive, so we have to consider methods that limit the system resources used for both content discovery and delivery. Preferably, a query should only be forwarded to neighbours that hold matching contents or that are on the path to other nodes holding matching content. Different nodes should return non-overlapping responses to the requester. As total knowledge or active coordination is not an alternative in our scenario, each


node can only make autonomous forwarding decisions. These autonomous forwarding decisions should attain a good tradeoff between discovery efficiency and the resources required. Analogous limitations apply to content delivery. A few methods proposed by Pitkanen et al. may be used for restraining the distribution of queries. Additionally, we study two alternatives for restraining the distribution of queries: a query lifetime limit and a query distance limit. We consider the controlled replication-based routing scheme [9] as well as a single-copy scheme. The single-copy scheme turns both query lifetime and distance limits into a random walk, and it is not effective when content-carrier nodes (i.e., destinations) are not well known. In contrast, the controlled replication-based scheme dispenses a set of message replicas and avoids the excessive spread of messages.

C. Content delivery

When query-matching content is discovered, the content-carrying node should transmit only a subset of the results. This constraint is needed to limit the amount of resources used both locally and globally for sending and storing the responses, and to eliminate potential duplicates. The query originator sets a limit for both the number of replicas and the amount of content that should be returned. When nodes need to forward a query message, the limits incorporated in the query message are used to make the forwarding decision. If the amount of matching content exceeds the response limit, the node has to select which responses to forward.

D. Mobility Prediction

Numerous studies have addressed another aspect of content sharing: mobility learning and prediction. BeaconPrint discovers meaningful places by continuously performing scans over a time period. PlaceSense senses the arrival at and departure from a place by utilizing pervasive RF beacons; the system uses a radio beacon's response rates to attain robust beacon detection.
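The query lifetime limit, the query distance (hop) limit, and controlled replication in the style of Spray and Wait can be sketched as a per-node forwarding decision. The field names and the replica budget below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Query:
    query_id: str
    created_at: float   # epoch seconds when the originator issued the query
    lifetime: float     # query lifetime limit, in seconds
    max_hops: int       # query distance limit, in hops
    hops: int = 0
    replicas: int = 8   # replica budget for controlled replication

def should_forward(q: Query, now: float) -> bool:
    """A query is forwarded only while all three limits still hold."""
    if now - q.created_at > q.lifetime:   # query lifetime limit
        return False
    if q.hops >= q.max_hops:              # query distance limit
        return False
    return q.replicas > 1                 # with one replica left, the node keeps it
                                          # (the "wait" phase of Spray and Wait)

def forward(q: Query) -> Query:
    """Binary spraying: hand half of the remaining replica budget to the peer."""
    handed = Query(q.query_id, q.created_at, q.lifetime, q.max_hops,
                   hops=q.hops + 1, replicas=q.replicas // 2)
    q.replicas -= handed.replicas
    return handed
```

With `replicas=1` this degenerates to the single-copy random walk mentioned above; larger budgets spread the query faster while still bounding the total number of copies.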
EnTracked is a position-tracking system for GPS-enabled devices; it is configurable to realize different tradeoffs between energy consumption and robustness. Mobility prediction has been extensively studied in and out of the delay-tolerant networking area. Markov-based schemes cast the problem as a hidden Markov or semi-Markov model and make probabilistic predictions of human mobility. In contrast, neural-network-based schemes try to match the observed user behaviour with earlier observed behaviour and estimate the future based on the observed patterns. Markov-based schemes are suitable for resource-restricted devices, like smartphones, owing to their low computation overhead and modest storage requirements. In our work, we develop a mobility learning and prediction method built to offer coarse-grained mobility information with a low computation overhead. When the complexity of the mobility learning and


prediction scheme can be tolerated, the schemes specified in the literature can be used to offer fine-grained mobility information.
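As a concrete illustration of the low-overhead Markov approach described above, a first-order transition table over meaningful places fits easily in a phone's memory. This is a generic sketch, not the authors' algorithm.

```python
from collections import Counter, defaultdict

class MarkovMobilityPredictor:
    """First-order Markov next-place predictor (low compute and storage cost)."""
    def __init__(self):
        self.transitions = defaultdict(Counter)  # place -> Counter of next places

    def learn(self, trajectory):
        """Update transition counts from a sequence of visited meaningful places."""
        for cur, nxt in zip(trajectory, trajectory[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, current):
        """Most likely next place, or None if the place was never observed."""
        nexts = self.transitions.get(current)
        return nexts.most_common(1)[0][0] if nexts else None

    def prob(self, current, nxt):
        """Empirical transition probability P(next | current)."""
        nexts = self.transitions.get(current)
        total = sum(nexts.values()) if nexts else 0
        return nexts[nxt] / total if total else 0.0
```

For example, after learning the sequence home, office, cafe, office, home, office, cafe, the predictor answers "cafe" for the current place "office" with probability 2/3.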

Problem in previous works

In previous works, the energy consumption is high, so the battery lifetime is reduced. By using asymmetric multicore processors and efficient sensor scheduling mechanisms, energy consumption can be reduced and the lifespan of the batteries increased.

IV. PROPOSED WORK

For content sharing we use the DPD technique, and alongside it we use asymmetric multicore processors for performance improvement in sensing. The figure below shows the daily energy utilization profile, calculated using the collected stationary and movement time from seven different users over four weeks. The analysis does not include the energy utilization of content exchange, as that mainly depends on the volume and communication pace of the nodes. The typical energy consumptions of GPS, Wi-Fi, and the accelerometer differ. The accelerometer has the maximum energy consumption, as it is used continuously over 24 hours. Wi-Fi energy utilization is incurred by scanning neighbouring APs for place recognition. GPS shows a large variance in energy consumption, as it may not be available in all places. In this paper we examine the issue of tracking an object moving randomly through a network of wireless sensors. Our goal is to devise techniques for scheduling the sensors to improve the tradeoff between tracking performance and energy consumption. We cast the scheduling problem as a Partially Observable Markov Decision Process (POMDP), where the control actions correspond to the set of sensors to activate at each time step. Using a bottom-up methodology, we consider various sensing, motion, and cost models with increasing levels of difficulty. To minimize energy consumption we use sensor scheduling mechanisms [10]. Sensor systems have a wide-ranging diversity of prospective and important applications.
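A full POMDP solver is beyond a short sketch, but the tradeoff the scheduling problem targets can be illustrated with a myopic greedy policy: at each step, activate the cheapest set of sensors that meets a tracking-quality target. The energy and detection figures below are invented for illustration, not measured values.

```python
SENSORS = {  # name: (energy cost per reading in mJ, detection probability) - illustrative
    "accelerometer": (5.0, 0.55),
    "wifi":          (60.0, 0.80),
    "gps":           (175.0, 0.95),
}

def schedule(target_detection: float):
    """Greedily add sensors, cheapest first, until the combined detection
    probability (assuming independent failures) reaches the target."""
    chosen, miss = [], 1.0
    for name, (cost, p_detect) in sorted(SENSORS.items(), key=lambda kv: kv[1][0]):
        if 1.0 - miss >= target_detection:
            break
        chosen.append(name)
        miss *= (1.0 - p_detect)  # probability that every chosen sensor misses
    energy = sum(SENSORS[n][0] for n in chosen)
    return chosen, 1.0 - miss, energy
```

With a 0.90 target, the policy activates only the accelerometer and Wi-Fi and leaves the expensive GPS off, which is exactly the kind of performance-versus-energy tradeoff the POMDP formulation optimizes over time rather than per step.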
However, several questions must be resolved before sensor network frameworks can operate efficiently in real applications. Energy saving is one fundamental issue for sensor networks, as most sensors are equipped with non-rechargeable batteries that have a limited lifetime. To enhance the lifetime of a sensor network, one vital approach is to carefully schedule the sensors' work-sleep cycles (or duty cycles). In addition, in cluster-based systems, cluster heads are usually selected in a way that minimizes the aggregate energy utilization, and the role may rotate among the sensors to balance energy utilization. As a rule, these energy-efficient scheduling mechanisms (also called topology configuration mechanisms) are required to accomplish


certain application requirements while saving energy, in sensor networks whose design requirements differ from those of conventional wireless networks. Different mechanisms may make different assumptions about their sensors, including the detection model, sensing area, transmission range, failure model, time synchronization, and the ability to obtain location and distance information.


Figure: Mean energy consumption in a day

V. PERFORMANCE EVALUATION

A. Learning Accuracy

Learning accuracy shows how efficiently and correctly the places were identified. The accuracy of place learning influences the estimation of the encounter opportunity between two nodes. For example, if two different places are identified as identical, we may incorrectly estimate that two nodes will encounter each other when they in fact visit two different places. Also, correct computation depends on the geographical location information of the nodes.

Fig 3: Learning Accuracy

B. Discovery Efficiency

The discovery ratio is the ratio of discovered contents to the generated queries within a given duration. DPD's discovery performance is subject to the two forwarding limits. In Epidemic, queries are forwarded to every node. In hops-10 and hops-5, a query message is forwarded until its hop count reaches 10 and 5, respectively. When query-matching content is available only on a few nodes, the discovery methods show a low discovery rate. With an increasing query lifetime, both DPD and Epidemic show a high discovery ratio because, with a longer duration, each query is forwarded to more nodes.

Fig 4: Discovery Efficiency

The influence of the query lifetime on hop-based discovery methods is not significant. These observations derive from the limitation on the number of forwards.

C. Prediction Accuracy

Mobility prediction is a key factor in the estimation of the utility function. Here, we evaluate our prediction method according to trajectory deviation, prediction duration, and learning period, as illustrated. Trajectory deviation indicates the irregularity of a user's mobility. For this evaluation, we modify the existing mobility information with noise data: 10, 20, and 30 percent of the meaningful places are replaced with randomly chosen locations for trajectory deviations of 0.1, 0.2, and 0.3, respectively. As the trajectory deviation increases, the prediction accuracy decreases. Prediction accuracy is computed as the ratio of correctly predicted locations to the total predicted locations.

Fig 5: Prediction Accuracy

D. Sharing Cost and Latency

Finally, we evaluate the protocols in terms of latency and cost, as shown in Fig. 3f and Fig. 4. E-E uses Epidemic routing in both the discovery and delivery phases. E-S&W uses Epidemic routing for content discovery and Spray and Wait for content delivery. The sharing latency is the sum of the discovery latency and the delivery latency. E-E shows the


lowest latency, and both DPD and E-S&W show the highest latency. DPD exhibits such results due to the high latency of its discovery phase. However, the delivery latency of DPD is much smaller than that of E-S&W and is close to that of E-E. E-E shows the highest overhead; latency and overhead are tradeoffs. In summary, DPD achieves good efficiency in the delivery phase, whereas the efficiency of the discovery phase can be improved. Content header caching on all nodes may be a good solution, and this issue will be addressed in future work.
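The evaluation metrics above (discovery ratio, discovery latency, delivery latency, and their sum, the sharing latency) can be computed from per-query experiment logs along these lines. The record format is hypothetical, and latencies are averaged over delivered queries so that the sharing latency decomposes as defined above.

```python
from statistics import mean

# Hypothetical per-query records: (issued_at, discovered_at, delivered_at) in
# seconds, with None when a phase never completed within the experiment window.
records = [
    (0.0,  40.0, 95.0),
    (10.0, None, None),   # query never matched any content
    (20.0, 70.0, 180.0),
    (30.0, 55.0, None),   # discovered, but the response was never delivered
]

discovered = [r for r in records if r[1] is not None]
delivered = [r for r in records if r[2] is not None]

discovery_ratio = len(discovered) / len(records)
discovery_latency = mean(d - i for i, d, _ in delivered)   # issue -> discovery
delivery_latency = mean(v - d for _, d, v in delivered)    # discovery -> delivery
sharing_latency = discovery_latency + delivery_latency     # sum, as defined above
```

On this toy log the discovery ratio is 0.75 and the sharing latency is 45 + 82.5 = 127.5 seconds.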


Fig 4: Sharing Cost and Latency

CONCLUSION

In this paper, we proposed an efficient content sharing mechanism in smartphone-based DTNs. We tried to utilize the benefits of today's smartphones (i.e., the availability of various localization and communication technologies) and designed the protocol accordingly. In designing a content sharing algorithm, we focused on two points: 1) people move around meaningful places, and 2) the mobility of people is predictable. Based on this proposition, we developed a mobility learning and prediction algorithm to compute the utility function. We learned that contents have geographical and temporal validity, and we proposed a scheme that considers these characteristics of content. For instance, distributing queries for content in an area twenty miles from the location of the content searcher has only a 0.3 percent probability of getting the content while generating twenty percent additional transmission cost; the time limitation on query distribution reduces transmission cost. Most important, the proposed protocol properly discovers and delivers 87 percent of contents within two hours when the contents are available in only 30 percent of the nodes in the network. Energy consumption is reduced by using sensor scheduling, and further work has to be done on user privacy.

REFERENCES

[1] T3I Group LLC, http://www.telecomweb.com, 2010.
[2] A. Vahdat and D. Becker, "Epidemic Routing for Partially Connected Ad Hoc Networks," technical report, Dept. of Computer Science, Duke Univ., Sept. 2000.
[3] A. Balasubramanian, B.N. Levine, and A. Venkataramani, "DTN Routing as a Resource Allocation Problem," Proc. ACM SIGCOMM, pp. 373-384, 2007.
[4] R.C. Shah, S. Roy, S. Jain, and W. Brunette, "Data Mules: Modelling a Three-Tier Architecture for Sparse Sensor Networks," Elsevier Ad Hoc Networks J., vol. 1, pp. 215-233, Sept. 2003.
[5] A. Lindgren, A. Doria, and O. Schelen, "Probabilistic Routing in Intermittently Connected Networks," SIGMOBILE Mobile Computing Comm. Rev., vol. 7, no. 3, pp. 19-20, 2003.
[6] C. Liu and J. Wu, "An Optimal Probabilistic Forwarding Protocol in Delay Tolerant Networks," Proc. ACM MobiHoc, pp. 1-4, 2009.
[7] J. Wu, M. Lu, and F. Li, "Utility-Based Opportunistic Routing in Multi-Hop Wireless Networks," Proc. 28th Int'l Conf. Distributed Computing Systems (ICDCS '08), pp. 470-477, 2008.
[8] T. Spyropoulos, K. Psounis, and C.S. Raghavendra, "Spray and Wait: An Efficient Routing Scheme for Intermittently Connected Mobile Networks," Proc. ACM SIGCOMM Workshop Delay-Tolerant Networking (WDTN '05), pp. 252-259, 2005.
[9] T. Spyropoulos, K. Psounis, and C.S. Raghavendra, "Efficient Routing in Intermittently Connected Mobile Networks: The Single-Copy Case," IEEE/ACM Trans. Networking, vol. 16, no. 1, pp. 63-76, Feb. 2008.
[10] T. Spyropoulos, K. Psounis, and C.S. Raghavendra, "Efficient Routing in Intermittently Connected Mobile Networks: The Multiple-Copy Case," IEEE/ACM Trans. Networking, vol. 16, pp. 77-90, Feb. 2008.
[11] I. Cardei, C. Liu, J. Wu, and Q. Yuan, "DTN Routing with Probabilistic Trajectory Prediction," Proc. Third Int'l Conf. Wireless Algorithms, Systems, and Applications (WASA '08), pp. 40-51, 2008.
[12] Q. Yuan, I. Cardei, and J. Wu, "Predict and Relay: An Efficient Routing in Disruption-Tolerant Networks," Proc. 10th ACM MobiHoc, pp. 95-104, 2009.
[13] E.M. Daly and M. Haahr, "Social Network Analysis for Routing in Disconnected Delay-Tolerant MANETs," Proc. Eighth ACM MobiHoc, pp. 32-40, 2007.
[14] N.B. Chang and M. Liu, "Controlled Flooding Search in a Large Network," IEEE/ACM Trans. Networking, vol. 15, no. 2, pp. 436-449, Apr. 2007.
[15] C. Avin and C. Brito, "Efficient and Robust Query Processing in Dynamic Environments Using Random Walk Techniques," Proc. Third Int'l Symp. Information Processing in Sensor Networks (IPSN '04), pp. 277-286, 2004.
[16] M. Pitkanen, T. Karkkainen, J. Greifenberg, and J. Ott, "Searching for Content in Mobile DTNs," Proc. IEEE Int'l Conf. Pervasive Computing and Comm. (PERCOM '09), pp. 1-10, 2009.


STORAGE PRIVACY PROTECTION AGAINST DATA LEAKAGE THREATS IN CLOUD COMPUTING
K ABHINANDAN REDDY
M.Tech 2nd Year, CSE
abhinandan232@gmail.com

Abstract - Using cloud storage, users can remotely store their data and enjoy on-demand, high-quality applications and services from a shared pool of configurable computing resources, without the burden of local data storage and maintenance. However, the fact that users no longer have physical possession of the outsourced data makes data integrity protection in cloud computing a formidable task, especially for users with constrained computing resources. Moreover, users should be able to use the cloud storage as if it were local, without worrying about the need to verify its integrity. Privacy-preserving public auditability for cloud storage is therefore of vital importance, so that users can turn to a third-party auditor (TPA) to check the integrity of outsourced data and be worry-free. To securely introduce an effective TPA, the auditing process should bring no new security challenges towards user data privacy and no additional online burden to the user. This paper proposes a secure cloud storage system supporting privacy-preserving public auditing. We further extend the TPA to perform audits for multiple users simultaneously and efficiently.

Index terms - Cloud storage, privacy protection, public auditability, cloud computing.

I. INTRODUCTION

Cloud computing promises lower costs, rapid scaling, easier maintenance, and service availability anywhere, anytime; a key challenge is how to ensure and build confidence that the cloud can handle user data securely. Cloud computing is transforming the very nature of how businesses use information technology. From the user's perspective, including both individuals and IT enterprises, storing data remotely in the cloud in a flexible, on-demand manner brings appealing benefits: relief from the burden of storage management, universal data access independent of geographical location, and avoidance of capital expenditure on hardware, software, and personnel maintenance, etc.

While cloud computing makes these advantages more appealing than ever, it also brings new and challenging security threats towards the user's outsourced data. Data outsourcing effectively relinquishes the user's ultimate control over the fate of their data. As a result, the correctness of


the data in the cloud is being put at risk for the following reasons.

i) Although the infrastructures under the cloud are much more powerful and reliable than personal computing devices, they still face a broad range of both internal and external threats to data integrity.

ii) There exist various motivations for a cloud service provider to behave unfaithfully towards the cloud users regarding the status of their outsourced data.

This problem, if not properly addressed, may impede the successful deployment of the cloud architecture. As users no longer physically possess the storage of their data, traditional cryptographic primitives for data security protection cannot be directly adopted. Considering the large size of the outsourced data and the user's constrained resource capability, the task of auditing data correctness in a cloud environment can be formidable and expensive for the cloud users. Moreover, the overhead of using cloud storage should be minimized as much as possible, so that the user does not need to perform too many operations to use the data.

To fully ensure the data integrity and save the cloud user's computation resources as well as online burden, it is of critical importance to enable a public auditing service for cloud data storage, so that users may resort to an independent third-party auditor (TPA) to audit the outsourced data when needed. In addition to helping users evaluate the risk of their subscribed cloud data services, the audit result from the TPA would also be beneficial for the cloud service providers in improving their cloud-based service platform.

Recently, the notion of public auditability has been proposed in the context of ensuring remotely stored data integrity under different system and security models. Public auditability allows an external party, in addition to the user himself, to verify the correctness of remotely stored data. From the perspective of protecting data privacy, the users, who own the data and rely on the TPA just for the storage security of their data, do not want this auditing process to introduce new vulnerabilities of unauthorized information leakage towards their data security. Exploiting data encryption before outsourcing is one way to mitigate this privacy concern, but it is only complementary to the privacy-preserving public auditing scheme proposed in this paper. Without a properly designed auditing protocol, encryption itself cannot prevent data from "flowing away" towards external parties during the auditing process. Therefore, how to enable a privacy-preserving third-party auditing protocol, independent of data encryption, is the problem we are going to tackle in this paper.

To address these problems, our work utilizes the technique of the public-key-based homomorphic linear authenticator (or HLA for short), which enables the TPA to perform the auditing without demanding the local copy of the data and thus drastically reduces the communication and computation overhead as compared to straightforward data auditing approaches. By integrating the HLA with random masking, our


protocol guarantees that the TPA cannot learn any knowledge about the data content stored in the cloud server during the efficient auditing process.

a) Design Goals

To enable privacy-preserving public auditing for cloud data storage, our protocol design should achieve the following security and performance guarantees.

1) Public auditability: to allow the TPA to verify the correctness of the cloud data on demand without retrieving a copy of the whole data or introducing additional online burden to the cloud users.
2) Storage correctness: to ensure that there exists no cheating cloud server that can pass the TPA's audit without indeed storing the user's data intact.
3) Privacy-preserving: to ensure that the TPA cannot derive the user's data content from information collected during the auditing process.
4) Batch auditing: to enable the TPA with secure and efficient auditing capability to cope with multiple auditing delegations from a possibly large number of different users simultaneously.
5) Lightweight: to allow the TPA to perform auditing with minimum communication and computation overhead.

II. PROPOSED SCHEMES

This section presents our public auditing scheme, which provides a complete outsourcing solution of data - not only the data itself, but also its integrity security. We then present our main scheme and show how to extend it to support batch auditing for the TPA upon delegations from multiple users. Finally, we discuss how to generalize our privacy-preserving public auditing scheme and its support for data dynamics.

a) Our Framework & Definitions

We follow a similar definition to previously proposed schemes in the context of remote data integrity checking and adapt the framework for our privacy-preserving public auditing system. A public auditing scheme consists of four algorithms (KeyGen, SigGen, GenProof, VerifyProof).

KeyGen: a key generation algorithm run by the user to set up the scheme.
SigGen: used by the user to generate verification metadata, which may consist of MACs, signatures, or other related information that will be used for auditing.
GenProof: run by the CS (cloud server) to generate a proof of data storage correctness, while VerifyProof is run by the TPA to audit the proof from the cloud server.

Running a public auditing system consists of two phases, Setup and Audit:

Setup: the user initializes the public and secret parameters of the system by executing KeyGen, and pre-processes the data file by using SigGen to generate verification metadata.

Audit: the TPA issues an audit message or challenge to the cloud server to make sure that


the cloud server has retained the data file properly at the time of the audit. The CS will derive a response message from a function of the stored data file and its verification metadata by executing GenProof. The TPA then verifies the response via VerifyProof.

Our framework assumes the TPA is stateless, which is a desirable property achieved by our proposed solution. It is easy to extend the framework above to capture a stateful auditing system, essentially by splitting the verification metadata into two parts, stored by the TPA and the cloud server respectively.

b) Basic Schemes

HLA-based solution. To effectively support public auditability without having to retrieve the data blocks themselves, the HLA technique can be used. HLAs, like MACs, are unforgeable verification metadata that authenticate the integrity of a data block; the difference is that HLAs can be aggregated.

c) Privacy-Preserving Public Auditing Scheme

To achieve privacy-preserving public auditing, we propose to uniquely integrate the homomorphic linear authenticator with the random masking technique. The correctness validation of the block-authenticator pairs can still be carried out in a new way, which will be shown shortly, even in the presence of the randomness. Our design makes use of a public-key-based HLA to equip the auditing protocol with public auditability. Moreover, we use the HLA based on the short signature scheme proposed by Boneh, Lynn, and Shacham (BLS).

d) BLS Scheme Details

Let G1, G2, and GT be multiplicative cyclic groups of prime order, and e: G1 × G2 → GT the BLS map as introduced above.

Setup Phase: the cloud user runs the KeyGen algorithm to generate the public and secret parameters. Then the user runs SigGen to compute an authenticator for each block; the name is chosen by the user uniformly at random.

Audit Phase: the TPA first retrieves the file tag. With respect to the mechanism in the setup phase, the TPA verifies the signature and quits by emitting FALSE if the verification fails. Otherwise, the TPA recovers the name. Now comes the core part of the auditing process. To generate the challenge message for the audit, the TPA picks a random subset of the set [1, n]; the TPA then sends the challenge message to the server. Upon receiving the challenge, the server runs GenProof to generate a response proof of data storage correctness.

b) Support for Batch Auditing

With the establishment of privacy-preserving public auditing, the TPA can concurrently handle multiple audits upon different users' delegations. Auditing these tasks individually can be tedious and very inefficient for the TPA; it is more advantageous for the TPA to batch these multiple tasks together and audit them at one time.
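The four-algorithm flow (KeyGen, SigGen, GenProof, VerifyProof) and the challenge-response Audit phase can be illustrated with a deliberately simplified sketch. Caveat: for brevity this toy uses a secret-key linear authenticator over Z_p, so verification needs the user's key; it is not the public, BLS-based construction described above, and only shows the aggregate challenge-response shape that HLA auditing relies on.

```python
import hashlib
import hmac
import secrets

P = (1 << 127) - 1  # a Mersenne prime used as the working field (illustrative choice)

def prf(key: bytes, i: int) -> int:
    """Pseudo-random function f_key(i), mapped into Z_p."""
    digest = hmac.new(key, i.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % P

def keygen():
    """KeyGen: secret (alpha, PRF key). The paper's BLS scheme would output a
    public/secret key pair instead."""
    return secrets.randbelow(P), secrets.token_bytes(32)

def siggen(sk, blocks):
    """SigGen: one authenticator per block, sigma_i = alpha*m_i + f(i) mod p."""
    alpha, k = sk
    return [(alpha * m + prf(k, i)) % P for i, m in enumerate(blocks)]

def genproof(blocks, sigmas, challenge):
    """GenProof (cloud server): aggregate the challenged blocks and authenticators
    under the challenge coefficients nu_i."""
    mu = sum(nu * blocks[i] for i, nu in challenge) % P
    sigma = sum(nu * sigmas[i] for i, nu in challenge) % P
    return mu, sigma

def verifyproof(sk, challenge, proof) -> bool:
    """VerifyProof (auditor): check sigma == alpha*mu + sum(nu_i * f(i)) mod p,
    which holds by linearity exactly when the challenged blocks are intact."""
    alpha, k = sk
    mu, sigma = proof
    return sigma == (alpha * mu + sum(nu * prf(k, i) for i, nu in challenge)) % P

# One audit round: challenge a random index subset with random coefficients.
blocks = [int.from_bytes(secrets.token_bytes(16), "big") for _ in range(8)]
sk = keygen()
sigmas = siggen(sk, blocks)
challenge = [(i, secrets.randbelow(P)) for i in (1, 4, 6)]
assert verifyproof(sk, challenge, genproof(blocks, sigmas, challenge))
# A server that corrupted a challenged block cannot pass (except with
# negligible probability over the random coefficients).
blocks[4] ^= 1
assert not verifyproof(sk, challenge, genproof(blocks, sigmas, challenge))
```

Batch auditing then amounts to verifying one equation aggregated from several (challenge, proof) pairs instead of K separate equations, which is the modification sketched in the next paragraph.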


Keeping this in mind, we slightly modify our single-user protocol and achieve the aggregation of K verification equations into a single one.

III. CONCLUSION

In this paper, we propose a privacy-preserving public auditing system for data storage security in cloud computing. We utilize the homomorphic linear authenticator and random masking to guarantee that the TPA does not learn any knowledge about the data content stored on the cloud server during an efficient auditing process. With a slight change to our protocol, the TPA can perform multiple audit sessions from different users for their outsourced data files. Extensive analysis shows that our schemes are provably secure and highly efficient.

REFERENCES

[1] C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Secure Cloud Storage," IEEE Trans. Computers, pp. 362-375, 2013.
[2] C. Erway, A. Kupcu, C. Papamanthou, and R. Tamassia, "Dynamic Provable Data Possession," in Proc. of CCS'09, 2009, pp. 213-222.
[3] M.A. Shah, R. Swaminathan, and M. Baker, "Privacy-Preserving Audit and Extraction of Digital Contents," Cryptology ePrint Archive, Report 2008/186, 2008.
[4] G. Ateniese, R. Burns, R. Curtmola, J. Herring, L. Kissner, Z. Peterson, and D. Song, "Provable Data Possession at Untrusted Stores," in Proc. of CCS'07, Alexandria, VA, October 2007, pp. 598-609.
[5] C. Wang, Q. Wang, K. Ren, and W. Lou, "Ensuring Data Storage Security in Cloud Computing," in Proc. of IWQoS'09, July 2009, pp. 1-9.
[6] R.C. Merkle, "Protocols for Public Key Cryptosystems," in Proc. of IEEE Symposium on Security and Privacy, Los Alamitos, CA, USA, 1980.
[7] S. Yu, C. Wang, K. Ren, and W. Lou, "Achieving Secure, Scalable, and Fine-Grained Access Control in Cloud Computing," in Proc. of IEEE INFOCOM'10, San Diego, CA, USA, March 2010.

AUTHORS

Mr. K ABHINANDAN REDDY received the B.Tech degree in computer science & engineering from Narayana Engineering College, Nellore, Jawaharlal Nehru Technological University Anantapur, in 2012, and the M.Tech degree in computer science & engineering from Audisankara Institute of Technology, Gudur, Jawaharlal Nehru Technological University Anantapur, in 2014. He has participated in national-level paper symposiums at different colleges.


Real Time Event Detection and Alert System Using Sensors
V Anil Kumar 1, V Sreenatha Sarma 2
1 M.Tech, Dept. of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India
2 Asst Professor, Dept. of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India

Abstract - Twitter, a popular microblogging service, has received considerable attention recently. We investigate the real-time interaction of events such as earthquakes and propose an algorithm to monitor tweets and detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. We then produce a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location. We regard each Twitter user as a sensor and apply Kalman filtering and particle filtering. The particle filter works better than the other compared methods in estimating the centers of earthquakes and the trajectories of typhoons. As an application, we construct an earthquake reporting system in Japan. Our system detects earthquakes promptly and sends e-mail and SMS alerts to registered users. Notification is delivered much faster than the announcements broadcast by the Japan Meteorological Agency (JMA).

I. INTRODUCTION

Twitter, a popular microblogging service, has received considerable attention recently. It is a web social network used by many people around the world to stay connected to their friends, family members, and co-workers through their computers and mobile phones. Twitter was founded on March 21, 2006, and is headquartered in San Francisco, California, USA. Twitter is classified as a micro-blogging service. Microblogging is a style of blogging that lets users send micromedia such as short pictures, text updates, or audio clips. Microblogging services other than Twitter include Tumblr, Plurk, Emote.in, Squeelr, Jaiku, identi.ca, and so on, each with its own characteristics. Twitter's microblogging service provides immediacy and portability. Microblogging is slowly moving into the mainstream: in the USA, for example, Obama microblogged from the campaign trail, to which BBC News linked.

Index terms - Tweets, social sensors, earthquake, hand shaking.

II. INVESTIGATION

An earthquake is a disastrous event in which many people lose their lives and property.

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

118

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

Detecting the event and bringing awareness about it is therefore very important to prevent a hazardous environment. Daily newspapers and TV broadcast channels pave the way for this, but these systems are slow for real-time occurrences because sensing, reporting, and publishing are time-consuming. Moreover, in today's busy world, people do not pay attention to the media all the time. Later, SMS and phone calls facilitated the reporting system, but they failed when people were unavailable in some situations, such as in the middle of the night or when a mobile phone's battery had run out. This system also encountered many failures caused by fakers: the mobile phone company was helpless to stop fake messages or to confirm the reported information. Hence, these calls and SMS did not solve the problem but made it worse. At the third stage, social networks came into play; they have large numbers of followers, as microblogging bloomed on top of them. These social networks proved to be both entertaining and useful. They helped to bridge the gap between friends from different communities and connect people all over the world. At the same time, facts and important messages also reached people through the updates, shares, and likes of their friends and followers; their likes give ratings to ads, blogs, or groups. Considering these facts, a reporting system for earthquakes was developed on Twitter, chosen for its popularity, with its users regarded as social sensors. These users post reports on an earthquake such as "I am experiencing an earthquake right now" or "I happened to feel the shaking of the earth". These tweets are analyzed based on features such as statistics [2] [3]: keywords and words are separated, the event is confirmed, and the tweet is sent to the positive class. At times there may be tweets such as "I read an article on the earthquake" or "The earthquake three days ago was frightening". Such tweets are analyzed, found not to meet the criteria, and sent to the negative class.

Fig. 1 Twitter User Map

To classify a tweet into the positive or the negative class, we use a support vector machine (SVM) [7], a widely used machine-learning algorithm. By preparing positive and negative examples as a training set, we can produce a model that classifies tweets automatically into positive and negative classes. After a tweet is confirmed to belong to the positive class, a notification is distributed to the registered users using particle filtering. Particle filtering depends on the rating for the location [6], and the rating depends on the likes delivered to the zone.
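The SVM-based classification just described can be sketched with a toy linear SVM trained by Pegasos-style sub-gradient descent on keyword and word-count features. The vocabulary, the training tweets, and the feature scaling below are illustrative assumptions; the actual system uses a full SVM implementation and richer features [7].

```python
def featurize(tweet, vocab):
    # Features in the spirit of the paper: keyword presence plus word count.
    words = tweet.lower().split()
    vec = [1.0 if w in words else 0.0 for w in vocab]
    vec.append(len(words) / 10.0)  # scaled word-count feature
    return vec

def train_linear_svm(data, dim, lam=0.01, epochs=300):
    # Pegasos-style sub-gradient descent on the hinge loss.
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            eta = 1.0 / (lam * t)
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1.0 - eta * lam) * wi for wi in w]  # regularization shrink
            if margin < 1.0:  # hinge-loss sub-gradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

VOCAB = ["earthquake", "shaking", "now", "read", "article"]
TRAIN = [  # +1 = first-hand event report, -1 = mere mention
    ("Earthquake right now", +1),
    ("I feel the shaking of the earth now", +1),
    ("I read an article on earthquake", -1),
    ("Earthquake three days ago", -1),
]
_data = [(featurize(t, VOCAB), y) for t, y in TRAIN]
W = train_linear_svm(_data, dim=len(VOCAB) + 1)

def classify(tweet):
    # Sign of the linear score decides the class.
    x = featurize(tweet, VOCAB)
    return +1 if sum(wi * xi for wi, xi in zip(W, x)) > 0 else -1
```

In practice one would use an off-the-shelf SVM library and far more training data; this sketch only shows the positive/negative training-set idea.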


The main disadvantage of the existing system is that there is no confirmation of messages before the warning is distributed, and the notification is delivered only to the registered users: many people who belong to the earthquake zone do not receive these messages because they did not register.

Fig. 2 Number of tweets related to earthquakes

III. PROPOSED SYSTEM

Fig. 3 System Architecture

In the proposed system for the detection of earthquakes through Twitter, the user need not register in advance to receive earthquake-related tweets. The earthquake is detected using a keyword-based topic search. The tweet search API crawls the tweets containing keywords such as earthquake, disaster, damage, and shaking. The tweet crawler sends the sentences containing these keywords to MeCab, where the keywords are stored and the sentence containing a keyword is passed on for linguistic analysis. A particle filter is a probabilistic approximation algorithm implementing a Bayesian filter, and a member of the family of sequential Monte Carlo methods. For location estimation, it maintains a probability distribution for the location estimate at time t, denoted as the belief Bel(x_t) = {x_t^(i), w_t^(i)}, i = 1, ..., n. Each x_t^(i) is a discrete hypothesis about the object location, and the w_t^(i) are non-negative weights, referred to as importance factors. The Sequential Importance Sampling (SIS) algorithm is a Monte Carlo method that forms the basis for SIS particle filters. It consists of recursive propagation of the weights and support points as each measurement is received sequentially.

Step 1 - Generation: Generate and weight a particle set, i.e., N discrete hypotheses placed evenly on the map: S_0 = {s_0^0, s_0^1, s_0^2, ..., s_0^(N-1)}.

Step 2 - Resampling:


Resample N particles from the particle set using the weights of the respective particles and allocate them on the map. (We allow resampling more than one copy of the same particle.)

Step 3 - Prediction: Predict the next state of the particle set from Newton's equation of motion.

Step 4: Find the new document vector coordinates in this reduced 2-dimensional space.

Step 5 - Weighting: Re-calculate the weight of each particle based on the received measurement.

Step 6 - Measurement: Calculate the current object location (x̄_t, ȳ_t) as the average of (x_t^(i), y_t^(i)) over all particles s_t^(i) ∈ S_t.

Step 7 - Iteration: Iterate Steps 3, 4, 5, and 6 until convergence.

IV. REPORTING SYSTEM

We developed an earthquake-reporting system using the event detection algorithm. Earthquake information is much more valuable if it is received in real time. Given some amount of advance warning, a person could turn off a stove or heater at home and then seek protection under a desk or table, if that person had several seconds' notice before the earthquake actually struck the area. It goes without saying that, for such a warning, earlier is better. The system provides advance announcements of the estimated seismic intensities and expected arrival times. According to surveys of social networking sites, Twitter has millions of users across the world, including famous personalities such as Barack Obama and film stars. Even entertainment media such as television and radio gather information through Twitter and spread it.

V. CONCLUSION

As discussed in this paper, the Twitter user is employed as a sensor, and we set the problem as the detection of an event based on sensory observations. Location estimation methods such as particle filtering are used to estimate the locations of events. As an application, we developed an earthquake reporting system. Microblogging helps us to spread news at faster rates to the other subscribers of a page; this also distinguishes it from other media such as blogs and collaborative bookmarks. The algorithm used in the proposed system helps us to spot the required keywords and to gather additional information from various places through the SIS algorithm, even when people are unaware of it. Finding the people of specific areas through particle filtering and sending alarms to them is a new conception of alerting people, which spreads the news at a faster rate, instead of informing the whole world and spreading the news at a slower rate.
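The SIS procedure enumerated in Section III (generation, resampling, prediction, weighting, measurement, iteration) can be sketched as a toy 2-D location estimator. The map size, the Gaussian observation model, and all parameters below are illustrative assumptions, not the system described in the paper, which weights particles by tweet-derived location ratings.

```python
import math
import random

def sis_estimate(observations, n=500, noise=5.0, seed=0):
    """Toy SIS particle filter estimating a 2-D event location."""
    rng = random.Random(seed)
    # Step 1 - Generation: N hypotheses spread over a 100x100 map, equal weight.
    parts = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(n)]
    for ox, oy in observations:
        # Step 5 - Weighting: score each hypothesis against the observation.
        weights = [math.exp(-((x - ox) ** 2 + (y - oy) ** 2) / (2 * noise ** 2))
                   for x, y in parts]
        if sum(weights) == 0:
            weights = [1.0] * n  # degenerate case: fall back to uniform
        # Step 2 - Resampling: draw particles in proportion to their weights;
        # the same particle may be drawn more than once.
        parts = rng.choices(parts, weights=weights, k=n)
        # Step 3 - Prediction: propagate with a simple (near-stationary) motion model.
        parts = [(x + rng.gauss(0, 0.5), y + rng.gauss(0, 0.5)) for x, y in parts]
    # Step 6 - Measurement: the estimate is the average particle position.
    xs = [p[0] for p in parts]
    ys = [p[1] for p in parts]
    return sum(xs) / n, sum(ys) / n

est_x, est_y = sis_estimate([(49.5, 50.5), (50.5, 49.5), (50.0, 50.0)])
```

With the three noisy observations clustered around (50, 50), the particle cloud collapses toward that point after a few resampling rounds.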


REFERENCES

[1] S. Milstein, A. Chowdhury, G. Hochmuth, B. Lorica, and R. Magoulas, Twitter and the Micro-Messaging Revolution: Communication, Connections, and Immediacy - 140 Characters at a Time. O'Reilly Media, 2008.
[2] V. Fox, J. Hightower, L. Liao, D. Schulz, and G. Borriello, "Bayesian Filtering for Location Estimation," IEEE Pervasive Computing, vol. 2, no. 3, pp. 24-33, July-Sept. 2003.
[3] T. Sakaki, M. Okazaki, and Y. Matsuo, "Earthquake Shakes Twitter Users: Real-Time Event Detection by Social Sensors," Proc. 19th Int'l Conf. World Wide Web (WWW '10), pp. 851-860, 2010.
[4] M. Cataldi, L. Di Caro, and C. Schifanella, "Emerging Topic Detection on Twitter Based on Temporal and Social Terms Evaluation," Proc. 10th Int'l Workshop Multimedia Data Mining (MDMKDD '10), pp. 1-10, 2010.
[5] M. Ebner and M. Schiefner, "Microblogging - More than Fun?" Proc. IADIS Mobile Learning Conf., pp. 155-159, 2008.
[6] L. Backstrom, J. Kleinberg, R. Kumar, and J. Novak, "Spatial Variation in Search Engine Queries," Proc. 17th Int'l Conf. World Wide Web (WWW '08), pp. 357-366, 2008.
[7] T. Joachims, "Text Categorization with Support Vector Machines," Proc. ECML '98, pp. 137-142, 1998.


An Efficient Secure Scheme for Multiple Users in the Cloud Using a Cryptography Technique
S. Santhosh1, K. Madhu Babu2
PG Scholar1; Assistant Professor2, M.Tech
santhosh.somu6@gmail.com
Department of Computer Science and Engineering, Audisankara College of Engineering & Technology, Gudur, A.P, India

ABSTRACT
The major aim of this work is a secure multi-owner data sharing scheme. It means that any user in the group can securely share data with others via the untrusted cloud. The scheme is able to support dynamic groups efficiently: specifically, newly granted users can directly decrypt data files uploaded before their participation without contacting the data owners. User revocation is easily achieved through a novel revocation list without updating the secret keys of the remaining users, and the size and computation overhead of encryption are constant and independent of the number of revoked users. We present secure and privacy-preserving access control that guarantees any member of a group can anonymously utilize the cloud resource. We provide rigorous security analysis and perform extensive simulations to demonstrate the efficiency of our scheme in terms of storage and computation overhead. To achieve dependability and adaptability in MONA, we exhibit a new structure for MONA. In this strategy we further show how we deal with risks such as the failure of the group manager, by increasing the number of backup group managers and by sharing the workload among multiple group managers when the number of requests grows. This method achieves the expected efficiency, scalability and, most essentially, dependability.

Keywords: Multi-owner, resource, group manager, revocation, key distribution

I. INTRODUCTION

Cloud computing is recognized as an alternative to traditional information technology because of its resource-sharing and low-maintenance characteristics. In cloud computing, the cloud service providers (CSPs), such as Amazon, are able to deliver various services to cloud users with the help of powerful data centres. By migrating local data management systems into cloud servers, users can enjoy high-quality services and save significant investments in their local infrastructures. One of the most basic services offered by cloud providers is data storage. Consider a practical data application: a company allows its staff in the same group or department to store and share files in the cloud. By utilizing the cloud, the staff are fully released from the burden of local data storage and maintenance. However, this also poses a significant risk to the confidentiality of the stored files. Specifically, the cloud servers managed by cloud providers are not fully trusted by users, while the data files stored in the cloud may be sensitive and confidential, such as business plans. To preserve data privacy, a basic solution is to encrypt the data files and then upload the encrypted data into the cloud [2]. Unfortunately, designing an efficient and secure data sharing scheme for groups in the cloud is not an easy task, because of the following issues.

The complexities of user participation and revocation in these schemes increase linearly with the number of data owners and the number of revoked users, respectively. By setting a group with a single attribute, a secure provenance scheme based on the ciphertext-policy attribute-based encryption technique was proposed, which allows any member of a group to share data with others; however, the issue of user revocation is not addressed in that scheme. A scalable and fine-grained data access control scheme in cloud computing based on the key-policy attribute-based encryption (KP-ABE) technique was also presented. Unfortunately, its single-owner manner hinders the adoption of the scheme in cases where any user should be granted the ability to store and share data. Our contributions: to resolve the challenges presented above, we propose MONA, a secure multi-owner data sharing scheme for dynamic groups in the cloud. To achieve secure data sharing for dynamic groups in the cloud, we combine the group signature and dynamic broadcast encryption techniques. Specifically, the group signature scheme enables users to anonymously use the cloud resources, and the dynamic broadcast encryption technique allows data owners to securely share their data files with others, including newly joining users.
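The data-sharing and revocation-list ideas can be illustrated with a deliberately simplified sketch. This is not MONA's cryptography (which uses group signatures and dynamic broadcast encryption), and the toy stream cipher below is not secure; the sketch only shows the flow: a shared data key, per-group membership, and a public revocation list that blocks downloads without re-keying the remaining users.

```python
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter keystream XOR (illustration only, NOT secure).
    out = b""
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class Group:
    def __init__(self):
        self.data_key = secrets.token_bytes(32)  # shared group data key
        self.members = set()
        self.revocation_list = set()  # public RL; remaining keys never change

    def join(self, user):
        self.members.add(user)

    def revoke(self, user):
        # Revocation is just a list update; no re-encryption of stored files.
        self.revocation_list.add(user)

    def upload(self, plaintext: bytes) -> bytes:
        return xor_stream(self.data_key, plaintext)

    def download(self, user, ciphertext: bytes) -> bytes:
        if user not in self.members or user in self.revocation_list:
            raise PermissionError("unknown or revoked user")
        return xor_stream(self.data_key, ciphertext)
```

A new member who joins after a file was uploaded can still decrypt it, while a revoked member is refused, mirroring the constant-overhead revocation property claimed for the scheme.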


II. LITERATURE SURVEY

A. Plutus: Scalable Secure File Sharing on Untrusted Storage
Plutus is a cryptographic storage system that enables secure file sharing without placing much trust in the file servers. In particular, it makes novel use of cryptographic primitives to protect and share files. Plutus features highly scalable key management while allowing individual users to retain direct control over who gets access to their files. We explain the mechanisms in Plutus that reduce the number of cryptographic keys exchanged between users by using file groups, distinguish file read and write access, handle user revocation efficiently, and allow an untrusted server to authorize file writes. We have built a prototype of Plutus on OpenAFS. Measurements of this prototype show that Plutus achieves strong security with overhead comparable to systems that encrypt all network traffic.

B. SiRiUS: Securing Remote Untrusted Storage
This paper presents SiRiUS, a secure file system designed to be layered over insecure network and P2P file systems such as NFS, CIFS, OceanStore, and Yahoo! Briefcase. SiRiUS assumes the network storage is untrusted and provides its own read-write cryptographic access control for file-level sharing. Key management and revocation are simple, with minimal out-of-band communication. File system freshness guarantees are supported by SiRiUS using hash tree constructions. SiRiUS contains a novel method of performing file random access in a cryptographic file system without the use of a block server. Extensions to SiRiUS include large-scale group sharing using the NNL key revocation construction. Our implementation of SiRiUS performs well relative to the underlying file system despite using cryptographic operations.

Fig. 1 Existing Model

1. In the existing systems, identity privacy is one of the most significant obstacles to the wide deployment of cloud computing. Without the guarantee of identity privacy, users may be unwilling to join cloud computing systems, because their real identities could be easily disclosed to cloud providers and attackers. On the other hand, unconditional identity privacy may incur the abuse of privacy; for instance, a misbehaving staff member could deceive others in the company by sharing false files without being traceable.
2. Only the group manager can store and modify data in the cloud.
3. The changes of membership make secure data sharing extremely difficult, and the issue of user revocation is not addressed.
4. Regarding consistency and scalability, this method needs further work: if the group managers fail because of the many requests caused by various groups of owners, the whole security mechanism of MONA breaks down.

III. RELATED WORK

To preserve data privacy, a basic solution is to encrypt data files and then upload the encrypted data into the cloud. Unfortunately, designing an efficient and secure data sharing scheme for groups in the cloud is not an easy task. In the existing system, data owners store the encrypted data files in untrusted storage and distribute the corresponding decryption keys only to authorized users. Thus, unauthorized users, as well as the storage servers, cannot learn the content of the data files, because they have no knowledge of the decryption keys. However, the complexities of user participation and revocation in these schemes increase linearly with the number of data owners and the number of revoked users, respectively.

IV. SYSTEM IMPLEMENTATION

1. We propose a secure multi-owner data sharing scheme. It means that any user in the group can securely share data with others via the untrusted cloud.
2. Our proposed scheme is able to support dynamic groups efficiently. In particular, newly joined users can directly decrypt data files uploaded before their participation without contacting the data owners. User revocation is easily achieved through a novel revocation list without updating the secret keys of the remaining users. The size and computation overhead of encryption are constant and independent of the number of revoked users.


3. We propose secure and privacy-preserving access control that guarantees any member of a group can anonymously utilize the cloud resource. Moreover, the real identities of data owners can be disclosed by the group manager when disputes occur.
4. We provide rigorous security analysis and perform extensive simulations to demonstrate the efficiency of our scheme in terms of storage and computation overhead.


A. Cloud Section
In this module, we create a local cloud and provide priced, ample storage services. The users can upload their data to the cloud. We develop this module so that the cloud storage can be made secure. However, the cloud is not fully trusted by users, since the CSPs are very likely to be outside of the cloud users' trusted domain. Accordingly, we assume that the cloud server is honest but curious: it will not maliciously delete or modify user data, owing to the protection of data auditing schemes, but it will try to learn the content of the stored data and the identities of cloud users.

B. Group Manager
The group manager takes charge of the following: 1. system parameter generation, 2. user registration, 3. user revocation, and 4. revealing the real identity of a disputed data owner. Therefore, we assume that the group manager is fully trusted by the other parties. The group manager is the admin and keeps the logs of each and every process in the cloud. The group manager is responsible for user registration and also for user revocation.

C. Group Member
Group members 1. store their private data in the cloud server and 2. share it with others in the group. Note that the group membership changes dynamically, because of staff resignations and new employees joining the company. A group member has the right to change the files in the group: anyone in the group can read the files uploaded in their group and also modify them.

D. File Security
1. The data file is encrypted. 2. A file stored in the cloud can be deleted by either the group manager or the data owner (i.e., the member who uploaded the file to the server).

E. Group Signature
A group signature scheme allows any member of the group to sign messages while keeping the identity secret from verifiers. Besides, the designated group manager can reveal the identity of a signature's originator when a dispute occurs, a property denoted as traceability.

F. User Revocation
User revocation is performed by the group manager via a publicly available revocation list (RL), based on which group members can encrypt their data files and ensure confidentiality against the revoked users.

G. Enhanced Work

Fig. 2 Proposed Model

In this strategy we further show how we deal with risks such as the failure of the group manager. To overcome the limitation of the existing MONA framework, the proposed scheme keeps backup group managers available: if the group manager stops working because of the large number of requests coming from different groups of owners, a backup group manager remains available, and the workload is shared among multiple group managers. To achieve dependability and adaptability in MONA, this paper exhibits a new structure for MONA. This method achieves the expected efficiency, scalability and, most essentially, dependability.
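The backup-manager idea can be sketched as a simple failover loop. The class names and the liveness signal below are illustrative assumptions, not the proposed implementation.

```python
class GroupManager:
    def __init__(self, name):
        self.name = name
        self.alive = True

    def handle(self, request):
        # A real manager would authenticate, register, or revoke users here.
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} handled {request}"

class ManagerCluster:
    # Primary manager plus backups: a request fails over to the next
    # available manager when the current one stops responding.
    def __init__(self, managers):
        self.managers = managers

    def handle(self, request):
        for m in self.managers:
            try:
                return m.handle(request)
            except ConnectionError:
                continue  # try the next backup
        raise RuntimeError("no group manager available")
```

A production design would also replicate the manager's state (registration logs, revocation list) so a backup can take over seamlessly.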


V. CONCLUSION

In this paper, we design a secure data sharing scheme for dynamic groups in an untrusted cloud. In MONA, a user is able to share data with others in the group without revealing identity privacy to the cloud. Additionally, MONA supports efficient user revocation and new user joining. More specifically, efficient user revocation is achieved through a public revocation list without updating the private keys of the remaining users, and new users can directly decrypt files stored in the cloud before their participation. Moreover, the storage overhead and the encryption computation cost are constant. Extensive analyses show that our proposed scheme satisfies the desired security requirements and guarantees efficiency as well. Earlier work proposed a cryptographic storage system that enables secure file sharing on untrusted servers, named Plutus. By dividing files into file groups and encrypting each file group with a unique file-block key, the data owner can share the file groups with others by delivering the corresponding lockbox key, which is used to encrypt the file-block keys. However, this brings a significant key distribution overhead for large-scale file sharing. In addition, the file-block key has to be updated and distributed again for each user revocation.

REFERENCES

[1] M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R.H. Katz, A. Konwinski, G. Lee, D.A. Patterson, A. Rabkin, I. Stoica, and M. Zaharia, "A View of Cloud Computing," Comm. ACM, vol. 53, no. 4, pp. 50-58, Apr. 2010.
[2] S. Kamara and K. Lauter, "Cryptographic Cloud Storage," Proc. Int'l Conf. Financial Cryptography and Data Security (FC), pp. 136-149, Jan. 2010.
[3] S. Yu, C. Wang, K. Ren, and W. Lou, "Achieving Secure, Scalable, and Fine-Grained Data Access Control in Cloud Computing," Proc. IEEE INFOCOM, pp. 534-542, 2010.
[4] M. Kallahalla, E. Riedel, R. Swaminathan, Q. Wang, and K. Fu, "Plutus: Scalable Secure File Sharing on Untrusted Storage," Proc. USENIX Conf. File and Storage Technologies, pp. 29-42, 2003.
[5] E. Goh, H. Shacham, N. Modadugu, and D. Boneh, "SiRiUS: Securing Remote Untrusted Storage," Proc. Network and Distributed Systems Security Symp. (NDSS), pp. 131-145, 2003.
[6] G. Ateniese, K. Fu, M. Green, and S. Hohenberger, "Improved Proxy Re-Encryption Schemes with Applications to Secure Distributed Storage," Proc. Network and Distributed Systems Security Symp. (NDSS), pp. 29-43, 2005.
[7] R. Lu, X. Lin, X. Liang, and X. Shen, "Secure Provenance: The Essential of Bread and Butter of Data Forensics in Cloud Computing," Proc. ACM Symp. Information, Computer and Comm. Security, pp. 282-292, 2010.
[8] B. Waters, "Ciphertext-Policy Attribute-Based Encryption: An Expressive, Efficient, and Provably Secure Realization," Proc. Int'l Conf. Practice and Theory in Public Key Cryptography, http://eprint.iacr.org/2008/290.pdf, 2008.
[9] V. Goyal, O. Pandey, A. Sahai, and B. Waters, "Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data," Proc. ACM Conf. Computer and Comm. Security (CCS), pp. 89-98, 2006.
[10] D. Naor, M. Naor, and J.B. Lotspiech, "Revocation and Tracing Schemes for Stateless Receivers," Proc. Ann. Int'l Cryptology Conf. Advances in Cryptology (CRYPTO), pp. 41-62, 2001.


Design and Implementation of Secure Cloud Systems Using the Meta Cloud
Perumalla Gireesh
M.Tech 2nd year, Dept. of CSE, ASCET, Gudur, India
Email: perum.giri7@gmail.com

Abstract – Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners because it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently in its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation, and research challenges. The meta cloud, based on a combination of existing tools and concepts, provides a convenient way to organize private clouds. However, it considers only the vendor lock-in problem among different cloud vendors: the meta cloud provides an abstraction over existing schemes that solves this problem effectively, but it does not consider users' data privacy when transforming a cloud into a meta cloud. To address this problem, we introduce Business Continuity Management (BCM). This is defined as a holistic management process that identifies threats to an organization and reduces the impact of data leakage issues.

Index terms – Meta cloud, cloud privacy, private clouds, security.

I. INTRODUCTION

With the rapid development of processing and storage technologies and the success of the Internet, computing resources have become cheaper, more powerful, and more ubiquitously available than ever before. This technological trend has enabled the realization of a new computing model called cloud computing, in which resources (e.g., CPU and storage) are provided as general utilities that can be leased and released by users through the Internet in an on-demand fashion.


The cloud computing paradigm has achieved widespread adoption in recent years. Its success is due largely to customers' ability to use services on demand with a pay-as-you-go pricing model, which has proved convenient in many respects. Low costs and high flexibility make migrating to the cloud compelling. Despite its obvious advantages, however, many companies hesitate to "move to the cloud," mainly because of concerns related to service availability, data lock-in, data security, and legal uncertainties.

A previous study considers the data lock-in problem and provides a convenient way to solve it using the meta cloud. The problem is that once an application has been developed based on one particular provider's cloud services and using its specific API, that application is bound to that provider; deploying it on another cloud would usually require completely redesigning and rewriting it. Such vendor lock-in leads to a strong dependence on the cloud service operator. The meta cloud framework contains the following components: the meta cloud API, the meta cloud proxy, resource monitoring, and so on. But when a cloud is transformed into a meta cloud, data security issues are raised that are not considered in the previous study.

a) The Key Challenges

Being virtual in concept, the cloud environment generates several questions in the minds of users with respect to confidentiality, integrity, and availability. The key challenges for the adoption of the cloud are given below.

Assurance of privacy and security: Cloud users are wary of the security and privacy of their data. The multi-tenant environment of the cloud is causing concern among enterprises. As the same underlying hardware may be used by other companies and competitors, it may lead to a breach of privacy. Moreover, any data leakage or virus attack would have a cascading effect on multiple organizations.

Reliability and availability: Instances of outages at the facilities of the cloud service providers have raised concerns over the reliability of cloud solutions. Enterprises recognize that they will have to deal with some level of failure while using commodity-based solutions. Also, the cloud providers cannot give an assurance on the uptime of their external Internet connection, which could shut off all access to the cloud.

Data security is a key concern: There are a number of concerns surrounding the adoption of the cloud, especially because it is a relatively new concept. Assuring customers of data security will be one of the biggest challenges for cloud vendors to overcome.



II. PROPOSED WORK

In this section we introduce a novel solution, Business Continuity Management (BCM), and provide an overview of it. Figure 1 shows the chart of the key barriers to cloud adoption.

Figure 1. Chart of the key barriers

To address this problem, this paper introduces Business Continuity Management (BCM), defined as a holistic management process that identifies threats to an organization and reduces the impact of data leakage issues. It contains the following stages: project initiation, understanding the organization, BC strategies, developing business continuity planning, and applying the BCP. Business continuity planning is shown in Figure 2.

a) Business Continuity Management (BCM)
The BCMS will use the Plan-Do-Check-Act (PDCA) approach. The PDCA approach can be applied to every element of the BCM lifecycle.

Business Continuity leads (BC leads)
Leads for business continuity management will be appointed in each directorate, regional and area team, and hosted bodies within the strategy. BC leads will perform the following:
• Promote business continuity management
• Receive BC training
• Facilitate the completion of BIAs
• Develop BCPs
• Ensure that BCPs are available during incident response
• Ensure that incident responders receive training appropriate to their role
• Ensure that plans are tested, reviewed and updated
• Participate in the review and development of the BCMS.

Figure 2. Business Continuity Management Overview



Business Continuity Working Groups
Working groups may be established to:
• Take control of resource allocation
• Set priorities
• Set continuity strategies and objectives in line with the organization's responsibilities
• Establish the measures that will be used to assure the BCMS remains current and relevant
• Report to top management on the performance of the BCMS.

Emergency Preparedness, Resilience and Response (EPRR)
The business continuity programme will have close links to EPRR, because both disciplines aim to ensure the organization is resilient and able to respond to threats and hazards. The BCMS described in this strategy will ensure that the organization is able to manage risks and incidents that directly impact its ability to deliver business as usual.

Assurance
The National Support Centre will maintain an overview of the BCMS. BC leads will be required to report on progress within their areas.

BCM Documentation
The National Support Centre will be given access to related documentation by areas within the scope, such as BCPs, training records, incident records and exercises, to facilitate the sharing of good practice throughout the organization. Business continuity management has the following stages:

Stage 1 - Understanding the Organization
Understanding the organization is essential in developing an appropriate BCM programme. A detailed understanding of what processes are essential to ensure continuity of prioritized activities, to at least the minimum business continuity objective level, will be achieved by undertaking a BIA. The BIA will incorporate a continuity requirements analysis, which may include the staff skills, competencies and qualifications required for prioritized activities. BIAs will describe the following:
• The prioritized activities of departments/teams;
• The impact that incidents will have on prioritized activities;
• How long we could continue using the emergency measures before we would have to restart our normal activities;
• A description of the emergency measures we have in place to deal with an incident;
• The threats to the continued delivery of priority activities.

Stage 2 - Determining BCM Strategy
BIAs will create a picture of the dependencies, vulnerabilities and business continuity risks of organizations. This information will be used to:



• To assist in deciding the scope of the BCM programme.
• To provide the information from which continuity options can be identified and evaluated.
• To assist the preparation of detailed plans.

Decisions that determine business continuity strategies will be made at an appropriate level, covering:
• Recovery
• People
• Premises
• Technology and information
• Suppliers and partners

Stage 3 - Developing and Implementing a BCM Response
This stage considers the incident reporting structure, business continuity plans, and prioritized activity recovery plans.

Incident Reporting Structure
There are various sources of information pertaining to business continuity threats, such as severe weather, flooding and so on. The impact of incidents will vary. It is important that the response to an incident is appropriate to the level of impact and remains flexible as the situation develops. Business continuity plans will be based on different levels of response and escalation.

Business Continuity Plans
Various plans will continue to be developed to identify the actions that are necessary, and the resources that are needed, to enable business continuity. Plans will be based upon the risks identified, but will allow for flexibility.

Prioritized Activity Recovery Plans (PARPs)
Priority activities are those activities to which priority must be given following an incident in order to mitigate the impact. Activities of the highest priority are those that, if disrupted, impact the organization to the greatest extent and in the shortest possible time.

Stage 4 - Exercise, Audit, Maintaining and Reviewing
Exercises
It is essential that regular BC exercises are carried out to ensure that plans are tested and continue to be effective and fit for purpose, as operational processes and technology configurations are constantly changing. Exercises will raise awareness of BCM procedures.

Audit
• To validate compliance with the organization's BCM policies and standards
• To review the organization's BCM solutions
• To validate the organization's BCM plans
• To verify that appropriate exercising and maintenance activities are taking place



• To highlight deficiencies and issues and ensure their resolution.

Management Review
An annual review of this strategy will be undertaken. However, events may prompt more frequent re-examination, such as:
• A substantive BIA revision which identifies changes in processes and priorities;
• A significant change in the threat assessment and/or risk appetite of the organization;
• New regulatory or legislative requirements.

Embedding BCM in the Organization's Culture
BCM must be an accepted management process, fully endorsed and actively promoted by directors. The communication of high-level endorsement to all is essential. There are various ways in which this can be achieved:
• Business continuity will be part of the organization's induction for new starters
• Participation in BIAs and writing BCPs
• Communication of risks, alerts and incidents
• Business continuity information will be available on the staff intranet
• Business continuity training
• Business continuity exercises

CONCLUSION
In this paper we introduce a novel solution that provides a convenient process to identify various security threats. The paper surveys Business Continuity Management (BCM) processes to avoid the security risks.

REFERENCES
[1] ISO 22301, Societal Security - Business Continuity Management Systems - Requirements.
[2] NHS England Core Standards for Emergency Preparedness, Resilience and Response (EPRR).
[3] J. Skene, D.D. Lamanna, and W. Emmerich, "Precise Service Level Agreements," Proc. 26th Int'l Conf. Software Eng. (ICSE 04), IEEE CS Press, 2004, pp. 179–188.
[4] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud Computing: State-of-the-Art and Research Challenges," J. Internet Services and Applications, vol. 1, no. 1, 2010, pp. 7–18.
[5] The Route Map to Business Continuity Management: Meeting the Requirements of ISO 22301.



AUTHORS
Mr. P. Gireesh received the B.Tech degree in computer science & engineering from Vaishnavi Institute of Technology, Tirupathi, Jawaharlal Nehru Technological University Anantapur, in 2011, and the M.Tech degree in computer science engineering from Audisankara College of Engineering and Technology, Nellore, Jawaharlal Nehru Technological University Anantapur, in 2014. He has participated in national-level paper symposiums at different colleges. His interests include Computer Networks, Mobile Computing, Network Programming, and System Hardware. He is a member of the IEEE.


Increasing network life span of MANET by using cooperative MAC protocol
B. Suman Kanth (1), P. Venkateswara Rao (2)
(1) M.Tech, Dept. of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India.
(2) Professor, Dept. of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India.

Abstract – Cooperative diversity has been shown to give important performance increases in wireless networks where communication is hindered by channel fading. In resource-limited networks, the advantages of cooperation can be further exploited by optimally allocating the energy and bandwidth resources among users in a cross-layer way. In this paper, we examine the problem of transmission power minimization and network lifetime maximization using cooperative diversity for wireless sensor networks, under the restraint of a target end-to-end transmission consistency and a given transmission rate. Using a cross-layer optimization scheme, distributed algorithms which jointly consider routing, relay selection, and power allocation plans are proposed for consistency-constrained wireless sensor networks.

Index terms – power minimization, cooperative diversity, relay selection, cross-layer optimization.

I. INTRODUCTION
Wireless sensor networks are composed of nodes powered by batteries, for which substitution or recharging is very hard, if not unfeasible. Therefore, reducing the energy used for consistent data transmission becomes one of the most important design concerns for such networks. As a promising and powerful solution that can overcome the drawbacks of resource-constrained wireless networks, cooperative communication has received major attention recently as one of the key candidates for meeting the stringent requirements of resource-limited networks.

Cooperative communication is developed from traditional MIMO (multiple-input and multiple-output) techniques and the model of relay channels. Though MIMO has been shown to significantly enlarge system throughput and reliability, it is not straightforward to implement in wireless sensor networks due to the constraints on the size and hardware complexity of sensor nodes.

Cooperative communication, however, is able to achieve the same space diversity by forming a virtual distributed antenna array where each antenna belongs to a separate node. With cooperation, users that experience a deep fade



in their link towards the destination can still attain the desired quality of service (QoS).

A variety of cooperative schemes have been developed so far. A cooperative beamforming scheme was proposed for CDMA cellular networks. Coded cooperation was proposed along with amplify-and-forward (AF) and decode-and-forward (DF) schemes. Diversity-multiplexing tradeoff tools were used to analyze the performance of diverse cooperative schemes such as fixed relaying, selection relaying, and incremental relaying. A multirelay cooperative protocol using space-time coding was proposed. Opportunistic relaying (or selective cooperation) was proposed with various relay selection policies, and the power allocation problem and SER performance analysis in resource-constrained networks have been studied. These works are primarily focused on enhancing the link performance in the physical layer.

Cooperative communication presents, at birth, a cross-layer communication problem, since resources have to be carefully allocated among different nodes in the network. Therefore, the integration and interaction with higher layers has become an active research area in recent years. There have been many efforts in this direction, such as combining node cooperation with ARQ in the link layer, resource allocation in the MAC layer, and routing algorithms in the network layer. However, a complete cross-layer optimization incorporating three layers (routing, relay selection in the MAC, and power allocation in the PHY) for cooperative networks is still an open issue. Although there are some efforts concerned with optimizing metrics such as throughput, delay, or power use under certain QoS conditions, these efforts focus on either one-hop situations or fading-free channels.

Different from these works, we focus on a different optimization goal and conditions. We concentrate on energy minimization as well as network lifetime maximization. The QoS condition in our work is end-to-end transfer reliability, and the work is under the constraint of end-to-end transmission capacity. We believe that, in wireless sensor networks, network lifetime maximization and the guarantee of end-to-end reliability are more important than other considerations. Moreover, we extend our work to cross-layer optimization for cooperative networks. We decompose the problem of minimizing network power consumption into a routing sub-problem in the network layer and a combined relay selection and power allocation sub-problem in the physical layer.

However, a decomposition method that solves this cross-layer problem exactly suffers from its complexity. Since such an algorithm is non-heuristic, it needs comprehensive iterations, with long convergence time and huge overhead for message exchange, and is thus inappropriate for sensor network applications. On the other side, in our work, we try to derive a closed-form solution (though it may not be optimal) and propose our algorithm in a heuristic way; thus,



it can be distributively employed in wireless sensor networks. In related work, cooperative communication and data-aggregation techniques are implemented together to reduce the energy spending in wireless sensor networks, by reducing the amount of data for transmission and making better use of network resources through cooperative communication. There are also further works on prolonging network lifetime using cooperative communication techniques.

To address the above stated problems, a cooperative scheme has been proposed that joins maximum-lifetime power allocation and energy-aware routing to maximize the network lifetime. This method is not a cross-layer method, since the energy-saving routing is formed first and cooperative transmission is then applied on the built routes. Therefore, the qualities of cooperative transmission are not fully explored, since the best cooperative route might be completely different from the noncooperative route. A suboptimal algorithm for lifetime increase has also been developed, based on performance analysis for MPSK modulation, under the condition of adding some special cooperative relays at the best places to prolong the network lifetime; in our work, in contrast, the nodes are randomly distributed and no additional relay nodes are deployed.

In a word, we propose a fresh scheme to increase the network lifetime by utilizing cooperative diversity and jointly considering routing, relay selection, and power allocation in arbitrarily distributed wireless sensor networks.

II. CROSS-LAYER OPTIMIZATION PROBLEM IN COOPERATIVE SENSOR NETWORKS
In this section, we formulate and analyze the min-power problem and the max-lifetime problem in the context of reliability-conditioned cooperative networks and obtain relevant solutions, leading to our algorithms, which will be explained in detail in the next section.

Problem Formulation
Consider a multihop wireless sensor network consisting of multiple arbitrarily distributed sensor nodes and one sink. Each sensor node has a single omnidirectional antenna and can dynamically adjust its transmitted power. We consider two associated optimization objectives. The first is called the min-power problem: given any source node, the goal is to find the route that minimizes the total transmission power, while fulfilling a required end-to-end broadcast success probability and transmission rate. The second is called the max-lifetime problem: given a set of source nodes, the goal is to offer an information transfer scheme that maximizes the network lifetime, defined as the lifetime of the node whose battery runs out first [1], while fulfilling the same conditions as the min-power problem.
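Stated compactly, the two problems read as follows (a sketch in our own notation, not necessarily the paper's: P_i is node i's transmit power, E_i its residual energy, R a candidate route, p_e2e the end-to-end broadcast success probability, and r the transmission rate):

```latex
% Min-power problem: minimize total transmission power along the route,
% subject to end-to-end reliability and rate constraints.
\min_{R,\;\{P_i\}} \; \sum_{i \in R} P_i
\qquad \text{s.t.} \qquad p_{e2e}(R,\{P_i\}) \ge p_{\min}, \quad r = r_0 .

% Max-lifetime problem: maximize the lifetime of the node whose battery
% runs out first, under the same constraints; E_i / P_i approximates
% the remaining lifetime of node i.
\max_{R,\;\{P_i\}} \; \min_{i \in R} \frac{E_i}{P_i}
\qquad \text{s.t.} \qquad p_{e2e}(R,\{P_i\}) \ge p_{\min}, \quad r = r_0 .
```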



Figure 1. Cooperative transmission (CT) and direct transmission (DT) modes as building blocks for any route.

From Figure 1, we can see that to forward data from node i to node j on link (i, j), either direct transmission is used or a particular node r helps node i forward the data to node j using decode-and-forward (DF) cooperative diversity. The broadcast transmission rate and power for the direct transmission (DT) mode and the cooperative transmission (CT) mode are classified into three cases:

1) Direct Transmission, in which node i sends the data to node j.

2) Single-Relay Cooperation, in which, for the cooperative transmission mode, the sender i sends its symbol in its time slot. Due to the broadcast nature of the wireless medium, both the receiver and the relay receive noisy versions of the transmitted symbol. We assume that the relay and the receiver decide that the received symbol is properly received if the received SNR is greater than a certain threshold. According to the incremental relaying scheme, if the receiver decodes the symbol correctly, then it sends an acknowledgment (ACK) to the sender and the relay to confirm an accurate reception; otherwise, it sends a negative acknowledgment (NACK) that allows the relay, if it received the symbol properly, to transmit this symbol to the receiver in the next time slot.

3) Multirelay Cooperation, in which, for the multirelay collaboration mode, we use the opportunistic relaying scheme: for each frame, the node with the best instantaneous relaying channel condition among several possible relays is selected to forward the packet.
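The ACK/NACK logic of the incremental decode-and-forward scheme above can be sketched in a few lines (an illustrative sketch only; the function name, the scalar SNR inputs, and the single shared decoding threshold are our simplifying assumptions, not the paper's model):

```python
def incremental_df_relay(snr_at_dest: float, snr_at_relay: float,
                         threshold: float) -> str:
    """Sketch of incremental decode-and-forward (DF) relaying.

    A symbol counts as correctly decoded when the received SNR
    exceeds `threshold`.  The destination replies ACK/NACK; on a
    NACK, the relay retransmits only if it decoded the symbol itself.
    """
    if snr_at_dest >= threshold:
        # Destination decoded directly: ACK, the relay stays silent.
        return "ACK"
    if snr_at_relay >= threshold:
        # Destination failed (NACK) but the relay decoded the symbol:
        # the relay forwards it in the next time slot.
        return "NACK:relay_forwards"
    # Neither decoded: the symbol is lost for this round.
    return "NACK:lost"
```

The point of the incremental variant is visible in the first branch: the relay's time slot is spent only when the direct link actually fails.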

Now consider the min-power problem described above. This is a convex problem and we can solve it using Lagrangian multiplier techniques. To make best use of the lifetime of the entire network, the novel solution is to make the lifetime of each node in the route of a flow equal to a target lifetime.

III. COOPERATION-BASED CROSS-LAYER SCHEMES
In this section, we propose detailed minimum-power and maximum-lifetime cross-layer algorithms, under the constraint of end-to-end success probability and data rate, both in direct mode and in cooperative mode. We assume that each node transmits a HELLO packet to its neighbors to update the local topology information. Our algorithms are composed of two parts: a routing (and relay selection) algorithm and a power allocation algorithm. The cross-layer algorithms are based on the conventional Bellman-Ford shortest-path


algorithm, which can be distributively implemented. In the Bellman-Ford algorithm, each node executes the following iteration.

Table 1. Symbols and Explanations
α(i,j): the effective distance between nodes i and j
Cost(j): the latest estimated cost of the shortest path from node j to the destination
N(i): the set of neighboring nodes of node i
Q_s: quality parameter

We can see that reducing the total transmission power is equal to minimizing the sum of the link costs along the route. That means the formation of the routing actually has nothing to do with the QoS parameter. Thus we can decompose the cross-layer optimization problem into two sub-problems: first we select the minimum-power route with the least total effective distance; then the transmission power for each node in the route is adjusted. Since the forward nodes in the route may not know Q_s (if it is not known a priori for the whole network), the source node may need to transfer a message containing Q_s to the destination through the path, to inform all the forward nodes. Thus, the cross-layer scheme is as follows.

i) Min-Power Cross-Layer Scheme with Direct Transmission

Algorithm 1 (Min-Power Cross-layer Scheme with Direct Transmission (MPCS-DT)). Consider:
(1) Every node initiates its cost for routing as ∞, except Cost(0) = 0 (node 0 represents the sink).
(2) Every node estimates the effective distance α(i,j) of its outgoing links through measurement based on the periodically broadcast HELLO messages.
(3) Every node calculates the costs of its outgoing links as α(i,j)/Q_s.
(4) Every node updates its cost toward the destination as Cost(i) = min_{j∈N(i)} {α(i,j)/Q_s + Cost(j)}, and selects node j as the next-hop node.
(5) If the required QoS parameter Q_s is known a priori to the whole network, each node in the route will adjust its transmit power. If not, the source will deliver a message about Q_s along the path, and then each node in the route will adjust its transmit power.
(6) Go to Step (2).
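Steps (1)-(4) amount to a Bellman-Ford relaxation over the effective link distances. A centralized simulation of the per-node iteration might look like this (a sketch under our own naming: `alpha` holds the measured effective distances α(i,j) and `Qs` the quality parameter; the exact cost formula is our reading of steps (3)-(4)):

```python
import math

def mpcs_dt_costs(alpha: dict, num_nodes: int, Qs: float):
    """Sketch of Algorithm 1 (MPCS-DT), steps (1)-(4).

    alpha[(i, j)] is the effective distance of link (i, j); node 0
    is the sink.  Returns (cost, next_hop) after the Bellman-Ford
    relaxation converges.
    """
    # Step (1): initialize every cost to infinity except the sink.
    cost = {i: math.inf for i in range(num_nodes)}
    cost[0] = 0.0
    next_hop = {}
    # Relax repeatedly until no cost changes (at most n-1 rounds).
    for _ in range(num_nodes - 1):
        changed = False
        for (i, j), dist in alpha.items():
            # Steps (3)-(4): link cost alpha(i,j)/Qs plus Cost(j).
            candidate = dist / Qs + cost[j]
            if candidate < cost[i]:
                cost[i] = candidate
                next_hop[i] = j
                changed = True
        if not changed:
            break
    return cost, next_hop
```

In the real protocol each node performs its own relaxation upon receiving HELLO messages; the loop above merely emulates that distributed convergence in one place.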

Min-Power Cross-Layer Scheme with Cooperative Transmission
The min-power scheme for cooperative communication is composed of two parts: a single-relay scheme and a multirelay scheme. For the single-relay scheme, minimizing the total transmission power is equal to minimizing the sum of the cooperative link distances α_CT(i,j) along the route. Hence, the best cross-layer strategy can be realized in three steps. First, the potential relay of each link is selected by minimizing α_CT(i,j). Then, the min-power route is



constructed as the route with the least total cooperative distance. Finally, the transmission power of the nodes in the route is adjusted. The algorithm is as follows.

Algorithm 2 (Min-Power Cross-layer Scheme with Cooperative Transmission (MPCS-CT) (for the single-relay scenario)). Consider:
(1) The same as step 1 in Algorithm 1.
(2) The same as step 2 in Algorithm 1.
(3) Every node calculates the costs of its outgoing links as α_CT(i,j) = min_{r∈N(i,j)} {α(i,r) + α(r,j)}, where N(i,j) denotes the set of neighboring nodes of both i and j, and selects node k as the relay of this link.
(4) Every node updates its cost toward the destination as Cost(i) = min_{j∈N(i)} {α_CT(i,j)/Q_s + Cost(j)}, and selects node j as the next-hop node.
(5) & (6) The same as steps 5 & 6 in Algorithm 1, except that each path node and relay node in the route adjusts the transmit power.
(7) Go to Step (2).

For the multirelay scheme (assume we need K relays for each link), the difference from the single-relay scheme is that the K probable relays have to be selected, and only the optimal relay is chosen from them in every transmission slot. The best relay is in charge of relaying the packet, while the other potential relays turn off their radios and do not receive the packet, to save energy. The algorithm is as follows.

Algorithm 3 (Min-Power Cross-Layer Scheme with Cooperative Transmission (MPCS-CT) (for the multirelay scenario)). Consider:
(1) The same as step 1 in Algorithm 1.
(2) The same as step 2 in Algorithm 1.
(3) Every node sorts all its neighboring nodes in ascending order according to the value of α(i,r) + α(r,j) and selects the first K nodes as potential relays. Then it calculates the costs of its outgoing links as α_CT(i,j) = min_{r∈N(i,j)} {α(i,r) + α(r,j)}, where N(i,j) denotes the set of neighboring nodes of both i and j.
(4) Every node updates its cost toward the destination as Cost(i) = min_{j∈N(i)} {α_CT(i,j)/Q_s + Cost(j)}, and selects node j as the next-hop node.
(5) & (6) The same as steps 5 & 6 in Algorithm 1, except that each path node and relay node in the route adjusts the transmit power.
(7) Go to Step (2).

ii) Max-Lifetime Cross-Layer Scheme with Direct Transmission
The remaining energy of each node decreases at unequal rates, so the route may vary from time to time as the residual energy of the intermediate nodes in the route varies. The rate of recalculation of the route is determined by the rate of HELLO message exchange. Thus, the algorithm is as follows.

Algorithm 4 (Max-Lifetime Cross-Layer Scheme with Direct Transmission (MLCS-DT)). Consider:
(1) & (2) The same as steps 1 & 2 in Algorithm 1.

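The relay choice in step (3) of Algorithms 2 and 3, picking the relays that minimize the combined effective distance α(i,r) + α(r,j), can be sketched as follows (our own naming; `alpha` is a pairwise effective-distance table, and `K = 1` reproduces the single-relay case):

```python
def select_relays(alpha: dict, i: int, j: int, candidates, K: int = 1):
    """Sketch of relay selection for link (i, j).

    Sorts candidate relays r by alpha(i,r) + alpha(r,j) ascending and
    keeps the first K, as in step (3) of Algorithms 2 and 3.  Returns
    (cooperative link distance, list of chosen potential relays).
    """
    ranked = sorted(candidates,
                    key=lambda r: alpha[(i, r)] + alpha[(r, j)])
    best = ranked[:K]
    # The cooperative distance of the link is that of the best relay.
    coop_dist = alpha[(i, best[0])] + alpha[(best[0], j)]
    return coop_dist, best
```

With K > 1 this yields the potential-relay set for the opportunistic multirelay mode, from which one relay is then picked per transmission slot.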


(3) Every node measures its residual energy E_i and its total power P_i for the ongoing flows. Then it calculates the cost of its outgoing links as α(i,j)/(E_i/P_i).
(4) Every node updates its cost toward the destination as Cost(i) = min_{j∈N(i)} {α(i,j)/(E_i/P_i) + Cost(j)}, and selects node j as the next-hop node.
(5) & (6) The same as steps 5 & 6 in Algorithm 1, except that each node in the route adjusts the transmit power.
(7) Go to Step (2).
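The residual-energy weighting in step (3) can be sketched as follows (a sketch in our own notation: `E_i` is the residual energy, `P_i` the total power already committed to ongoing flows, so `E_i / P_i` approximates the node's remaining lifetime; the exact cost expression is our reading of the garbled formula):

```python
def mlcs_dt_link_cost(alpha_ij: float, E_i: float, P_i: float) -> float:
    """Sketch of the MLCS-DT link cost in step (3) of Algorithm 4.

    Dividing the effective distance by the node's estimated remaining
    lifetime E_i / P_i makes links through nearly-depleted nodes
    expensive, steering routes away from them.
    """
    lifetime_estimate = E_i / P_i
    return alpha_ij / lifetime_estimate
```

A node with plenty of energy left thus advertises a cheap link, while the same physical link through a nearly-dead node looks costly and is avoided by the Bellman-Ford relaxation.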

iii) Max-Lifetime Cross-Layer Scheme with Cooperative Transmission
To find a route which maximizes the network lifetime, we should find a path with the minimum cost, and the power of the nodes should be adjusted.

Algorithm 5 (Max-Lifetime Cross-Layer Scheme with Cooperative Transmission (MLCS-CT)). Consider:
(1) & (2) The same as steps 1 & 2 in Algorithm 1.
(3) Every node calculates the effective distance of its outgoing links as α_CT(i,j) = min_{r∈N(i,j)} {α(i,r) + α(r,j)}, and selects node k as the relay of this link.
(4) Every node measures its residual energy and its total transmission power for the ongoing flows. Then it calculates the cost of its outgoing links.
(5) Every node updates its cost toward the destination as Cost(i) = min_{j∈N(i)} {α_CT(i,j)/(E_i/P_i) + Cost(j)}, and selects node j as the next-hop node.
(6) & (7) The same as steps 5 & 6 in Algorithm 1, except that each forward node and relay node in the route adjusts the transmit power.
(8) Go to Step (2).

IV. EXPECTED RESULTS
In this section we compare the min-power algorithms to demonstrate the effect of cross-layer design for cooperative communication; we implement the two min-power algorithms, MPCS-DT and MPCS-CT, in random networks. For better comparison, we also implement two other algorithms: the minimum-power cooperative routing (MPCR) algorithm and the cooperation along the shortest noncooperative path (CASNCP) algorithm.

We also compare the max-lifetime algorithms; for that we consider three different schemes: (1) the max-lifetime cross-layer scheme with direct communication (MLCS-DT), (2) the max-lifetime cross-layer scheme with cooperative communication (MLCS-CT), and (3) greedy power allocation and cost-based routing (GPA-CR).

V. CONCLUSION
In this paper, we build several cross-layer algorithms for energy-efficient and consistent data transfer in wireless sensor networks using cooperative diversity. We examine the problem of how to minimize transmission power consumption



and maximize the network lifetime while guaranteeing the end-to-end success probability.

REFERENCES
[1] F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, no. 8, pp. 102–105, 2002.
[2] A. Nosratinia, T. E. Hunter, and A. Hedayat, "Cooperative communication in wireless networks," IEEE Communications Magazine, vol. 42, no. 10, pp. 74–80, 2004.
[3] Y.-W. Hong, W.-J. Huang, F.-H. Chiu, and C.-C. J. Kuo, "Cooperative communications in resource-constrained wireless networks," IEEE Signal Processing Magazine, vol. 24, no. 3, pp. 47–57, 2007.
[4] L. Zheng and D. N. C. Tse, "Diversity and freedom: a fundamental tradeoff in multiple antenna channels," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '02), p. 476, Lausanne, Switzerland, July 2002.
[5] T. M. Cover and A. A. E. Gamal, "Capacity theorems for the relay channel," IEEE Transactions on Information Theory, vol. 25, no. 5, pp. 572–584, 1979.
[6] A. Sendonaris, E. Erkip, and B. Aazhang, "User cooperation diversity part I and part II," IEEE Transactions on Communications, vol. 51, no. 11, pp. 1927–1938, 2003.
[7] J. N. Laneman, D. N. C. Tse, and G. W. Wornell, "Cooperative diversity in wireless networks: efficient protocols and outage behavior," IEEE Transactions on Information Theory, vol. 50, no. 12, pp. 3062–3080, 2004.


Scalable Mobile Presence Cloud with Communication Security

1N. Tejesh, 2D. Prasanth
1PG Student, M.Tech. Computer Science and Engineering, Audisankara College of Engineering & Technology, Gudur, A.P., India, Tej578@gmail.com
2Assistant Professor, Department of CSE, Audisankara College of Engineering & Technology, Gudur, A.P., India, sumyoan@gmail.com

Abstract- A mobile presence service is a vital part of social network applications, because a mobile user's presence information, such as GPS location, network address, and online/offline status, must be constantly updated to the user's online friends. It is also an essential component of cloud computing environments, since it maintains an up-to-date list of presence data of mobile users. If presence updates occur frequently, the enormous number of messages distributed by presence servers can cause a scalability problem and a buddy-list search problem in large-scale mobile presence services. To overcome the scalability problem, we propose an efficient and scalable server architecture called the presence cloud, which organizes the presence servers into a quorum-based server-to-server architecture for efficient searching. When a mobile user joins a network or the Internet, the presence cloud searches for the presence data of the user's friends. It also achieves small constant search latency through a directed search protocol and a one-hop caching strategy.

Keywords—Social networks, mobile presence services, distributed presence servers, cloud computing.

I. INTRODUCTION

Presence-enabled [1] applications such as Facebook and Twitter are produced by mobile devices and cloud computing [2] environments, owing to the prevalence of the Internet [3]. Social network services [4] have changed the way members engage with their buddies on the Internet. To interact with buddies across great distances, participants can share live events immediately using their mobile devices. A mobile presence service [5] maintains each mobile user's presence information. In a cloud computing environment, the mobile presence service is a vital component of social network applications. Because of the ubiquity of the Internet, mobile devices and cloud computing environments can provide presence-enabled applications, i.e., social network applications/services, worldwide. Facebook, Twitter, Foursquare, Google Latitude, buddycloud, and Mobile Instant Messaging (MIM) are examples of presence-enabled applications that have grown rapidly in the last decade. Social network services are changing the ways in which participants engage with their friends on the Internet. They exploit information about the status of participants, including their appearances and activities, to interact with their friends. Moreover, because of the wide availability of mobile devices (e.g., smartphones) that utilize wireless mobile network technologies, social network services enable participants to share live experiences instantly across great distances.

Presence information describes a mobile user's availability, activity, and device capability. The service binds a user ID to his/her current presence information. Each mobile user has a buddy list, which includes the contacts with whom he/she wants to interact in social network services. When a user changes from one status to another, the change is automatically propagated to each individual on the buddy list. Server cluster technology increases the search speed and decreases the notification time. For example, when a mobile user logs in to a social network application through his/her mobile device, the mobile presence service searches for and notifies everyone on the user's friend list, as in an instant messaging system [6].

The potential of a presence cloud [5], [7] can be examined using search cost and search satisfaction without impairing either of them. The number of messages generated by the presence server when a user arrives is the search cost. The time it takes to search the arriving user's buddy list is the search satisfaction. To help the users who are


present worldwide, the services provided by Google [3], [8] and Facebook [3] are distributed among many servers. Presence servers are used in large-scale social network services to improve the coherence of mobile presence services. In this section, we examine the existing server architectures for buddy-list search in large-scale geographically distributed data centers. Overloading presence servers with buddy search messages leads to a scalability problem. The presence cloud disseminates the information of many users among many presence servers on the Internet, and is used as a building block of mobile presence services. For efficient buddy-list search there is no single point of failure, since the servers in the presence cloud are organized in a quorum-based [9] server-to-server architecture that achieves small search delay using a directed buddy search algorithm. A caching procedure is used to reduce the cost of buddy-list search. The potential of three architectures, the presence cloud, a mesh-based [10] scheme, and a distributed hash table [11], is examined in terms of search response time and friend notification time.
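The binding of a user ID to presence data and the notification of everyone on a buddy list, as described above, can be sketched as follows (an illustrative model in Python, not the authors' implementation; the class and method names are our own):

```python
class PresenceServer:
    """Minimal sketch of a presence server: binds each user id to
    presence data and notifies buddies when a status changes."""

    def __init__(self):
        self.presence = {}       # user_id -> presence info (status, address, ...)
        self.buddies = {}        # user_id -> set of buddy ids
        self.notifications = []  # log of (recipient, user_id, info) messages

    def register(self, user_id, buddy_ids):
        self.buddies[user_id] = set(buddy_ids)

    def update_presence(self, user_id, info):
        """Bind user_id to its current presence info, then push the
        change to every contact on the user's buddy list."""
        self.presence[user_id] = info
        for b in self.buddies.get(user_id, ()):
            self.notifications.append((b, user_id, info))

ps = PresenceServer()
ps.register("alice", ["bob", "carol"])
ps.update_presence("alice", {"status": "online"})
# Both buddies receive exactly one notification each.
assert len(ps.notifications) == 2
```

The number of such messages generated per arrival is exactly the search cost discussed above, which is why the later sections focus on reducing inter-server messages.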


II. RELATED WORK

The rationale behind the design of Presence Cloud is to distribute the information of millions of users among thousands of presence servers on the Internet. To avoid a single point of failure, no single presence server is supposed to maintain service-wide global information about all users. Presence Cloud organizes presence servers into a quorum-based server-to-server architecture to facilitate efficient buddy-list searching. It also leverages the server overlay and a directed buddy search algorithm to achieve small constant search latency, and employs an active caching strategy that substantially reduces the number of messages generated by each search for a list of buddies. We analyze the performance complexity of Presence Cloud and two other architectures, a mesh-based scheme and a Distributed Hash Table (DHT)-based scheme, and through simulations we also compare the performance of the three approaches in terms of the number of messages generated. The design of Presence Cloud is a scalable server-to-server architecture that can be used as a building block for mobile presence services.
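The quorum-based server-to-server organization can be illustrated with a grid quorum (our sketch, not the paper's code): place the n presence servers in a √n × √n grid and let each server's quorum be its whole row plus its whole column, so any two quorums always intersect, and a search along one server's quorum is guaranteed to meet the replicas placed along any other server's quorum.

```python
import math

def grid_quorum(server_id, n):
    """Quorum of server_id in a sqrt(n) x sqrt(n) grid: its whole
    row plus its whole column (this sketch assumes n is a perfect
    square)."""
    side = math.isqrt(n)
    assert side * side == n, "sketch assumes n is a perfect square"
    r, c = divmod(server_id, side)
    row = {r * side + j for j in range(side)}
    col = {i * side + c for i in range(side)}
    return row | col

# Intersection property: every pair of quorums overlaps, and each
# quorum touches only 2*sqrt(n) - 1 of the n servers.
n = 9
for a in range(n):
    for b in range(n):
        assert grid_quorum(a, n) & grid_quorum(b, n)
assert len(grid_quorum(0, n)) == 2 * math.isqrt(n) - 1
```

The pairwise-intersection property is what lets a buddy search avoid flooding all n servers while still never missing a replica.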

Fig. 1. Presence Cloud Architecture

The presence servers in the presence cloud are arranged in a quorum-based server-to-server architecture, and the load on the servers is balanced in the presence cloud server overlay. Each presence server keeps caches for the buddies hosted by its immediate neighbors; this is the one-hop caching approach. Directed buddy search achieves a small constant search delay by decreasing network traffic using a one-hop search strategy. The architecture of the presence cloud, which is the proposed work, is shown in Figure 1. Using 3G or Wi-Fi services, a mobile user accesses the Internet and makes a data link to the presence cloud. Using a secure hash algorithm, mobile users are directed to one of the presence servers. To transfer presence information, the mobile user is authenticated to the mobile presence service and opens a TCP link.
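Directing a mobile user to one of the presence servers with a secure hash, as described above, can be sketched like this (illustrative only; the function name is ours):

```python
import hashlib

def assign_presence_server(user_id: str, num_servers: int) -> int:
    """Hash the user id with SHA-1 and map the digest onto one of
    the presence servers, so every login lands on the same server."""
    digest = hashlib.sha1(user_id.encode("utf-8")).digest()
    return int.from_bytes(digest, "big") % num_servers

# The mapping is deterministic: the same user always reaches the
# same server, and the result stays within the server range.
assert assign_presence_server("tejesh", 8) == assign_presence_server("tejesh", 8)
assert 0 <= assign_presence_server("prasanth", 8) < 8
```

Because SHA-1 output is effectively uniform, this also spreads users roughly evenly over the servers, which is the load-balance assumption used in the cost analysis later.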


Once the path is set up, the mobile user requests the friend list from the presence server in the presence cloud, and finally the request is answered by the presence cloud after completing an efficient search of the buddies' presence information. We analyze the performance complexity of Presence Cloud and two other architectures, a mesh-based scheme and a Distributed Hash Table (DHT)-based scheme. Through simulations, we also compare the performance of the three approaches in terms of the number of messages generated and the search satisfaction, which we use to denote the search response time and the buddy notification time. The results demonstrate that Presence Cloud achieves major performance gains in terms of reducing the number of messages without sacrificing search satisfaction. Thus, Presence Cloud can support a large-scale social network service distributed among thousands of servers on the Internet. Presence Cloud is among the pioneering architectures for mobile presence services. To the best of our knowledge, this is the first work that explicitly designs a presence server architecture that significantly outperforms those based on distributed hash tables. Presence Cloud can also be utilized by Internet social network applications and services that need to replicate or search for mutable and dynamic data among distributed presence servers. Our contribution is an analysis of the scalability problems of distributed presence server architectures, and the definition of a new problem called the buddy-list search problem. Through our mathematical formulation, the scalability problem in the distributed server architectures of mobile presence services is analyzed. Finally, we analyze the performance complexity of Presence Cloud and different designs of distributed architectures, and evaluate them empirically to demonstrate the advantages of Presence Cloud.
We also describe the server architectures of existing presence services, and introduce the buddy-list search problem in distributed presence architectures in large-scale geographically distributed data centers.

III. PRESENCE CLOUD

The past few years have seen a veritable frenzy of research activity in the Internet-scale object searching field, with many designed protocols and proposed algorithms. Most of the previous algorithms address the fixed object searching problem in distributed systems for different intentions. However, people are nomadic, and mobile presence information is more mutable and dynamic; a new design of mobile presence services is needed to address the buddy-list search problem, especially for the demands of mobile social network applications. Presence Cloud is used to construct and maintain a distributed server architecture and can be used to efficiently query the system for buddy-list searches. Presence Cloud consists of three main components that run across a set of presence servers. In the design of Presence Cloud, ideas from P2P systems have been refined into a particular design for mobile presence services.

Presence Cloud server overlay: It organizes presence servers based on the concept of a grid quorum system. So, the server overlay of


Presence Cloud has a balanced load property and a two-hop diameter with O(√n) node degrees, where n is the number of presence servers.

One-hop caching strategy: It is used to reduce the number of transmitted messages and accelerate query speed. All presence servers maintain caches for the buddies hosted by their immediate neighbors.

Directed buddy search: It is based on the directed search strategy. Presence Cloud ensures a one-hop search, so it yields a small constant search latency on average.

The primary abstraction exported by our Presence Cloud is a scalable server architecture for mobile presence services that can be used to efficiently search the desired buddy lists. We illustrate a simple overview of Presence Cloud in Fig. 1. In the mobile Internet, a mobile user can access the Internet and make a data connection to Presence Cloud via 3G or Wi-Fi services. After the mobile user joins and authenticates himself/herself to the mobile presence service, the mobile user is deterministically directed to one of the Presence Servers in the Presence Cloud by using a Secure Hash Algorithm, such as SHA-1. The mobile user opens a TCP connection to the Presence Server (PS node) for control message transmission, particularly for the presence information. After the control channel is established, the mobile user sends a request to the connected PS node for his/her buddy-list search. Our Presence Cloud shall perform an efficient searching operation and return the presence information of the desired buddies to the mobile user.
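The one-hop caching strategy combined with directed buddy search can be sketched as follows (our illustration with hypothetical data structures, not the paper's implementation): each PS node caches the users hosted by its immediate neighbors, so a buddy query is resolved from the local node plus its cache without flooding.

```python
class PSNode:
    """Presence server node holding its own users plus a one-hop
    cache of the users hosted by its direct neighbors."""

    def __init__(self, node_id, local_users):
        self.node_id = node_id
        self.local = set(local_users)
        self.cache = {}  # neighbor_id -> set of that neighbor's users

    def exchange_cache(self, neighbor):
        """One-hop caching: neighbors advertise their user lists
        to each other."""
        self.cache[neighbor.node_id] = set(neighbor.local)
        neighbor.cache[self.node_id] = set(self.local)

    def search_buddies(self, buddy_list):
        """Directed buddy search sketch: resolve as many buddies as
        possible from local users and the one-hop cache, with no
        further forwarding."""
        found = {b for b in buddy_list if b in self.local}
        for users in self.cache.values():
            found |= set(buddy_list) & users
        return found

a = PSNode("A", {"u1", "u2"})
b = PSNode("B", {"u3"})
a.exchange_cache(b)
assert a.search_buddies(["u2", "u3", "u9"]) == {"u2", "u3"}
```

Answering from the cache is what gives the small constant (one-hop) search latency claimed above, at the cost of keeping the neighbor caches fresh.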

IV. EVALUATION

We present a cost analysis of the communication cost of Presence Cloud in terms of the number of messages required to search the buddy information of a mobile user. Note that reducing the number of inter-server communication messages is the most important metric in mobile presence service issues. The buddy-list search problem can be solved by a brute-force search algorithm, which simply searches all the PS nodes in the mobile presence service. In a simple mesh-based design, the algorithm replicates all the presence information at each PS node; hence its search cost, denoted by QMesh, is only one message. On the other hand, the system needs n − 1 messages to replicate a user's presence information to all PS nodes, where n is the number of PS nodes. The communication cost of searching buddies and replicating presence information can be formulated as Mcost = QMesh + RMesh, where RMesh is the communication cost of replicating presence information to all PS nodes. Accordingly, we have Mcost = O(n). In the analysis of Presence Cloud, we assume that the mobile users are distributed equally among all the PS nodes, which is the worst case for the performance of Presence Cloud. Here, the search cost of Presence Cloud is denoted as Qp, which covers the messages for both searching buddy lists and replicating presence information. Because a search message and a replica message can be combined into one single message, the communication cost of replicating, Rp, is zero. It is straightforward to see that the communication cost of searching


buddies and replicating presence information in Presence Cloud is Pcost. However, in Presence Cloud, a PS node not only searches a buddy list and replicates presence information, but also notifies users on the buddy list about the new presence event. Let b be the maximum number of buddies of a mobile user. Then the worst case is when none of the buddies are registered with the PS nodes reached by the search messages and each user on the buddy list is located on a different PS node.
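The message-cost comparison above can be made concrete with a small calculation (our sketch of the analysis; the quorum size constant is illustrative): a mesh replicates each update to all n − 1 other nodes, while a quorum-based design only touches about 2√n nodes and piggybacks replicas on search messages.

```python
import math

def mesh_cost(n):
    """Mesh scheme: QMesh = 1 search message plus RMesh = n - 1
    replica messages, so Mcost = QMesh + RMesh = O(n)."""
    return 1 + (n - 1)

def presence_cloud_cost(n):
    """Quorum sketch: combined search+replica messages go to the
    node's grid quorum of about 2*sqrt(n) servers, and Rp = 0
    because replicas piggyback on searches, so the cost is
    O(sqrt(n))."""
    return 2 * math.isqrt(n)

# The gap widens as the number of PS nodes grows.
for n in (16, 100, 10000):
    print(n, mesh_cost(n), presence_cloud_cost(n))
```

For n = 10,000 PS nodes, the mesh needs on the order of 10,000 messages per update while the quorum design needs on the order of 200, which is the scalability advantage claimed for Presence Cloud.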


V. CONCLUSION

In this paper, we presented Presence Cloud, a scalable server architecture that supports mobile presence services in large-scale social network services. Presence Cloud achieves low search latency and enhances the performance of mobile presence services. The total number of buddy search messages increases substantially with the user arrival rate and the number of presence servers. The growth of social network applications and of mobile device computing capacity motivates exploring user satisfaction with both mobile presence services and mobile devices. Presence Cloud can authenticate a presence server every time the server joins Presence Cloud. The results show that Presence Cloud achieves performance gains in search cost without compromising search satisfaction.

REFERENCES

[1] R.B. Jennings, E.M. Nahum, D.P. Olshefski, D. Saha, Z.-Y. Shae, and C. Waters, “A Study of Internet Instant Messaging and Chat Protocols,” IEEE Network, vol. 20, no. 6, pp. 16-21, July/Aug. 2006.
[2] Z. Xiao, L. Guo, and J. Tracey, “Understanding Instant Messaging Traffic Characteristics,” Proc. IEEE 27th Int'l Conf. Distributed Computing Systems (ICDCS), 2007.
[3] Chi, R. Hao, D. Wang, and Z.-Z. Cao, “IMS Presence Server: Traffic Analysis and Performance Modelling,” Proc. IEEE Int'l Conf. Network Protocols (ICNP), 2008.
[4] Instant Messaging and Presence Protocol IETF Working Group, http://www.ietf.org/html.charters/impp-charter.html, 2012.
[5] Extensible Messaging and Presence Protocol IETF Working Group, http://www.ietf.org/html.charters/xmpp-charter.html, 2012.
[6] Open Mobile Alliance, “OMA Instant Messaging and Presence Service,” 2005.
[7] P. Saint-Andre, “Interdomain Presence Scaling Analysis for the Extensible Messaging and Presence Protocol (XMPP),” IETF Internet draft, 2008.
[8] X. Chen, S. Ren, H. Wang, and X. Zhang, “SCOPE: Scalable Consistency Maintenance in Structured P2P Systems,” Proc. IEEE INFOCOM, 2005.
[9] Quorum-based techniques, http://en.wikipedia.org/wiki/Quorum_(distributed_computing).
[10] Jianming Zhao, Nianmin Yao, Shaobin Cai, and Xiang Li, “Tree-Mesh Based P2P Streaming Data Distribution Schema,” 2012.
[11] Distributed hash tables, http://www.cs.cmu.edu/~dga/15-744/S07/lectures/16-dht.pdf.
[12] Juuso Aleksi Lehtinen, “Mobile Peer-to-Peer over Session Initiation Protocol,” Aug. 2008.
[13] Kundan Singh and Henning Schulzrinne, “SIPpeer: A Session Initiation Protocol (SIP)-Based Peer-to-Peer Internet Telephony Client Adaptor,” Dept. of Computer Science, Columbia University.
[14] Michael Piatek, Tomas Isdal, Arvind Krishnamurthy, and Thomas Anderson, “One Hop Reputations for Peer to Peer File Sharing Workloads”.
[15] Brent Hecht, Jaime Teevan, Meredith Ringel Morris, and Dan Liebling, “SearchBuddies: Bringing Search Engines into the Conversation,” 2012.


Adaptive and Well-Organized Mobile Video Streaming in Cloud over Public Networks

1B. Renuka, 2Ch. Madhu Babu
1PG Scholar, Audisankara College of Engineering and Technology, Gudur. Email: brenuka1990@gmail.com
2Assoc. Professor (CSE), Audisankara College of Engineering and Technology, Gudur. Email: madhugist@gmail.com

Abstract: Due to the high demand for video traffic over mobile networks, the wireless link often fails to keep pace with the demand. The gap between the traffic demand and the link capacity, along with time-varying link conditions, results in poor service quality of video streaming over mobile networks, such as long buffering times and intermittent disruptions. Leveraging cloud computing technology, we propose a new mobile video streaming framework, dubbed AMES-Cloud, that has two parts: Adaptive Mobile Video Streaming (AMoV) and Efficient Social Video Sharing (ESoV). AMoV and ESoV construct a private agent to provide video streaming services efficiently for every mobile user. For a given user, AMoV lets her private agent adaptively adjust her streaming flow with a scalable video coding technique based on feedback about link quality. Similarly, ESoV monitors the social network contacts among mobile users, and their private agents attempt to prefetch video content in advance. We implement a prototype of the AMES-Cloud framework to demonstrate its performance.

INDEX TERMS: Scalable Video Coding, Adaptive Video Streaming, Mobile Networks, Social Video Sharing, Cloud Computing.

1. INTRODUCTION

Cloud computing is the lease of resources, through which users can use the resources depending upon their requirements and pay based on usage. Through cloud computing the user can decrease cost and can use the resources at any time. There are three types of cloud, as shown in Fig. 1:


i) Public cloud
ii) Private cloud
iii) Hybrid cloud

Public cloud: A public cloud, or external cloud, is one in which the resources are leased on a self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

Private cloud: A private cloud, also called an internal cloud, is used to describe private network offerings of cloud computing.

Hybrid cloud: A hybrid cloud is one which contains multiple internal or external clouds, i.e., any number of internal and external clouds.

Fig 1: Types of services

AMES is based on platform as a service. Platform as a service (PaaS) is a category of cloud computing services that provides a computing platform and a solution stack as a service [1]. Along with software as a service (SaaS) and infrastructure as a service (IaaS), it is a service model of cloud computing. In this model, the consumer creates the software using tools and/or libraries from the provider. The consumer also controls software deployment and configuration settings. PaaS offerings facilitate the deployment of applications without the cost and complexity of buying and managing the underlying hardware and software and provisioning hosting capabilities.

Fig 2 shows the architecture of a typical cloud at a high level. An end user Bob connects to the cloud via a portal from his browser. Alternatively, a user Alice can choose to directly connect to the cloud manager via a command line interface similar to that used in EC2. A cloud provides three types of resources: a collection of virtual machine (VM) images, a set of computer servers on which the VM images can be run, and optionally a storage pool to store persistent user data. The users make requests, and the cloud manager authenticates the users and keeps track of them and their requests; using the streaming techniques, AMoV will adjust the streaming flow with a video coding technique and increase the quality.


Fig 2: cloud architecture

2. LITERATURE REVIEW

Several authors have developed techniques for storing and maintaining data, and for addressing security issues, in the cloud. The quality of service of mobile video is based on two factors: scalability and adaptability. A literature survey is the most important step in the software development process. Before developing the tool it is necessary to determine the time factor, economy, and company strength. Once these things are satisfied, the next steps are to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books, or from websites. Before building the system, the above considerations are taken into account for developing the proposed system.

2.1. Scalability: Mobile video streaming services should support a wide spectrum of mobile devices; they have different video resolutions, different computing powers, different wireless links (like 3G and LTE) and so on. Also, the available link capacity of a mobile device may vary over time and space depending on its signal strength, other users' traffic in the same cell, and link condition variation. Storing multiple versions (with different bit rates) of the same video content may incur high overhead in terms of storage and communication.

2.2. Adaptability: Traditional video streaming techniques designed by considering relatively stable traffic links between servers and users perform poorly in mobile environments [11]. Thus the fluctuating wireless link status should be properly dealt with to provide "tolerable" video streaming services. To address this issue, we have to adjust the video bit rate, adapting to the currently available time-varying link bandwidth of each mobile


user. Such adaptive streaming techniques can effectively reduce packet losses.

2.2.1. Adaptive Video Streaming Techniques

In adaptive streaming, the video traffic rate is adjusted on the fly so that a user can experience the maximum possible video quality based on his or her link's time-varying bandwidth capacity. There are mainly two types of adaptive streaming techniques, depending on whether the adaptivity is controlled by the client or the server. Microsoft's Smooth Streaming is a live adaptive streaming service which can switch among different bit-rate segments encoded with configurable bit rates and video resolutions at the servers, while clients dynamically request videos based on local monitoring of link quality. Adobe and Apple also developed client-side HTTP adaptive live streaming solutions.

2.2.2. Mobile Cloud Computing Techniques

Cloud computing has been well positioned to provide video streaming services, especially in the wired Internet, because of its scalability and capability. For example, quality-assured bandwidth auto-scaling for VoD streaming based on cloud computing has been proposed, and the CALMS framework is a cloud-assisted live media streaming service for globally distributed users. However, extending cloud computing-based services to mobile environments requires more factors to consider: wireless link dynamics, user mobility, and the limited capability of mobile devices. More recently, new designs for users on top of mobile cloud computing environments have been proposed, which virtualize private agents that are in charge of satisfying the requirements (e.g., QoS) of individual users, such as Cloudlets and Stratus.

Fig. 3. An illustration of the AMES-Cloud framework

Video usage and images play a vital role in communication. Traditional networking and service providers fail to provide quality-centered and reliable service to mobile users concerning the media data.
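The client-controlled adaptive streaming described in Section 2.2.1 can be sketched as follows (illustrative only; the bit-rate ladder and safety margin are hypothetical values, not taken from any of the systems named above): the client measures its recent throughput and requests the highest bit-rate segment that fits under it, with a safety margin against link fluctuation.

```python
def pick_bitrate(throughput_kbps, ladder=(350, 700, 1500, 3000), margin=0.8):
    """Pick the highest encoded bit rate not exceeding the measured
    link throughput scaled by a safety margin; fall back to the
    lowest rung when the link is too slow for any of them."""
    budget = throughput_kbps * margin
    usable = [r for r in ladder if r <= budget]
    return max(usable) if usable else min(ladder)

# A 2 Mbps link with a 0.8 margin gives a 1600 kbps budget, so the
# 1500 kbps rendition is chosen; a 300 kbps link falls back to the
# lowest 350 kbps rendition.
assert pick_bitrate(2000) == 1500
assert pick_bitrate(300) == 350
```

Re-running this decision for every segment is what lets the stream track a time-varying wireless link instead of stalling on a fixed bit rate.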

concerning with the media data. The

Due to the fast development of the

problems that leads to the poor services

mobile communication technology, more

from the service providers would be low

people are getting addicted to video

bandwidth

efficient

streaming over phones. Over the few years,

transfer of video to the user, the disruption

video streaming is becoming challenging

of video streaming also occurs due to the

over wireless links than on wired links. The increasing video traffic demands are overwhelming the wireless link capacity, and the low bandwidth affects mobile users, who often suffer from disruptions and very long buffering times while receiving video through networks like 3G or 4G due to short bandwidth and link fluctuations. So, it is imperative to improve the services of video streaming over mobile networks. Scalability and adaptability are the two aspects to improve in order to raise the quality of video streaming over mobile networks.

The buffer time of the video on mobile devices that move from place to place affects smooth streaming, and also the sharing of video from one user to another over social media. Our survey shows the functioning of various methods and architectures which used the cloud to provide an effective solution for better service to the users. AMES is a cloud architecture built specially to provide video service to the user. The study has come up with an optimal solution, proposing a video cloud which collects the video from video service providers and provides reliable service to the user [1]. The network providers support video downloads, but with some delays due to network dynamics, so this technique is used to remove jitters and provide video on demand [3]. YouTube-style cloud-centered streaming techniques can be seen as solutions for different mobiles, which shows realistic work relevant to streaming methods with the RTMP protocol family and solutions for iPhone, Android, smart mobile phones, Windows and Blackberry phones, etc.

Scalable video coding (SVC) and adaptive streaming techniques can be combined together to accomplish the best possible quality of video streaming services, so that we can adjust the SVC layers depending on the current link status. The cloud computing technique is ready to provide scalable resources to service providers and process offloading to mobile users, so cloud data centers can provision large-scale real-time video services. In the cloud, more than one agent instance can be maintained dynamically and effectively according to mobile user demands.
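The SVC layer adjustment described above can be sketched in code. This is an illustrative sketch only, not from the paper; the layer bitrates and the safety margin are assumed values.

```python
# Sketch (not from the paper): decide how many SVC layers to stream
# based on the measured link bandwidth. Bitrates are hypothetical.
BASE_LAYER_KBPS = 300                  # base layer: always required
ENHANCEMENT_KBPS = [200, 250, 400]     # additional enhancement layers

def select_svc_layers(link_kbps, safety=0.9):
    """Return the number of layers (>= 1) whose total rate fits the link."""
    budget = link_kbps * safety        # keep headroom for link fluctuations
    total, layers = BASE_LAYER_KBPS, 1
    for extra in ENHANCEMENT_KBPS:
        if total + extra > budget:
            break
        total += extra
        layers += 1
    return layers

print(select_svc_layers(400))   # weak 3G-like link -> base layer only
print(select_svc_layers(1200))  # better link -> more layers
```

A client would re-run this selection whenever it reports a new link status, dropping or adding enhancement layers as the budget changes.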

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

150

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

The social network services (SNS) on mobile networks are becoming increasingly popular. In SNSs, mobile users might post, comment on and share videos which can be viewed by their friends. So, we are inspired to exploit the relationship between the mobile users and their SNS activities in order to prefetch the first part of the video during streaming.

Fig. 4. Context architecture

3. CLOUD FRAMEWORK

As shown in the above figure, the video streaming and storing system in the cloud is called the video cloud (VC). Within the video cloud there is a video base (VB), which is responsible for storing the popular video clips. tempVB is a temporary video base utilized to cache new popular videos for mobile users, while counting the access frequency of each video. The VC keeps executing a collector to look for videos which are already popular at the video service provider (VSP); it re-encodes the collected videos into scalable video coding format and saves them in tempVB. A sub video cloud (subVC) is dynamically created if there is any link of video demand from a mobile user. A sub video base (subVB) is present in the subVC and stores segments of recently fetched videos. The subVC contains encoding functions, and if a mobile user requests a new video which is in neither the subVB nor the VB in the VC, the subVC will fetch, encode and move the video. During the streaming of videos, mobile users report the link conditions to the subVC, and it offers adaptive streams. There is a temporary storage in every mobile device known as the local video base (localVB), used for prefetching and buffering.

4. SOCIAL AWARE VIDEO PREFETCHING

In social network services, the mobile users can subscribe to their friends and content publishers, and there are numerous types of social activities.
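The storage hierarchy of Section 3 (localVB, subVB, VB, and fetching from the VSP) suggests a simple lookup order, sketched below. The function and data-structure names, and the fetch_from_vsp callback, are hypothetical; the paper does not give code.

```python
# Sketch (assumption, not the paper's code): the lookup order a subVC
# could follow when a mobile user requests a video, per Section 3.
def locate_video(video_id, localVB, subVB, VB, fetch_from_vsp):
    if video_id in localVB:          # already prefetched on the device
        return "localVB"
    if video_id in subVB:            # cached in the user's sub video cloud
        return "subVB"
    if video_id in VB:               # popular clip stored in the video cloud
        return "VB"
    # Not cached anywhere: fetch from the VSP, re-encode to SVC, cache it.
    subVB[video_id] = fetch_from_vsp(video_id)
    return "VSP"

subVB = {}
src = locate_video("v1", localVB=set(), subVB=subVB, VB={"v9"},
                   fetch_from_vsp=lambda vid: f"svc({vid})")
print(src, subVB)
```

After the first miss, the segment lives in the subVB, so a repeated request for the same video is served from the cache.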


So it is required for us to define different levels of strength for those socially oriented activities, to indicate the many different possibilities that the videos shared by one mobile user can be viewed by the receiver of his or her sharing activities; the sub video clouds may then engage in prefetching into the subVB in the background and may transfer the video to the mobile user's localVB. Because, after one shares a video, there can be a bit of delay before the receiver learns of the sharing and starts to watch, advance prefetching will not affect the mobile users in most cases. A mobile user may thus play the videos without any delay due to buffering, as the first part, or even the entire video, is locally prefetched already.

Cloud computing promises lower costs, rapid scaling, easier maintenance, and service availability anywhere, anytime; a key challenge is how to ensure and build confidence that the cloud can handle user data securely. A recent Microsoft survey found that "58 percent of the public and 86 percent of business leaders are excited about the possibilities of cloud computing. But more than 90 percent of them are worried about security, availability, and privacy of their data as it rests in the cloud."

In this technique we propose an adaptive mobile video streaming and sharing framework, called AMES-Cloud, which efficiently stores videos in the clouds (VC) and utilizes cloud computing to construct a private agent (subVC) for each mobile user, to try to offer "non-terminating" video streaming that adapts to the fluctuation of link quality based on the Scalable Video Coding ability. AMES-Cloud can further seek to offer a "non-buffering" experience of video streaming by background pushing functions among the VB, subVBs and localVBs of mobile users. We assess AMES-Cloud by prototype implementation and show that the cloud computing technique brings significant improvement to the adaptivity of the mobile streaming. We disregard the cost of the encoding workload in the cloud while implementing the prototype.

5. IMPLEMENTATION

This method requires three different steps:
1. Uploading and rating videos
2. User details
3. Rate videos

5.1. Uploading and rating video: Here we can upload the videos, and we can also give a rating to the videos depending upon the priorities or the usage.
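The background pushing of Section 4 can be sketched as follows. This is an assumed illustration: the on_share helper and the data structures are hypothetical, and a real subVC would push segments asynchronously over the network rather than into in-memory dictionaries.

```python
# Sketch (hypothetical API): when a user shares a video in an SNS,
# the friends' subVCs can prefetch its first segment in the background,
# so playback later starts without buffering.
def on_share(video_id, first_segment, friends, local_vbs):
    """Push the first segment of a shared video to each friend's localVB."""
    for friend in friends:
        vb = local_vbs.setdefault(friend, {})
        vb.setdefault(video_id, first_segment)  # keep any existing copy

local_vbs = {}
on_share("v7", b"seg0", friends=["alice", "bob"], local_vbs=local_vbs)
print(sorted(local_vbs))
```

Because only the first segment is pushed, the prefetch cost stays small even when a video is shared with many friends.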


5.2. User details: In this step we maintain the details of the users and also determine the usage of each user, keeping track of the videos each user requests and accounting for them.

5.3. Rate videos: This avoids unexpected videos: videos from users are accepted or rejected, and only after that can users view (or not view) their own videos.

6. CONCLUSION

In this paper we have discussed our proposal for cloud-assisted adaptive mobile video streaming and prefetching, which stores the videos efficiently in the clouds and constructs a private agent (subVC) for each active mobile user in order to attempt to give "non-terminating" streaming of videos by adapting to changes in link quality, depending on the scalable video coding technique, and to try to provide "non-buffering" video streaming by background prefetching based on the tracking of the interactions of mobile users in their SNSs. We evaluated the framework by prototype implementation, and showed successfully that the cloud computing method brings improvement to the adaptability and scalability of the mobile streaming, and to the efficiency of intelligent prefetching.

7. REFERENCES

[1] M. Sona, D. Daniel, and S. Vanitha, "A Survey on Efficient Video Sharing and Streaming in Cloud Environment Using VC."
[2] "AMES-Cloud: A Framework of Adaptive Mobile Video Streaming and Efficient Social Video Sharing in the Clouds."
[3] Z. Huang, C. Mei, L. E. Li, and T. Woo, "CloudStream: Delivering High-Quality Streaming Videos through a Cloud-Based SVC Proxy."
[4] CISCO, "Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2011-2016," Tech. Rep., 2012.
[5] Y. Li, Y. Zhang, and R. Yuan, "Measurement and Analysis of a Large Scale Commercial Mobile Internet TV System," in ACM IMC, pp. 209-224, 2011.
[6] T. Taleb and K. Hashimoto, "MS2: A Novel Multi-Source Mobile-Streaming Architecture," IEEE Transactions on Broadcasting, vol. 57, no. 3, pp. 662-673, 2011.
[7] X. Wang, S. Kim, T. Kwon, H. Kim, and Y. Choi, "Unveiling the BitTorrent Performance in Mobile WiMAX Networks," in Passive and Active Measurement Conference, 2011.


[8] A. Nafaa, T. Taleb, and L. Murphy, "Forward Error Correction Strategies for Media Streaming over Wireless Networks," IEEE Communications Magazine, vol. 46, no. 1, pp. 72-79, 2008.
[9] J. Fernandez, T. Taleb, M. Guizani, and N. Kato, "Bandwidth Aggregation-Aware Dynamic QoS Negotiation for Real-Time Video Applications in Next-Generation Wireless Networks," IEEE Transactions on Multimedia, vol. 11, no. 6, pp. 1082-1093, 2009.
[10] T. Taleb, K. Kashibuchi, A. Leonardi, S. Palazzo, K. Hashimoto, N. Kato, and Y. Nemoto, "A Cross-Layer Approach for an Efficient Delivery of TCP/RTP-Based Multimedia Applications in Heterogeneous Wireless Networks," IEEE Transactions on Vehicular Technology, vol. 57, no. 6, pp. 3801-3814, 2008.
[11] Y. Li, Y. Zhang, and R. Yuan, "Measurement and Analysis of a Large Scale Commercial Mobile Internet TV System," in ACM IMC, pp. 209-224, 2011.
[12] Q. Zhang, L. Cheng, and R. Boutaba, "Cloud Computing: State-of-the-Art and Research Challenges," Journal of Internet Services and Applications, vol. 1, no. 1, pp. 7-18, Apr. 2010.
[13] F. Wang, J. Liu, and M. Chen, "CALMS: Cloud-Assisted Live Media Streaming for Globalized Demands with Time/Region Diversities," in IEEE INFOCOM, 2012.
[14] H. T. Dinh, C. Lee, D. Niyato, and P. Wang, "A Survey of Mobile Cloud Computing: Architecture, Applications, and Approaches," Wiley Journal of Wireless Communications and Mobile Computing, Oct. 2011.
[15] S. Chetan, G. Kumar, K. Dinesh, K. Mathew, and M. A. Abhimanyu, "Cloud Computing for Mobile World," Tech. Rep., 2010.
[16] N. Davies, "The Case for VM-Based Cloudlets in Mobile Computing," IEEE Pervasive Computing, vol. 8, no. 4, pp. 14-23, 2009.
[17] B. Aggarwal, N. Spring, and A. Schulman, "Stratus: Energy-Efficient Mobile Communication Using Cloud Support," in ACM SIGCOMM DEMO, 2010.
S. Goel, "Cloud-Based Mobile Video Streaming Techniques."


Identifying And Preventing Resource Depletion Attack In Mobile Sensor Network V.Sucharitha Associate Professor jesuchi78@yahoo.com Audisankara college of engineering and technology

M.Swapna M.Tech swapna.12b2@gmail.com

ABSTRACT: Ad-hoc low-power wireless networks are an inspiring research direction in sensing and pervasive computing. Previous security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These "Vampire" attacks are not specific to any one protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are vulnerable to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages.

1. INTRODUCTION: In the last couple of years wireless communication has become of such fundamental importance that a world without it is no longer imaginable for many of us. Beyond the established technologies such as mobile phones and WLAN, new approaches to wireless communication are emerging; one of them is so-called ad hoc and sensor networks. Ad hoc and sensor networks are formed by autonomous nodes communicating via radio without any additional backbone infrastructure. Ad-hoc wireless sensor networks (WSNs) promise exciting new applications in the near future, such as omnipresent on-demand computing power, continuous connectivity, and instantly deployable communication for the military and first responders. Such networks already monitor environmental conditions, factory performance, and troop deployment, to name a few applications. As WSNs become more and more crucial to the everyday functioning of people and organizations, availability faults become less tolerable: lack of availability can make the difference between business as usual and lost productivity, power outages, environmental disasters, and even lost lives; thus high availability of these


networks is a critical property, and should hold even under malicious conditions. Due to their ad-hoc organization, wireless ad-hoc networks are particularly vulnerable to denial of service (DoS) attacks, and a great deal of research has been done to enhance survivability. While these schemes can prevent attacks on the short-term availability of a network, they do not address attacks that affect long-term availability: the most permanent denial of service attack is to entirely deplete nodes' batteries. This is an instance of a resource depletion attack, with battery power as the resource of interest. In this paper we consider how routing protocols, even those designed to be secure, lack protection from these attacks, which we call Vampire attacks, since they drain the life from network nodes. These attacks are distinct from previously studied DoS, reduction of quality (RoQ), and routing infrastructure attacks, as they do not disrupt immediate availability, but rather work over time to entirely disable a network. While some of the individual attacks are simple, and power-draining and resource exhaustion attacks have been discussed before, prior work has been mostly confined to other levels of the protocol stack, e.g. medium access control (MAC) or application layers, and to our knowledge there is little discussion, and no thorough analysis or mitigation, of routing-layer resource exhaustion attacks. Vampire attacks are not protocol-specific, in that they do not rely on design properties or implementation faults of particular routing protocols, but rather exploit general properties of protocol classes such as link-state, distance-vector, source routing, and geographic and beacon routing. Neither


do these attacks rely on flooding the network with large amounts of data; rather, they try to transmit as little data as possible to achieve the largest energy drain, preventing a rate-limiting solution. Since Vampires use protocol-compliant messages, these attacks are very difficult to detect and prevent. This paper makes three primary contributions. First, we thoroughly evaluate the vulnerabilities of existing protocols to routing-layer battery depletion attacks. We observe that security measures to prevent Vampire attacks are orthogonal to those used to protect routing infrastructure, and so existing secure routing protocols such as Ariadne, SAODV, and SEAD do not protect against Vampire attacks. Existing work on secure routing attempts to ensure that adversaries cannot cause path discovery to return an invalid network path, but Vampires do not disrupt or alter discovered paths, instead using existing valid network paths and protocol-compliant messages. Protocols that maximize power efficiency are also inappropriate, since they rely on cooperative node behavior and cannot optimize out malicious action. Second, we show simulation results quantifying the performance of several representative protocols in the presence of a single Vampire (insider adversary). Third, we modify an existing sensor network routing protocol to provably bound the damage from Vampire attacks during packet forwarding. 1.1. Wireless Ad Hoc Network: An ad hoc wireless network is a collection of wireless mobile nodes that self-configure to form a network without the aid of any established infrastructure, as shown in the figure. Without an inherent infrastructure, the mobiles handle the


necessary control and networking tasks by themselves, generally through the use of distributed control algorithms. Multihop connections, whereby intermediate nodes send the packets toward their final destination, are supported to allow for efficient wireless communication between parties that are relatively far apart. Ad hoc wireless networks are highly appealing for many reasons. They can be rapidly deployed and reconfigured. They can be tailored to specific applications, as implied by Oxford's definition. They are also highly robust due to their distributed nature, node redundancy, and the lack of single points of failure.

Fig: Adhoc Network Structure

Existing work on secure routing attempts to ensure that adversaries cannot cause path discovery to return an invalid network path, but Vampires do not disrupt or alter discovered paths, instead using existing valid network paths and protocol-compliant messages. Protocols that maximize power efficiency are also inappropriate, since they rely on cooperative node behavior and cannot optimize out malicious action.

2. LITERATURE REVIEW:

A literature survey is the most important step in the software development process. Before developing the tool it is necessary to determine the time factor, the economy and the company strength. Once these things are satisfied, the next steps are to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books or from websites. Before building the system, the above considerations are taken into account in developing the proposed system.

A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, sound and pressure, and cooperatively pass their data through the network to a main location. The more modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and so on.
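The flooding propagation mentioned above can be sketched as a small simulation. This is not from the paper; it simply counts one rebroadcast per node, which is exactly the per-node energy cost a resource depletion attack tries to inflate.

```python
# Sketch (illustrative): flooding one message over a WSN graph and
# counting transmissions per node as a proxy for energy drain.
from collections import deque

def flood(adj, source):
    """Deliver one message by flooding; return transmissions per node."""
    tx = {n: 0 for n in adj}
    seen, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        tx[node] += 1                 # node rebroadcasts the message once
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return tx

adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(flood(adj, "a"))   # every reachable node transmits exactly once
```

In an honest network each node transmits once per message; a Vampire adversary aims to multiply this count per packet it injects.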



The WSN is built of "nodes": from a few to several hundreds or even thousands, where each node is connected to one (or sometimes several) sensors. Each such sensor network node typically has several parts: a radio transceiver with an internal antenna or a connection to an external antenna, a microcontroller, an electronic circuit for interfacing with the sensors, and an energy source, usually a battery or an embedded form of energy harvesting. A sensor node might vary in size from that of a shoebox down to the size of a grain of dust, although functioning "motes" of genuinely microscopic dimensions have yet to be created. The cost of sensor nodes is similarly variable, ranging from a few to hundreds of dollars, depending on the complexity of the individual sensor nodes. Size and cost constraints on sensor nodes result in corresponding constraints on resources such as energy, memory, computational speed and communications bandwidth. The topology of a WSN can vary from a simple star network to an advanced multi-hop wireless mesh network. The propagation technique between the hops of the network can be routing or flooding.

3. IMPLEMENTATION:

As a prerequisite, all nodes cooperatively build a Chord overlay network over the sensor network. Cloned nodes may not all participate in this procedure, but that does not give them any advantage in avoiding detection. The construction of the overlay network is independent of node clone detection. As a result, nodes possess the information of their direct predecessor and successor in the Chord ring. In addition, each node caches the information of its g consecutive successors in its successors table. Many Chord systems utilize this kind of cache mechanism to reduce the communication cost and enhance system robustness. More importantly, in our protocol the successors table facility contributes to the economical selection of inspectors. One detection round consists of three stages.

Stage 1: Initialization. To activate all nodes starting a new round of node clone detection, the initiator uses a broadcast authentication scheme to release an action message including a monotonously increasing nonce, a random round seed, and an action time. The nonce is intended to prevent adversaries from launching a DoS attack by repeatedly broadcasting action messages.
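A toy model of the Chord ring and the g-successors table described above is sketched below. This is an illustrative simplification assumed for exposition (no finger tables, no hashing), not the protocol's actual implementation.

```python
# Sketch (illustrative): a minimal Chord-style ring where each node
# knows its successor, and g consecutive successors form the
# "successors table" used to select inspectors economically.
def ring(ids):
    """Sorted ids form the ring; return {id: successor_id}."""
    s = sorted(ids)
    return {a: s[(i + 1) % len(s)] for i, a in enumerate(s)}

def successors_table(succ, node, g):
    """Return the g consecutive successors of `node` on the ring."""
    out, cur = [], node
    for _ in range(g):
        cur = succ[cur]
        out.append(cur)
    return out

succ = ring([5, 1, 9, 3])
print(succ[9])                        # largest id wraps to the smallest
print(successors_table(succ, 1, 2))   # the g = 2 cached successors of 1
```

Caching g successors lets a node tell, without extra messages, whether a message's destination will be reached within the next hop, which is what the inspector criterion below relies on.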


The action message carries the nonce, the random round seed, the action time, and the initiator's signature.

Stage 2: Claiming neighbors' information. Upon receiving an action message, a node verifies whether the message nonce is greater than the last nonce and whether the message signature is valid. If both checks pass, the node updates the nonce and stores the seed. At the designated action time, the node operates as an observer that generates a claiming message for each neighbor (examinee) and transmits the message through the overlay network with respect to the claiming probability. The claiming message by an observer for an examinee is constructed from their IDs and their respective locations. Nodes can start transmitting claiming messages at the same time, but then the huge traffic may cause serious interference and degrade the network capacity. To relieve this problem, we may specify a sending period, during which nodes randomly pick a transmission time for every claiming message.

Stage 3: Processing claiming messages. A claiming message will be forwarded to its destination node via several Chord intermediate nodes. Only those nodes in the overlay network layer (i.e., the source node, the Chord intermediate nodes, and the destination node) need to process a message, whereas other nodes along the path simply route the message to temporary targets. Algorithm 1, which handles a message, is the kernel of our DHT-based detection protocol. If the algorithm returns NIL, the message has arrived at its destination. Otherwise, the message will be subsequently forwarded to the next node with the ID that is returned by Algorithm 1.

Criteria for determining inspectors: during the handling of a message in Algorithm 1, the node acts as an inspector if one of the following conditions is satisfied.


1) This node is the destination node of the claiming message.
2) The destination node is one of the g successors of the node; in other words, the destination node will be reached in the next Chord hop.

While the first criterion is intuitive, the second one is subtle and critical for the protocol performance. By Algorithm 1, roughly a constant fraction of all claiming messages related to the same examinee's ID will pass through one of the predecessors of the destination. Thus, those nodes are much more likely to be able to detect a clone than randomly selected inspectors. As a result, this criterion for deciding inspectors can increase the average number of witnesses at a little extra memory cost. We will theoretically quantify those performance measurements later.

4. ALGORITHMS:

In Algorithm 1, to examine a message for node clone detection, an inspector will invoke Algorithm 2, which compares the message with previously inspected messages that are buffered in the cache table. Naturally, all records in the cache table should have different examinee IDs, as implied in Algorithm 2. If a clone is detected, which means that there exist two messages claiming the same examinee ID at different locations, the witness node then broadcasts the evidence message to notify the whole network. All integrity nodes verify the evidence and stop communicating with the cloned nodes. To prevent cloned nodes from joining the network in the future, a revocation list of compromised node IDs may be maintained by nodes individually.
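The cache-table comparison performed by Algorithm 2 can be approximated by the sketch below. It is a simplification assumed from the description above: a clone is flagged when the same examinee ID is claimed at two different locations.

```python
# Sketch (not the paper's Algorithm 2): an inspector buffers claiming
# messages and flags a clone when two messages carry the same examinee
# ID but different locations.
def inspect(cache, examinee_id, location):
    """Return True if a clone is detected; otherwise cache the claim."""
    if examinee_id in cache and cache[examinee_id] != location:
        return True                  # same ID claimed at two locations
    cache.setdefault(examinee_id, location)
    return False

cache = {}
print(inspect(cache, "n42", (2, 3)))   # first claim: cached, no clone
print(inspect(cache, "n42", (2, 3)))   # duplicate claim, same location
print(inspect(cache, "n42", (7, 1)))   # conflicting location: clone
```

Keeping one record per examinee ID is what makes the cache small, at the cost of only catching conflicts against the first buffered claim.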


It is worth noting that the claiming and evidence messages are authenticated by the observers' signatures, respectively. Therefore, the witness does not need to sign the evidence message. If a malicious node tries to launch a DoS attack by broadcasting a bogus evidence message, every integrity node receiving it can immediately detect the wicked behavior by verifying the signatures before forwarding it to other nodes.

The DHT-based detection protocol can be applied to general sensor networks, and its security level is remarkable, as cloned nodes will be caught by one deterministic witness plus several probabilistic witnesses. However, the message transmission over a Chord overlay network incurs considerable communication cost, which may not be desirable for some sensor networks that are extremely sensitive to energy consumption. To fulfill this challenge, we propose the randomly directed exploration (RDE) protocol, which tremendously reduces the communication cost and presents an optimal storage expense with adequate detection probability.

The RDE protocol shares a major merit with broadcasting detection: every node only needs to know and buffer a neighbor-list containing all neighbors' IDs and locations. In both detection procedures, every node constructs a claiming message with a signed version of its neighbor-list, and then tries to deliver the message to others, which will compare it with their own neighbor-lists to detect clones. For a dense network, broadcasting will drive all neighbors of the cloned nodes to find the attack, but in fact one witness that successfully catches the clone and then notifies the entire network would suffice for the detection purpose. To achieve that in a communicatively efficient way, we bring several mechanisms into the protocol and effectively construct a multicast routing. First, a claiming message needs to provide a maximal hop limit, and initially it is sent to a random neighbor. Then, subsequent message transmission will roughly maintain a line. The line transmission property helps a message go through the network as fast as possible from a locally optimal perspective. In addition, we introduce a border determination mechanism to significantly reduce the communication cost.
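The neighbor-list comparison at the heart of RDE can be sketched as follows. This is an assumed illustration; signatures and message delivery are omitted.

```python
# Sketch (illustrative): a receiving node compares its own neighbor-list
# with the one carried in a claiming message. A node ID that appears in
# both lists with different locations implies a clone.
def compare_neighbor_lists(mine, claimed):
    """Each list maps node_id -> location; return IDs claimed twice."""
    return [nid for nid, loc in claimed.items()
            if nid in mine and mine[nid] != loc]

mine = {"n1": (0, 0), "n2": (4, 4)}
claimed = {"n2": (9, 9), "n3": (1, 1)}
print(compare_neighbor_lists(mine, claimed))   # n2 appears at two places
```

Since every node already stores its neighbor-list for normal operation, this check adds no storage beyond that single list, which is the "major merit" shared with broadcasting detection.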


We can do all of this because every node is aware of its neighbors' locations, which is a basic assumption for all witness-based detection protocols but is rarely utilized by other protocols.

4.1 Protocol description: One round of clone detection is still activated by the initiator. Subsequently, at the designated action time, each node creates its own neighbor-list including the neighbors' IDs and locations, which constitutes the sole storage consumption of the protocol. Then, as an observer for all its neighbors, it starts to generate a claiming message containing its own ID, its location, and its signed neighbor-list, together with a time-to-live field ttl (a.k.a. maximum message hops). Since the ttl will be altered by intermediate nodes during transmission, it should not be authenticated. The observer will deliver the claiming message r times. Each time, the node transmits it to a random neighbor. Note that r can be a real number; accordingly, an observer transmits its claiming message at least ⌊r⌋ times, up to ⌈r⌉ times, and on average r times.

When an intermediate node receives a claiming message, it launches the procedure described by pseudo code in Algorithm 3 to process the message. During the processing, the node compares its own neighbor-list to the neighbor-list in the message, checking whether there is a clone. Similarly, if a clone is detected, the node, as an inspector, will broadcast an evidence message to notify the whole network such that the cloned nodes are expelled from the sensor network. To deal with routing, a node decreases the message's ttl by 1 and discards the message if the ttl reaches zero.
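The ttl handling and the r-fold transmission just described can be sketched as below. The helper names are hypothetical; only the floor/ceil behavior for fractional r and the drop-at-zero rule follow the text.

```python
# Sketch (hypothetical helpers): an observer sends its claiming message
# r times (r may be fractional); intermediate nodes decrement the ttl
# and drop the message when it reaches zero.
import math
import random

def transmissions(r, rng=random.random):
    """Send floor(r) or ceil(r) times so the expected count is r."""
    base = math.floor(r)
    return base + (1 if rng() < r - base else 0)

def forward(msg):
    """Decrement ttl; return False (drop) when it reaches zero."""
    msg["ttl"] -= 1
    return msg["ttl"] > 0

msg = {"ttl": 2}
print(forward(msg))                          # still forwarded
print(forward(msg))                          # ttl hit zero: discarded
print(transmissions(2.0, rng=lambda: 0.99))  # integral r: exactly 2 sends
```

Because the ttl is mutable in transit it is left out of the signed portion of the message, exactly as the text argues.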


Essentially, Algorithm 4 contains the following three mechanisms.

4.2 Deterministic directed transmission: When a node receives a claiming message from the previous node, the ideal direction can be calculated. In order to achieve the best effect of line transmission, the next destination should be the neighbor closest to the ideal direction.

4.3 Network border determination: This takes the network shape into consideration to reduce communication cost. In many sensor network applications there exist outside borders of the network due to physical constraints. When reaching some border of the network, the claiming message can be directly discarded. In our proposal for local border determination, another parameter is used.

4.4 Target range: This is used along with the ideal direction to determine a target zone. When no neighbor is found in this zone, the current node will conclude that the message has reached a border, and thus throw it away.

Fig: Loose source routing performance compared to optimal, in a network with diameter slightly above 10. The dashed trend line represents the expected path length when nodes store log N local state, and the solid trend line shows actual observed performance.

5. CONCLUSION:

We defined Vampire attacks, a new class of resource consumption attacks that use


routing protocols to permanently disable ad-hoc wireless sensor networks by depleting nodes' battery power. These attacks do not depend on particular protocols or implementations, but rather expose vulnerabilities in a number of popular protocol classes. We showed a number of proof-of-concept attacks against representative examples of existing routing protocols using a small number of weak adversaries, and measured their attack success on a randomly generated topology of 30 nodes.

REFERENCES:

[1] "The Network Simulator - ns-2," http://www.isi.edu/nsnam/ns, 2012.
[2] I. Aad, J.-P. Hubaux, and E.W. Knightly, "Denial of Service Resilience in Ad Hoc Networks," Proc. ACM MobiCom, 2004.
[3] G. Acs, L. Buttyan, and I. Vajda, "Provably Secure On-Demand Source Routing in Mobile Ad Hoc Networks," IEEE Trans. Mobile Computing, vol. 5, no. 11, pp. 1533-1546, Nov. 2006.
[4] T. Aura, "DoS-Resistant Authentication with Client Puzzles," Proc. Int'l Workshop Security Protocols, 2001.
[5] J. Bellardo and S. Savage, "802.11 Denial-of-Service Attacks: Real Vulnerabilities and Practical Solutions," Proc. 12th Conf. USENIX Security, 2003.
[6] D. Bernstein and P. Schwabe, "New AES Software Speed Records," Proc. Ninth Int'l Conf. Cryptology in India: Progress in Cryptology (INDOCRYPT), 2008.
[7] D.J. Bernstein, "SYN Cookies," http://cr.yp.to/syncookies.html, 1996.
[8] I.F. Blake, G. Seroussi, and N.P. Smart, Elliptic Curves in Cryptography, vol. 265, Cambridge Univ. Press, 1999.
[9] J.W. Bos, D.A. Osvik, and D. Stefan, "Fast Implementations of AES on Various Platforms," Cryptology ePrint Archive, Report 2009/501, http://eprint.iacr.org, 2009.
[10] H. Chan and A. Perrig, "Security and Privacy in Sensor Networks," Computer, vol. 36, no. 10, pp. 103-105, Oct. 2003.
[11] J.-H. Chang and L. Tassiulas, "Maximum Lifetime Routing in Wireless Sensor Networks," IEEE/ACM Trans. Networking, vol. 12, no. 4, pp. 609-619, Aug. 2004.


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

[12] T.H. Clausen and P. Jacquet, Optimized

Link

State

Routing

Protocol(OLSR), IETF RFC 3626, 2003. [13] J. Deng, R. Han, and S. Mishra, “Defending against Path-Based DoS Attacks in Wireless Sensor Networks,” Proc. ACM Workshop Security of Ad Hoc and Sensor Networks, 2005. [14] J. Deng, R. Han, and S. Mishra, “INSENS: Intrusion-Tolerant Routing for

Wireless

Sensor

Networks,”

Computer Comm., vol. 29, 1. 2, pp. 216230, 2006. [15] S. Doshi, S. Bhandare, and T.X. Brown, “An On-Demand Minimum Energy Routing Protocol for a Wireless Ad Hoc Network,” ACM SIGMOBILE Mobile Computing and Comm. Rev., vol. 6, no. 3, pp. 50-66, 2002. [16] J.R. Douceur, “The Sybil Attack,” Proc.

Int’l

Workshop

Peer-to-Peer

Systems, 2002. [17] H. Eberle, A. Wander, N. Gura, C.S.

Sheueling,

and

V.

Gupta,

“Architectural Extensions for Elliptic Curve Cryptography over GF(2m) on 8bit Microprocessors,” Proc. IEEE Int’l Conf’ Application- Specific Systems, Architecture Processors (ASAP), 2005.

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

165

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

EFFECTIVE FAULT TOLERANT RESOURCE ALLOCATION WITH COST REDUCTION FOR CLOUD

Dorababu Sudarsa, M.Tech., Ph.D., MISTE, Associate Professor, dorababu.sudarsa@gmail.com, Audisankara College of Engineering and Technology

Allareddy Amulya, M.Tech, allareddyamulya@gmail.com

Abstract: In Cloud systems, Virtual Machine (VM) technology has matured to the point where compute resources can be partitioned at fine granularity and allocated on demand. In this paper we formulate a deadline-driven resource allocation problem for a Cloud environment that provides VM resource isolation, and propose an optimal polynomial-time solution that minimizes users' payments subject to their expected deadlines. We also propose a fault-tolerant method to guarantee each task's completion within its deadline, and validate its effectiveness on a real VM-facilitated cluster under different levels of competition. To maximize utilization and minimize the total cost of the cloud computing infrastructure and its running applications, resources must be managed efficiently and virtual machines must be allocated to suitable host nodes. We therefore propose a performance-analysis-based resource allocation scheme for the efficient placement of virtual machines on the cloud infrastructure. Our experimental results show that our approach is more efficient at scheduling and allocation and improves resource utilization.

Key words: fault tolerant, resource allocation, cloud computing, cost reduction.

1. INTRODUCTION: Cloud Computing [1] is a model for enabling convenient, on-demand network access to a shared pool of configurable and reliable computing resources (e.g., networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal consumer management effort or service provider interaction. Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a metered service over a network (typically the Internet). Cloud computing provides computation, software, data access, and storage resources without requiring cloud users to know the location and other details of the computing infrastructure. Cloud computing is transforming business by offering new options for businesses to increase efficiencies while reducing costs. The problems it addresses include:
a. High operational costs: typically associated with implementing and managing desktop and server infrastructures.
b. Low system utilization: often associated with non-virtualized server workloads in enterprise environments.




c. Inconsistent availability: due to the high cost of providing hardware redundancy.
d. Poor agility: which makes it difficult for businesses to meet evolving market demands.
Resource allocation in cloud computing is more complex than in other distributed systems such as Grid computing platforms. In a Grid system [2], it is inappropriate to share the compute resources among multiple applications running atop it at the same time, because of the unavoidable mutual performance interference among them. Cloud systems, by contrast, usually do not provide physical hosts directly to users, but instead offer virtual resources isolated by VM technology [3], [4], [5]. Not only can such an elastic resource usage model adapt to a user's specific demand, but it can also maximize resource utilization at fine granularity and isolate abnormal environments for safety purposes. Successful platforms and cloud management tools leveraging VM resource isolation include Amazon EC2 [6] and OpenNebula [7]. On the other hand, with the fast development of scientific research, users may pose quite complicated demands. For example, users may want to minimize their payments while confirming a service level such that their tasks finish before their deadlines. Such deadline-guaranteed resource allocation with minimized payment is rarely studied in the literature. Moreover, unavoidable errors in predicting task workloads make the problem harder. Based on the elastic resource usage model, we aim to design an allocation algorithm with high tolerance of prediction errors that also minimizes users' payments subject to their expected deadlines. Since the idle physical resources can be arbitrarily divided and allocated to new tasks, VM-based divisible resource allocation can be very flexible. This implies the feasibility of finding the optimal solution through convex


optimization strategies [8], unlike the traditional Grid model that relies on indivisible resources such as the number of physical cores. However, we found that it is not feasible to directly solve the necessary and sufficient conditions for the optimal solution, a.k.a. the Karush-Kuhn-Tucker (KKT) conditions [9]. Our first contribution is devising a new approach to solve the problem.
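The closed form that KKT conditions yield for a simple version of this problem can be sketched in a few lines. The model below is illustrative only: the workloads `w`, unit prices `c`, and deadline `D` are invented numbers, and the objective (minimize total payment Σ c_i·r_i over resource rates r_i, subject to Σ w_i/r_i ≤ D) is a simplified stand-in for the paper's payment model, not its actual formulation.

```python
import math

def min_payment_allocation(workloads, prices, deadline):
    """Minimize sum(c_i * r_i) subject to sum(w_i / r_i) <= deadline.
    KKT stationarity gives r_i = sqrt(lam * w_i / c_i); substituting into
    the active deadline constraint determines lam, yielding a closed form."""
    s = sum(math.sqrt(w * c) for w, c in zip(workloads, prices))
    return [math.sqrt(w / c) * s / deadline
            for w, c in zip(workloads, prices)]

w = [8.0, 2.0, 4.0]   # hypothetical task workloads (units of work)
c = [1.0, 4.0, 2.0]   # hypothetical unit prices per resource rate
D = 10.0              # user's expected deadline

r = min_payment_allocation(w, c, D)
finish = sum(wi / ri for wi, ri in zip(w, r))  # total execution time
cost = sum(ci * ri for ci, ri in zip(c, r))    # payment under KKT solution
eq = sum(w) / D                                # equal-rate baseline meeting D
base = sum(ci * eq for ci in c)                # baseline payment
```

Because the deadline constraint is active at the optimum, no numerical solver is needed in this simplified setting; the KKT allocation meets the deadline exactly and costs no more than the equal-rate baseline.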

2. RELATED WORKS: Static resource allocation based on peak demand is not cost-effective because of poor resource utilization during off-peak periods. In resource provisioning for cloud computing, an important issue is how resources may be allocated to an application mix such that the service level agreements (SLAs) of all applications are met. One heuristic algorithm determines a resource allocation strategy (SA or DA) that results in the smallest number of servers required to meet the SLAs of both classes; another approach comparatively evaluates FCFS, head-of-the-line priority (HOL), and a new scheduling discipline called probability dependent priority (PDP). Scott et al. [10] observed that finding the failure rate of a system is a crucial step in analyzing high performance computing systems, and introduced a fault-tolerant mechanism called the checkpoint/restart technique; an incremental checkpoint model can reduce the wasted time more than the full checkpoint model does. Singh et al. presented a slot-based provisioning model on grids to provide scheduling according to the availability and cost of resources.

2.1. Cloud Environment Infrastructure Architecture: Cloud providers combine virtualization, automated software, and internet connectivity [11] to provide their services. The basic elements of the cloud environment are the client, the server, and network connectivity [13].
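The full-versus-incremental checkpoint comparison mentioned above can be illustrated with a deliberately simple overhead model. All numbers below are invented for illustration and are not taken from Scott et al.: a run of length `T` checkpoints every `tau` seconds, a full checkpoint writes the whole state at cost `full_cost`, and an incremental checkpoint writes only the fraction of state dirtied since the previous one.

```python
# Toy model of checkpoint overhead, full vs incremental.

def checkpoint_overhead(T, tau, full_cost, dirty=1.0):
    n = int(T // tau)              # checkpoints taken over the run
    return n * full_cost * dirty   # total time spent writing state

T, tau, full_cost = 3600.0, 300.0, 20.0
full = checkpoint_overhead(T, tau, full_cost)              # full model
incr = checkpoint_overhead(T, tau, full_cost, dirty=0.25)  # incremental
```

Under this toy model the incremental scheme's overhead shrinks in direct proportion to the dirty-state fraction, which is the intuition behind the claim that incremental checkpointing wastes less time than full checkpointing.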

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

167

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

A hybrid computing model allows customers to leverage both public and private computing services to create a more flexible and cost-effective computing utility. The public cloud environment involves Web-based applications, Data as a Service (DaaS), Infrastructure as a Service (IaaS), Software as a Service (SaaS), and Email as a Service (EaaS). A private cloud accesses resources from the public cloud organization to provide services to its customers. In a hybrid cloud environment, an organization combines various services and data models from various cloud environments to create an automated cloud computing environment.

Fig 2.1: Cloud Environment Infrastructure Architecture

2.2. Infrastructure as a Service (IaaS): Infrastructure as a Service (IaaS) lets users control and manage the underlying systems. For business, IaaS offers an advantage in its capacity: IT companies are able to develop and implement their own software that handles the re-scheduling of resources in an IaaS cloud. IaaS consists of a combination of internal and external resources. IaaS relies on a low-level resource that runs independent of an operating system, called a hypervisor, which is responsible for taking rent of hardware resources on a pay-as-you-go basis. This process is referred to as resource gathering. Resource gathering by the hypervisor makes virtualization possible, and virtualization enables multiprocessing, which leads to an infrastructure shared by several users with similar resource requirements.

2.3. Task Scheduling and Resource Allocation: To increase flexibility, the cloud allocates resources according to demand. The major problems in a task scheduling environment are load balancing, scalability, reliability, performance, and dynamic re-allocation of resources to the computing nodes. Various methods and algorithms have been proposed to solve the problem of scheduling preemptable jobs in the cloud environment. In the cloud, resources are allocated to customers on a pay-per-use, on-demand basis. The algorithms used for resource allocation in cloud computing differ according to the task schedule in different environments and under different circumstances. Dynamic load balancing in the cloud allocates resources to computational nodes dynamically. Task scheduling algorithms aim at minimizing the execution time of tasks while maximizing resource usage efficiency. Rescheduling is needed only when customers request the same type of resources. Each task is different and autonomous; its requirements for bandwidth, response time, resource expense, and memory storage also differ. Efficient scheduling algorithms maintain load balancing of tasks in an efficient manner. The efficiency of the cloud environment depends heavily on the type of scheduling algorithm used for task scheduling.



The queue set algorithm chooses for execution, at each instant in time, the currently active job(s) that have the nearest deadlines. The queue set implementation upon uniform parallel machines follows these rules [2]: no processor is idle while there are active jobs waiting for execution; when fewer than m jobs are active, they are required to execute on the fastest processors while the slowest are idled; and higher-priority jobs are executed on faster processors. A formal verification which guarantees all deadlines in a real-time system would be best; such a verification is called a feasibility test. Three different kinds of tests are available: 1. exact tests with long execution times or simple models [11], [12], [13]; 2. fast sufficient tests, which fail to accept feasible task sets, especially those with high utilizations [14], [15]; 3. approximations, which allow an adjustment of performance and acceptance rate [1]. Task migration cost might be very high: for example, in a loosely coupled system such as a cluster of workstations, a migration is performed so slowly that the overhead resulting from excessive migration may prove unacceptable [3]. In this paper we present a new approach, called the queue set algorithm, to reduce the time complexity.

3. IMPLEMENTATION
By using queue set scheduling we can obtain high task completion within schedule. Whenever a queue set scheduling event occurs, the task queue is searched for the process closest to its deadline, which is then scheduled for execution. In queue set scheduling, at every scheduling point the task having the shortest deadline is taken up for scheduling. The basic principle of this algorithm is simple: a newly arriving process whose CPU burst time is less than the remaining time of the currently executing process preempts it. A queue set satisfies the condition that the total processor utilization U due to the task set is less than 1. When scheduling periodic processes that have deadlines equal to their periods, queue set scheduling has a utilization bound of 100% per processor. For example, consider 3 periodic processes scheduled using the queue set algorithm; the following acceptance test shows whether all deadlines can be met.

Table 1: Task Parameters

Process | Execution Time = C | Period = T
P1      | 3                  | 4
P2      | 2                  | 5
P3      | 1                  | 7

The utilization will be: U = 3/4 + 2/5 + 1/7 ≈ 1.29 = 129%. The theoretical limit for any number of processes on a single processor is 100%; since U exceeds 1 here, this task set is not schedulable on a single processor, and the queue set algorithm must distribute it across the uniform parallel processors.

4. QUEUE SET SCHEDULING ALGORITHM:
Let m denote the number of processing nodes and n denote the number of available tasks in a uniform parallel real-time system. C denotes the capacity vector and D the deadline vector. In this section we present the five steps of the queue set scheduling algorithm. Obviously, each task which is picked up for execution is not considered for execution by the other processors. The steps of our new approach are:

1. Perform a feasibility check to identify the tasks which have a chance to meet their deadline, and put them into the first queue (set 1); the remaining tasks are allocated to the second queue (set B). The task set can be partitioned by any existing approach.

2. Sort the tasks of set 1 according to their deadlines in non-descending order, using any existing sorting algorithm. Let k denote the number of tasks in set 1, i.e., the number of tasks that have the opportunity to meet their deadline.

3. For each processor j (j ≤ min(k, m)), check whether the task which was last running on the jth processor is among the first min(k, m) tasks of set 1. If so, assign it to the jth processor. At this point there might be some processors to which no task has been assigned yet.

4. For each j (j ≤ min(k, m)), if no task is assigned to the jth processor, select the task with the earliest deadline from the remaining tasks of set 1 and assign it to the jth processor. If k ≥ m, each processor has a task to process and the algorithm is finished.

5. If k < m, for each j (k < j ≤ m), assign the task with the smallest deadline from set B to the jth processor. This last step is optional, since all the tasks from set B will miss their deadlines.

5. Resource allocation algorithm:
Resource allocation is the process of assigning available resources to the cloud applications that need them. Cloud resources consist of physical and virtual resources. A user request for virtualized resources is described through a set of parameters detailing the required CPU, memory, disk, and so on.

for each i ∈ Node(Core, CPU, Mem)
    starttime ← Times();
    memvalue ← InvertMatrix(Ni);
    finishtime ← Times();
    cpuvalue ← finishtime − starttime;
    Ni ← (corevalue × 0.2) + (cpuvalue × 0.5) + (memvalue × 0.3);
    DB.add(Ni);
end for

Node performance analysis algorithm:

for each i ∈ N.size()
    if available(Ni, requestResource)
        availableNodeList.add(Ni);
    end if
end for
Sort(availableNodeList);
while j ≤ availableNodeList.size()
    if VM = empty
        createVM(VM);
    end if
    if success(Nj ← VM)
        vmtable.add(j, VM);
    end if
    j++;
end while

Virtual machine scheduling algorithm:
After getting the proper host, the scheduler returns the host number to the virtual machine manager for placement of the virtual machine on that host. The virtual machine manager then has all the information about the virtual machine and its location, and sends a service activation message to the client/user. After that, the client/user can access the service for the duration specified; when the resources and the data are ready, the task's execution begins.

6. CONCLUSION:
Cloud computing is a promising technology to support IT organizations in developing cost-, time- and resource-effective products. Since cloud computing is a pay-as-you-go model, it is necessary to reduce cost at peak hours in order to improve the business performance of the cloud system. With an effective fault-tolerant approach that can efficiently allocate resources to meet deadlines, cost is reduced and efficient resource utilization becomes possible.
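The five steps of the queue set algorithm in Section 4 can be sketched as follows. This is a minimal, assumption-laden reading of the algorithm, not the authors' implementation: tasks are `(name, last_processor_or_None, deadline, remaining_work)` tuples, processor speeds are listed fastest first, and the feasibility check of step 1 simply asks whether a task could still finish by its deadline on the fastest processor.

```python
def queue_set_schedule(tasks, speeds, now=0.0):
    """One scheduling decision of the queue set algorithm (sketch).
    tasks: (name, last_processor_or_None, deadline, remaining_work) tuples;
    speeds: processor speeds, fastest first. Returns {processor: task name}."""
    m = len(speeds)
    # Step 1: feasibility check -- a task goes into set 1 if it could still
    # finish by its deadline on the fastest processor; the rest form set B.
    set1 = [t for t in tasks if now + t[3] / speeds[0] <= t[2]]
    set_b = [t for t in tasks if t not in set1]
    # Step 2: sort set 1 by deadline, non-descending; k tasks can make it.
    set1.sort(key=lambda t: t[2])
    k = len(set1)
    head = set1[:min(k, m)]
    assign = {}
    # Step 3: keep a front-runner task on the processor it last ran on.
    for t in head:
        j = t[1]
        if j is not None and j < min(k, m) and j not in assign:
            assign[j] = t[0]
    # Step 4: fill remaining processors earliest-deadline-first from set 1.
    rest = [t for t in head if t[0] not in assign.values()]
    for j in range(min(k, m)):
        if j not in assign and rest:
            assign[j] = rest.pop(0)[0]
    # Step 5 (optional): idle processors take set B tasks by deadline,
    # even though those tasks will miss their deadlines.
    set_b.sort(key=lambda t: t[2])
    for j in range(m):
        if j not in assign and set_b:
            assign[j] = set_b.pop(0)[0]
    return assign

tasks = [("P1", 0, 4.0, 3.0), ("P2", None, 5.0, 2.0), ("P3", None, 7.0, 1.0)]
assignment = queue_set_schedule(tasks, speeds=[2.0, 1.0])
```

With these invented task parameters, P1 stays on processor 0 (step 3) and P2, the next-earliest deadline, takes processor 1 (step 4); P3 waits for the next scheduling event.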




7. REFERENCES:
1. Amazon Elastic Compute Cloud, http://aws.amazon.com/ec2/, 2012.
2. D. Milojicic, I.M. Llorente, and R.S. Montero, "OpenNebula: A Cloud Management Tool," IEEE Internet Computing, vol. 15, no. 2, pp. 11-14, Mar./Apr. 2011.
3. S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge Univ. Press, 2009.
4. E. Imamagic, B. Radic, and D. Dobrenic, "An Approach to Grid Scheduling by Using Condor-G Matchmaking Mechanism," Proc. 28th Int'l Conf. Information Technology Interfaces, pp. 625-632, 2006.
5. N. Naksinehaboon, M. Paun, R. Nassar, B. Leangsuksun, and S. Scott, "High Performance Computing Systems with Various Checkpointing Schemes," 2009.
6. R. Mishra and A. Jaiswal, "Ant Colony Optimization: A Solution of Load Balancing in Cloud," International Journal of Web & Semantic Technology (IJWesT), vol. 3, pp. 33-50, 2012. DOI: 15121/ijwest.2012.32
7. C.S. Pawar and R.B. Wagh, "A Review of Resource Allocation Policies in Cloud Computing," World Journal of Science and Technology (WJST), vol. 3, pp. 165-167, 2012.
8. K.C. Gouda, T.V. Radhika, and M. Akshatha, "Priority Based Resource Allocation Model for Cloud Computing," International Journal of Science, Engineering and Technology Research (IJSETR), vol. 2, issue 1, Jan. 2013.
9. W. Zhao, K. Ramamritham, and J.A. Stankovic, "Preemptive Scheduling under Time and Resource Constraints," IEEE Transactions on Computers, vol. C-36, no. 8, pp. 949-960, 1987.
10. A.K.Y. Cheung and H.-A. Jacobsen, "Green Resource Allocation Algorithms for Publish/Subscribe Systems," Proc. 31st IEEE Int'l Conf. Distributed Computing Systems (ICDCS), 2011.
11. S. Selvarani and G. Sudha Sadhasivam, "Improved Cost Based Algorithm for Task Scheduling in Cloud Computing," IEEE, 2010.
12. S. Mohana Priya and B. Subramani, "A New Approach for Load Balancing in Cloud Computing," International Journal of Engineering and Computer Science (IJECS), vol. 2, pp. 1636-1640, 2013.




SCALABLE AND SECURE SHARING OF PERSONAL HEALTH RECORDS IN CLOUD USING MULTI AUTHORITY ATTRIBUTE BASED ENCRYPTION

PINISETTY CHANDRA SARIKA
PG Student, M.Tech, Computer Science and Engineering, Audisankara College of Engineering & Technology, Gudur, A.P., India
Sarikapinisetty@gmail.com

P. VENKATESWAR RAO
Associate Professor, Department of CSE, Audisankara College of Engineering & Technology, Gudur, A.P., India
varun.apeksha@yahoo.com

ABSTRACT
This project designs and implements a personal health record (PHR) system that secures records stored with a cloud provider. The application allows users to access their lifetime health information, maintained on a centralized server that holds each patient's personal and diagnosis information. Personal health records must be maintained with high privacy and security, and these security techniques protect patient data from public access. To assure patients of control over their own PHRs, we encrypt the PHRs before outsourcing. To protect the personal health data stored on a semi-trusted server, we adopt attribute-based encryption (ABE) as the main encryption primitive. ABE techniques give us fine-grained and scalable access control for PHRs and encrypt each patient's PHR file. In this project we propose a new patient-centric architecture and access mechanism to control PHRs stored on multiple semi-trusted servers, and we extend Multi-Authority Attribute-Based Encryption (MA-ABE) for the access control mechanism.

Keywords: Personal Health Records, Cloud Computing, Data Privacy, Fine-grained Access Control, Multi-Authority Attribute-Based Encryption.

1. INTRODUCTION
In recent years, the personal health record has emerged as a patient-centric model of health information exchange. It allows a patient to create and control their medical data, maintained in a single place such as a data center. Because of the high cost of building and managing dedicated data centers, many PHR services are outsourced to third-party service providers, for example Google Health and Microsoft HealthVault. Storing PHRs in cloud computing has been proposed in [1], [2]. While it is exciting to have convenient PHR access for everyone, there are a number of privacy and security risks that impede its wide adoption. With a third-party service provider there is no guarantee of security and privacy for the PHR. Because of the high value of the sensitive Personal Health Information (PHI), third-party storage servers are often




the targets of various malicious behaviors, which may lead to exposure of the PHI. The main concern is the privacy of patients' personal health data and determining which users may gain access to the medical records stored on a cloud server. In one famous incident, a Department of Veterans Affairs database containing sensitive PHI of 26.5 million military veterans, including their health problems and social security numbers, was stolen by an employee who took the data home without authorization [13]. To ensure patients' privacy control over their own PHRs, it is essential to have fine-grained data access control mechanisms that work with semi-trusted servers. We therefore turn to a new encryption pattern, namely Attribute-Based Encryption (ABE). In ABE, access policies are expressed in terms of the attributes of users or data. ABE allows a patient to share their own PHR among a group of users by encrypting the file under a set of attributes, without needing to know the complete list of users. As a result, the number of attributes determines the complexity of the encryption, key generation, and decryption operations. The Multi-Authority Attribute-Based Encryption scheme is used to provide a multiple-authority-based access mechanism. The aim of patient-centric privacy often conflicts with scalability and reliability in a PHR system. Only authorized users can access the PHR system, for personal use or professional purposes; we refer to these two categories as personal and professional users respectively.

In this paper, to provide uniform security, we propose a framework for patient-centric sharing of PHRs in a multi-domain, multi-authority PHR system with many users. It captures application-level requirements for both public and personal use of patients' PHRs, and distributing users' trust across multiple authorities better reflects reality.

2. RELATED WORK
This article is closely related to work on cryptographically enforced access control for outsourced data and on attribute-based encryption of data. To improve the scalability of the above results, one-to-many encryption methods such as ABE can be utilized. A basic property of ABE is its resistance to user collusion.

A. Trusted Authority: Multiple works have used ABE to realize fine-grained access control for outsourced data. In one, each patient's EHR files are encrypted using a broadcast variant of CP-ABE that allows direct revocation. However, these operations share several drawbacks. Mainly, they usually assume the use of a single trusted authority in the system, which not only may create a load bottleneck but also suffers from key security problems. The attribute management tasks also include certification of all users' attributes and generation of secret keys.

B. Revocable Attribute-Based Encryption: It is a well-known challenging problem to revoke users or attributes efficiently and on demand in ABE. This is mainly done by the authority broadcasting periodic key updates to unrevoked users, which does not achieve complete forward/backward security and is less efficient.

3. IMPLEMENTATION
a. Requirements: The most important task is to achieve patient-centric PHR sharing. That means the patient should have fundamental control over their own health record, including determining which users have access to their medical data. User-controlled write/read access and revocation are the two main security objectives for any electronic health record system. Write access control is enforced to prevent unauthorized users from gaining access to the record and modifying it.

b. Framework: The purpose of our framework is to provide security of patient-centric PHR access and efficient key management at the same time. If a user's attribute is no longer valid, the user should be unable to access future PHR files using that attribute. The PHR system should support users from the personal domain as well as the public domain. Since the public domain may contain a huge and unpredictable number of users, the system should be highly scalable in terms of the complexity of key management, communication, computation, and storage. The owner's effort in managing users and keys should be minimized for usability. By using attribute-based encryption we can make encrypted personal health records self-protective: they can be accessed only by authorized users, even on a semi-trusted server.

Fig.1: Framework for PHR sharing

c. Architecture of implementation: Fig.1 depicts the architecture of the proposed system for secure sharing of patient medical records. The system splits users into two security domains, namely public domains (PUDs) and personal domains (PSDs), according to the users' different data access needs. PUDs consist of users who are granted access based on their professional roles, such as doctors, nurses, medical researchers, and insurance agents. For each PSD, the users are personally associated with a data owner (such as family members or close friends), and they are granted access to personal health records based on access rights assigned by the owner. Here we consider the data owner as the one who owns the personal health record, and the data reader as one who can read the encrypted patient medical record. In a PSD, the owner uses key-policy attribute-based encryption and generates secret keys for their PSD users; in a PUD, multi-authority attribute-based encryption is preferred. Secret keys for PUD users are produced by multiple authorities (for this project we consider a Specialization Authority and a Patient Medical Authority), depending on both profession and specialization.

Fig.2: Architecture of patients' medical record sharing


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

authority’s domain and output secret key for the user.

4. TECHNIQUES a. Attribute Based Encryption

Central Key Generation: A central authority can be used be run by a random algorithm. It takes the master key as an input and a user’s GID and outputs secret key for user.

The database security is provided by using Attribute Based Encryption techniques. In this the sensitive information is shared and stored in the cloud provider; it is needed to encrypt cipher text which is classified by set of attributes. The private key is associated with access make to control with cipher text a user is able to decrypt. Here we are using Attribute Based Encryption (ABE) as the principal encryption primitive. By using ABE access policies are declared based on the attributes of user data, which make possible to selectively share her/his PHR among a set of users to encrypting the file under a set of attributes, without a need of complete users. The complexity per encryption, security key generation, and decryption are only linear with multi number of attributes are included. When we integrate ABE into a large scale of PHR system, the important dispute such as dynamic policy updates, key management and scalability, and an efficient on demand revocation are non-retrieval to solve.

Encryption: This technique can be run by a sender. Take a set of attributes as an input for each authority, and the system public key. The outputs are in the form of cipher text. Decryption: This mechanism can be done by a receiver. Takes input as a cipher text, which was encrypted under a set of decryption keys for attribute set. By using this ABE and MA-ABE it will increase the system scalability, there are some restriction in building PHR system. The ABE does not handle it efficiently. In that scenario one may regard with the help of attributes based broadcast encryption.

b. Multi-Authority ABE

A Multi-Authority ABE (MA-ABE) system comprises k attribute authorities and one central authority, and each attribute authority is assigned a value dk. The proposed system uses the following algorithms.

Setup: A randomized algorithm run by the central authority or some other trusted party. It takes a security parameter as input and outputs a public key/secret key pair for each of the attribute authorities, together with a system public key and a master secret key held by the central authority.

Attribute Key Generation: A randomized algorithm run by an attribute authority. It takes as input the authority's secret key, the authority's value dk, a user's GID, and a set of attributes in the authority's domain, and outputs a decryption key for the user.

5. SECURITY MODEL OF THE PROPOSED SYSTEM

i. Data confidentiality: Each user can access only the PHR documents he or she is authorized to read; different users are authorized to read different sets of documents.

ii. User access privilege confidentiality: The system does not disclose one user's access rights to another, which ensures strong confidentiality of user access. Both a public domain and a private domain are maintained.

In the secure PHR sharing system, the designer maintains personal health records with various user access points. These data values are managed by a third-party cloud provider, which provides security for the data. The system comprises multiple modules.
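The role of the per-authority value dk can be sketched with a toy decision rule, modelled on threshold MA-ABE as an assumption rather than on this paper's exact construction: a user can decrypt only if, for every attribute authority, the user holds at least dk of the attributes under which the ciphertext was encrypted by that authority. All authority names and attribute sets below are illustrative.

```python
# Toy model of the MA-ABE decryption condition: for every attribute
# authority k, the user must hold at least d_k of the ciphertext's
# attributes issued by that authority. Only the decision rule is
# modelled; no cryptographic keys are generated or used.

def can_decrypt(user_attrs, ct_attrs, d):
    """user_attrs, ct_attrs: dicts mapping authority -> attribute set;
    d: dict mapping authority -> threshold d_k."""
    for authority, needed in ct_attrs.items():
        held = user_attrs.get(authority, set()) & needed
        if len(held) < d[authority]:
            return False  # too few attributes from this authority
    return True

d = {"hospital": 2, "insurer": 1}
ct = {"hospital": {"physician", "cardiology", "senior"},
      "insurer": {"auditor"}}
print(can_decrypt({"hospital": {"physician", "cardiology"},
                   "insurer": {"auditor"}}, ct, d))   # True
print(can_decrypt({"hospital": {"physician"},
                   "insurer": {"auditor"}}, ct, d))   # False
```

The dk thresholds are what let several independent authorities each vouch for part of a user's attribute set without any single authority being able to authorize decryption alone.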

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

175

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

The data owner module manages the patient details. The PHR is maintained with multiple attribute collections, and the data owner assigns access permissions to the different authorities.

ISBN: 378 - 26 - 138420 - 5

[3] M. Li, S. Yu, K. Ren, and W. Lou, “Securing Personal Health Records in Cloud Computing: Patient-Centric and Fine-Grained Data Access Control in Multi-Owner Settings,” Proc. Sixth Int'l ICST Conf. Security and Privacy in Comm. Networks (SecureComm '10), pp. 89-106, Sept. 2010.

The cloud provider module stores the PHR values. The data owner uploads the encrypted PHR to the cloud provider, where the patient data is stored and maintained, and patients access their data through it.

[4] M. Chase and S.S. Chow, “Improving Privacy and Security in Multi-Authority Attribute-Based Encryption,” Proc. 16th ACM Conf. Computer and Comm. Security (CCS ’09), pp. 121-130, 2009.

Key management is one of the main tasks: key values for the various authorities must be planned and controlled, and the data owner updates the key values. The dynamic policy is based on this key management scheme.

[5] M. Li, S. Yu, N. Cao, and W. Lou, “Authorized Private Keyword Search over Encrypted Personal Health Records in Cloud Computing,” Proc. 31st Int’l Conf. Distributed Computing Systems (ICDCS ’11), June 2011.

Patients are served by the client module. The system distinguishes personal and professional access patterns, and access classification is used to support multiple attributes. A client access log records user request information so that it can be processed for auditing.
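The client access log used for auditing can be sketched minimally; the field names and helper functions below are illustrative, not the system's actual schema. Every request is recorded with its outcome so an audit can later replay who asked for which record.

```python
# Minimal sketch of the client access log used for auditing: each user
# request is appended with its outcome, and an audit query filters the
# trail. Field names are illustrative assumptions.
import datetime

access_log = []

def record_request(user, record_id, granted):
    access_log.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "record": record_id,
        "granted": granted,
    })

def audit(user):
    """Return every logged request made by the given user."""
    return [e for e in access_log if e["user"] == user]

record_request("dr_rao", "phr-17", True)
record_request("guest42", "phr-17", False)
print(len(audit("dr_rao")))  # 1
```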

[6] J. Benaloh, M. Chase, E. Horvitz, and K. Lauter, “Patient Controlled Encryption: Ensuring Privacy of Electronic Medical Records,” Proc. ACM Workshop Cloud Computing Security (CCSW '09), pp. 103-114, 2009.

6. CONCLUSION The proposed PHR system defends against security attackers and hackers: secure data sharing protects the information from unauthorized users. We have proposed a novel approach that provides the existing PHR system with high security using Attribute Based Encryption, which plays the main role, because the attribute-bound keys are unique and difficult to break. The ABE model scales and operates together with MA-ABE.

[7] S. Yu, C. Wang, K. Ren, and W. Lou, “Achieving Secure, Scalable, and Fine-Grained Data Access Control in Cloud Computing,” Proc. IEEE INFOCOM '10, 2010.

7. REFERENCES

[8] V. Goyal, O. Pandey, A. Sahai, and B. Waters, “Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data,” Proc. 13th ACM Conf. Computer and Comm. Security (CCS ’06), pp. 89-98, 2006.

[1] H. Löhr, A.-R. Sadeghi, and M. Winandy, “Securing the E-Health Cloud,” Proc. First ACM Int'l Health Informatics Symp. (IHI '10), pp. 220-229, 2010.

[9] S. Narayan, M. Gagné, and R. Safavi-Naini, “Privacy Preserving EHR System Using Attribute-Based Infrastructure,” Proc. ACM Workshop Cloud Computing Security (CCSW '10), pp. 47-52, 2010.

[2] M. Li, S. Yu, N. Cao, and W. Lou, “Authorized Private Keyword Search over Encrypted Personal Health Records in Cloud Computing,” Proc. 31st Int’l Conf. Distributed Computing Systems (ICDCS ’11), June 2011.

[10] J. Hur and D.K. Noh, “Attribute-Based Access Control with Efficient Revocation in Data Outsourcing Systems,” IEEE Trans. Parallel and Distributed Systems, vol. 22, no. 7, pp. 1214-1221, July 2011.




[11] S. Jahid, P. Mittal, and N. Borisov, “EASiER: Encryption-Based Access Control in Social Networks with Efficient Revocation,” Proc. ACM Symp. Information, Computer and Comm. Security (ASIACCS), Mar. 2011.

[12] S. Ruj, A. Nayak, and I. Stojmenovic, “DACC: Distributed Access Control in Clouds,” Proc. IEEE 10th Int'l Conf. Trust, Security and Privacy in Computing and Comm. (TrustCom), 2011.

[13] “At Risk of Exposure - in the Push for Electronic Medical Records, Concern Is Growing About How Well Privacy Can Be Safeguarded,” http://articles.latimes.com/2006/jun/26/health/he-privacy26, 2006.




LATENT FINGERPRINT RECOGNITION AND MATCHING USING STATISTICAL TEXTURE ANALYSIS

M. Charan Kumar¹, K. Phalguna Rao²
¹ M.Tech 2nd year, Dept. of CSE, ASIT, Gudur, India
² Professor, Dept. of CSE, ASIT, Gudur, India
¹ cherry.abu@gmail.com; ² kprao21@gmail.com

Abstract: Latents are partial fingerprints that are usually smudgy, cover a small area, and contain large distortion. Because of these characteristics, latents have a significantly smaller number of minutiae points than full (rolled or plain) fingerprints. The small number of minutiae and the noise in latents make it extremely difficult to automatically match latents to their mated full prints stored in law enforcement databases; a number of methods have been used in fingerprint recognition to extract accurate results, but they are not up to the required level. The proposed fingerprint recognition and matching scheme using statistical analysis gives an efficient means of fingerprint recognition for biometric identification of individuals. Three statistical features are extracted to represent the image in a mathematical model: (1) an entropy coefficient, from the intensity histogram of the image; (2) a correlation coefficient, computed between the original image and the image filtered with a Wiener filter; and (3) an energy coefficient, obtained from a 5-level wavelet decomposition of the image. The approach can be easily used to provide accurate recognition results.

Index Terms: fingerprint recognition, entropy, correlation, wavelet energy.

1. INTRODUCTION

SIGNIFICANT improvements in fingerprint recognition have been achieved in terms of algorithms, but there are still many challenging tasks. One of them is matching of nonlinearly distorted fingerprints. In the Fingerprint Verification Competition 2004 (FVC2004), the organizers particularly insisted on: distortion, dry, and wet fingerprints.

Distortion of fingerprints seriously affects the accuracy of matching. There are two main reasons that contribute to fingerprint distortion. First, the acquisition of a fingerprint is a three-dimensional/two-dimensional warping process; a fingerprint captured with different contact centers usually results in different warping modes. Second, distortion is introduced into the fingerprint by the non-orthogonal pressure people exert on the sensor. How to cope with these nonlinear distortions in the matching process is a challenging task.

Several fingerprint matching approaches have been proposed in the literature. These include methods based on point pattern matching, transform features, and structural matching. Many fingerprint recognition algorithms are based on minutiae matching, since it is widely believed that the minutiae are the most discriminating and reliable features. Ratha et al. addressed a method based on point pattern matching; the generalized Hough transform (GHT) is used to recover the pose transformation between two impressions. Jain et al. proposed a novel filter-bank-based fingerprint feature representation method. Jiang et al. addressed a method which relies on a similarity measure defined between local structural features, to align the two patterns and calculate a matching score between two minutiae lists. Fan et al. applied a set of geometric masks to record part of the




rich information of the ridge structure. Wahab et al. addressed a method using groups of minutiae to define local structural features; the matching is performed based on the pairs of corresponding structural features that are identified between two fingerprint impressions. However, these methods do not solve the problem of nonlinear distortions.

Recently, some algorithms have been presented to deal with the nonlinear distortion in fingerprints explicitly in order to improve the matching performance. One proposed method measures the forces and torques on the scanner directly and, with the aid of specialized hardware, prevents capture when excessive force is applied to the scanner. Dorai et al. proposed a method to detect and estimate distortion occurring in fingerprint videos, but those two methods do not work with already collected fingerprint images. Maio and Maltoni proposed a plastic distortion model to cope with the nonlinear deformations characterizing fingerprint images taken from on-line acquisition sensors. This model helps to understand the distortion process; however, it is hard to automatically and reliably estimate its parameters due to the insufficiency and uncertainty of the information. Dongjae Lee et al. addressed a minutiae-based fingerprint matching algorithm using distance normalization and local alignment to deal with the problem of nonlinear distortion. However, the rich information of the ridge/valley structure is not used, and the matching performance is moderate.

In reality, approximately 10% [20] of acquired fingerprint images are of poor quality due to variations in impression conditions, ridge configuration, skin conditions, acquisition devices, non-cooperative attitude of subjects, etc. The ridge structures in poor-quality fingerprint images are not always well defined and, therefore, cannot be correctly detected; a significant number of spurious minutiae may be created as a result. To ensure that the performance of the minutiae extraction algorithm is robust with respect to the quality of input fingerprint images, an enhancement algorithm which can improve the clarity of the ridge structures is necessary. However, for a poor fingerprint image, some spurious minutiae may still exist after fingerprint enhancement and post-processing, so it is necessary to propose a method to deal with the spurious minutiae.

Fig. 1. Feature set of a live-scan fingerprint image. (a) Original fingerprint image. (b) Thinned ridge image with minutiae and sample points of (a).

A method to judge whether an extracted minutia is a true one has been proposed in this paper. According to our work, the distance between true minutiae is generally greater than a threshold (three), while near spurious minutiae there are usually other spurious minutiae. In addition, spurious minutiae are usually detected at the border of the fingerprint image. Examples of spurious minutiae in poor-quality fingerprint images are shown in Fig. 2.



Fingerprint recognition (also known as dactyloscopy) is the process of comparing a known fingerprint against another or a template fingerprint to determine whether the impressions are from the same finger. It includes two sub-domains: one is fingerprint verification and the other is fingerprint identification. Verification confirms an individual fingerprint by comparing it against only one fingerprint template stored in the database, while identification compares it against all the fingerprints stored in the database. Verification is one-to-one matching and identification is one-to-N (the number of fingerprint templates available in the database) matching. Verification is a fast process compared to identification.

Fig. 2. Examples of spurious minutiae in poor-quality fingerprint images. The images have been cropped and scaled for view. (a) Original image. (b) Enhanced image of (a). (c) Original image. (d) Enhanced image of (c). Many spurious minutiae were detected in the process of minutiae extraction. Near the spurious minutiae there are usually other spurious minutiae, as indicated by ellipses, and spurious minutiae are usually detected at the border of the fingerprint image, as shown by rectangles.

2. EXISTING SYSTEM

A. Fingerprint Recognition

The existing algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation field information. To be consistent with the common practice in latent matching (i.e., only minutiae are marked by latent examiners), the orientation field is reconstructed from the minutiae. Since the proposed algorithm relies only on manually marked minutiae, it can be easily used in law enforcement applications. Experimental results on two different latent databases show that the proposed algorithm outperforms two well optimized commercial fingerprint matchers.

Fig. 2. Fingerprint recognition system.

Fig. 2 shows the basic fingerprint recognition system. First of all we take a fingerprint image. After taking an input image we apply a fingerprint segmentation technique. Segmentation is the separation of the input data into foreground (the object of interest) and background (irrelevant information). Before extracting the features of a fingerprint it is important to separate the fingerprint regions (the presence of ridges) from the background; this is very useful for avoiding false feature extraction. In some cases a correct segmentation is very difficult, especially in poor-quality or noisy fingerprint images. The orientation field plays an important role in a fingerprint recognition system. Orientation field estimation




consists of four major steps: (1) preprocessing; (2) determining the primary ridges of a fingerprint block; (3) estimating the block direction by the projective distance variance of such a ridge; and (4) correcting the estimated orientation field. Image enhancement is used to significantly improve the image quality by applying image enhancement techniques; the main purpose of such a procedure is to enhance the image by improving the clarity of the ridge structure or increasing the consistency of the ridge orientation. Fingerprint classification is used to check the fingerprint pattern type. After classification of the fingerprint, we apply fingerprint ridge thinning (also called block filtering), which is used to reduce the thickness of all ridge lines to a single pixel width. Thinning does not change the location and orientation of minutiae points compared to the original fingerprint, which ensures accurate estimation of the minutiae points. Then we extract the minutiae points and generate a data matrix. Finally we use minutiae matching to compare the input fingerprint data with the template data and give the result.

B. Fingerprint Matching Techniques

There are many fingerprint matching techniques. The most widely used matching techniques are these:

• Correlation-based matching: The two fingerprint images are matched by correlating corresponding pixels, computed for different alignments and rotations. The main disadvantage of correlation-based matching is its computational complexity.

• Minutiae-based matching: This is the most popular and widely used technique for fingerprint comparison. In minutiae-based techniques, first of all we find the minutiae points on the fingerprint image, which we then have to map. However, there are some difficulties when using this approach: it is difficult to identify the minutiae points accurately when the fingerprint is of low quality.

• Pattern-based (or image-based) matching: Pattern-based techniques compare the basic fingerprint patterns (arch, whorl, and loop) between a previously stored template and a candidate fingerprint. This requires that the images be aligned in the same orientation. In a pattern-based algorithm, the template contains the type, size, and orientation of patterns within the aligned fingerprint image. The candidate fingerprint image is graphically compared with the template to determine the degree to which they match.

3. PROPOSED SYSTEM

A. Entropy

Image entropy is a quantity used to describe the busyness of an image, i.e., the amount of information which must be coded for by a compression algorithm. Low-entropy images, such as those containing a lot of black sky, have very little contrast and large runs of pixels with the same or similar DN values; an image that is entirely flat has an entropy of zero and can therefore be compressed to a relatively small size. On the other hand, high-entropy images, such as an image of heavily cratered areas on the moon, have a great deal of change from one pixel to the next and accordingly cannot be compressed as much as low-entropy images. Image entropy as used in this paper is calculated with the same formula used by the Galileo Imaging Team:

Entropy = − ∑ Pi log2(Pi)




In the above expression, Pi is the probability that the difference between two adjacent pixels is equal to i, and log2 is the base-2 logarithm. Entropy bounds the performance of the strongest lossless compression feasible, which can be realized in theory by using the typical set or in practice using Huffman coding.

Wiener Filtering:

The Wiener filter is optimal in terms of the mean square error (MSE); in other words, it minimizes the overall mean square error in the process of inverse filtering and noise smoothing. The Wiener filter is a linear estimate of the original image, and the approach is based on a stochastic framework. The orthogonality principle yields the Wiener filter in the Fourier domain. To apply the Wiener filter in practice we have to estimate the power spectrum of the original image; for the noise, the power spectrum is equal to the variance of the noise. To estimate the power spectrum of the original image many methods can be used; a direct estimate is the periodogram estimate of the power spectral density (PSD).

B. Correlation

Digital Image Correlation and Tracking (DIC/DDIT) is an optical method that employs tracking and image registration techniques for accurate 2-D and 3-D measurements of changes in images. It is often used to measure deformation, displacement, strain, and optical flow, and it is widely applied in many areas of science and engineering. Digital image correlation (DIC) techniques have been increasing in popularity, especially in micro- and nano-scale mechanical testing applications, due to their relative ease of implementation and use. Advances in computer technology and digital cameras have been the enabling technologies for this method, and while white-light optics has been the leading approach, DIC can be and has been extended to almost any imaging technology. In this work the image is first subjected to a 2-D Wiener filter using a 3×3 mask; the Wiener filter removes redundant noise and unwanted pixels. The cross-correlation coefficient between the original image x and the filtered image y, which we represent as r, is then given by

r = ∑ (xi − x̄)(yi − ȳ) / √( ∑ (xi − x̄)² ∑ (yi − ȳ)² )

C. Energy

For calculating the energy coefficient, the image is subjected to a wavelet decomposition using the Daubechies wavelet for up to 5 levels. The wavelet decomposition convolves the image with a low-pass filter to generate the approximation coefficients and a high-pass filter to generate the detail coefficients, followed by down-sampling; the input image for each level is taken as the approximation image of the previous level. Another related use is in image transforms: for example, the DCT transform (the basis of the JPEG compression method) transforms 8×8 blocks of pixels into a matrix of transformed coefficients. For typical images it turns out that, while the original 8×8 block has its energy evenly distributed among the 64 pixels, the




transformed image has its energy concentrated in the upper-left "pixels". The decomposition operation generates the approximation coefficients A5 and the detail coefficients B5, B4, B3, B2, B1, as shown below. The energy coefficient is

Ea = ∑(A5) / ∑(B5 + B4 + B3 + B2 + B1)

Figure 3: Wavelet decomposition of an image

The final feature vector is taken as the composite formed of the above three components, viz.

F = {Cc, En, Ea}

Classification is done by mapping the feature vectors of a training set and a testing set into appropriate feature spaces and calculating differences using the Manhattan distance.

E. Manhattan Distance

The Manhattan distance is the distance between two points in a grid based on a strictly horizontal and/or vertical path (that is, along the grid lines), as distinct from the diagonal or "as the crow flies" distance; it is the distance between two points measured along axes at right angles. The Manhattan distance is the simple sum of the horizontal and vertical components, whereas the diagonal distance might be computed by applying the Pythagorean theorem. The name alludes to the grid layout of the streets of Manhattan, which constrains the shortest path a car could take between two points in the city. The formula for this distance between a point X = (X1, X2, ...) and a point Y = (Y1, Y2, ...) is:

d(X, Y) = ∑ |Xi − Yi|

where n is the number of variables, and Xi and Yi are the values of the ith variable at points X and Y respectively. For the 8-puzzle, if xi(s) and yi(s) are the x and y coordinates of tile i in state s, and if x̄i and ȳi are the x and y coordinates of tile i in the goal state, the heuristic is:

h(s) = ∑ ( |xi(s) − x̄i| + |yi(s) − ȳi| )

ALGORITHM FOR CALCULATING STATISTICAL TEXTURE FEATURES

Input: query image for which the statistical features are to be computed.
Output: feature vector.
1. Calculate the entropy (En) of the query image using the −sum(p .* log2(p)) formula.
2. Apply the Wiener filter to the query image and then calculate the correlation coefficient (Cc) between the query image and the filtered image.
3. Apply the 5-level wavelet decomposition to the input query image and calculate the energy (Ea) of the coefficients.
4. Form the feature vector F of the query image from En, Ea, and Cc.
Then compare the feature vector F of the query image with those of the database images; if the features are equal, the image is matched.
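The four algorithm steps can be sketched end to end in plain Python. Two simplifications below are assumptions made for brevity: a plain 3×3 mean smoother stands in for the 2-D Wiener filter, and a single Haar level stands in for the 5-level Daubechies decomposition; the entropy, correlation, and Manhattan-distance formulas follow the text directly.

```python
import math

def entropy(img):
    # En: -sum(p * log2(p)) over the histogram of differences between
    # horizontally adjacent pixels (step 1 of the algorithm).
    diffs = [row[j + 1] - row[j] for row in img for j in range(len(row) - 1)]
    counts = {}
    for v in diffs:
        counts[v] = counts.get(v, 0) + 1
    n = len(diffs)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def smooth3x3(img):
    # Stand-in for the paper's 3x3 Wiener filter: a 3x3 mean smoother
    # (a true Wiener filter additionally adapts to the local variance).
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            patch = [img[a][b] for a in range(max(0, i - 1), min(h, i + 2))
                               for b in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(patch) / len(patch)
    return out

def correlation(x, y):
    # Cc: cross-correlation coefficient r between the original image x
    # and the filtered image y (step 2).
    xs = [v for row in x for v in row]
    ys = [v for row in y for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = math.sqrt(sum((a - mx) ** 2 for a in xs) *
                    sum((b - my) ** 2 for b in ys))
    return num / den if den else 0.0

def energy(img):
    # Ea: ratio of approximation to detail coefficients (step 3),
    # here from a single Haar level instead of 5 Daubechies levels.
    approx = detail = 0.0
    for row in img:
        for j in range(0, len(row) - 1, 2):
            approx += abs(row[j] + row[j + 1]) / 2
            detail += abs(row[j] - row[j + 1]) / 2
    return approx / detail if detail else 0.0

def feature_vector(img):
    # Step 4: F = {Cc, En, Ea}
    return [correlation(img, smooth3x3(img)), entropy(img), energy(img)]

def manhattan(f, g):
    # d(X, Y) = sum_i |X_i - Y_i|
    return sum(abs(a - b) for a, b in zip(f, g))

query = [[0, 8, 0, 8], [8, 0, 8, 0], [0, 8, 0, 8], [8, 0, 8, 0]]
flat = [[5, 5, 5, 5]] * 4
# The best match is the database image at the smallest distance.
print(manhattan(feature_vector(query), feature_vector(flat)))
```

Note that a perfectly flat image yields an entropy of zero, matching the claim in Section 3.A, and the Manhattan distance between identical feature vectors is zero, so matching reduces to a nearest-neighbour search over the database.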




4. EXPERIMENTAL RESULTS

The proposed algorithm has been evaluated on FVC2004. The FVC2004 databases are more difficult than the FVC2000/FVC2002 ones: the organizers particularly insisted on distortion and on dry and wet fingerprints. Especially in databases DB1 and DB3 of FVC2004, the distortion between some fingerprints from the same finger is large. Our work addresses the problem of distorted fingerprint matching, so the evaluation of the proposed algorithm is mainly focused on DB1 and DB3 of FVC2004. The proposed algorithm is also compared with the one described by Luo et al. and the one proposed by Bazen et al.

Fig. 4. Experimental results of the proposed algorithm on 102_3.tif and 102_5.tif in FVC2004 DB1. The images have been cropped and scaled for view. (a) 102_3.tif. (b) Enhanced image of 102_3. (c) 102_5.tif. (d) Enhanced image of 102_5. The similarity of these two fingerprints is 0.420820.

Fig. 5. Experimental results of the proposed algorithm on 103_2.tif and 103_4.tif in FVC2004 DB3. The images have been scaled for view. (a) 103_2.tif. (b) Enhanced image of 103_2. (c) 103_4.tif. (d) Enhanced image of 103_4. The similarity of these two fingerprints is 0.484111.

5. CONCLUSION

This paper has proposed a quick and efficient technique for fingerprint recognition using a set of statistical texture features. The features are derived from a correlation coefficient, an entropy coefficient, and an energy coefficient, and can be computed alongside the fingerprint minutiae points. Moreover, such texture features can also be extracted from color fingerprint images: the fingerprint images may be divided into separate red, green, and blue components, and the output stage recombines the true-color components. Future work would involve combining color- and shape-based techniques to study whether these can be used to improve recognition rates.




6. REFERENCES

[1]. A. Lanitis, “A Survey of the Effects of Aging on Biometric Identity Verification”, DOI:10.1504/IJBM.2010.030415.
[2]. A. K. Jain, A. Ross and S. Prabhakar, “An Introduction to Biometric Recognition”, IEEE Transactions on Circuits and Systems for Video Technology, special issue on Image- and Video-Based Biometrics, 14, 2004, pp. 4-20.
[3]. A. K. Jain, A. Ross and S. Pankanti, “Biometrics: A Tool for Information Security”, IEEE Transactions on Information Forensics and Security, 1, 2000.
[4]. A. K. Jain and A. Ross, “Fingerprint Matching Using Minutiae and Texture Features”, Proceedings of the International Conference on Image Processing (ICIP), 2001, pp. 282-285.
[5]. A. K. Jain, L. Hong, S. Pankanti and R. Bolle, “An Identity-Authentication System Using Fingerprints”, Proceedings of the IEEE, 85, 1997, pp. 1365-1388.
[6]. D. Maltoni, D. Maio, A. K. Jain and S. Prabhakar, Handbook of Fingerprint Recognition.
[7]. S. Chikkerur, S. Pankanti, A. Jea and R. Bolle, “Fingerprint Representation Using Localized Texture Features”, The 18th International Conference on Pattern Recognition, 2006.
[8]. A. A. A. Yousiff, M. U. Chowdhury, S. Ray and H. Y. Nafaa, “Fingerprint Recognition System Using Hybrid Matching Techniques”, 6th IEEE/ACIS International Conference on Computer and Information Science, 2007, pp. 234-240.
[9]. O. Zhengu, J. Feng, F. Su and A. Cai, “Fingerprint Matching with Rotation-Descriptor Texture Features”, The 18th International Conference on Pattern Recognition, 2006, pp. 417-420.
[10]. A. K. Jain, S. Prabhakar, L. Hong and S. Pankanti, “Filterbank-Based Fingerprint Matching”, IEEE Transactions on Image Processing, 9, 2000, pp. 846-853.

AUTHORS

K. Phalguna Rao completed his M.Tech in Information Technology from Andhra University and is presently pursuing a PhD. He is a life member of ISTE. He is working as a Professor in the Dept. of CSE, has published several papers in international journals and in international and national conferences, and has attended several international and national workshops. His research interest areas are Database Systems, Network Security, Cloud Computing, and Bioinformatics.

M. Charan Kumar received his degree in Computer Science Engineering from Sree Kalahasthi Institute of Technology, Jawaharlal Nehru Technological University Anantapur, in 2010, and his M.Tech degree in Computer Science Engineering from Audisankara Institute of Technology, Jawaharlal Nehru Technological University Anantapur, in 2014. He has published one international journal paper and has participated in four national conferences and one international conference. He worked as communication faculty for 3 years in Kerala and Karnataka.



Rely on Administration with Multipath Routing for Intrusion Threshold in Heterogeneous WSNs

CH. Sendhil Kumar, Department of CSE, ASCET, Gudur, A.P, India, chandukumar7574@gmail.com
S. Dorababu, Department of CSE, ASCET, Gudur, A.P, India, dorababu.sudarsa@gmail.com

ABSTRACT

In this paper we propose redundancy management of heterogeneous wireless sensor networks (HWSNs), utilizing multipath routing to answer user queries in the presence of unreliable and malicious nodes. The key idea of the redundancy management is to exploit the trade-off between energy consumption and the gain in reliability, timeliness, and security, so as to maximize the useful lifetime of the system. We formulate this trade-off as an optimization problem of dynamically choosing the best redundancy level to apply to multipath routing for intrusion tolerance, with the objective that the query response success probability is maximized while prolonging the useful lifetime. Furthermore, we consider this optimization problem for the case in which a voting-based distributed intrusion detection algorithm is applied to detect and evict malicious nodes in a HWSN. DSR was specifically designed for use in multi-hop wireless ad hoc networks; the ad hoc mode allows the network to be entirely self-organizing and self-configuring, meaning there is no need for an existing network infrastructure or administration. To meet these challenges we use the Gradual Cluster election Algorithm (GCA), which reduces the energy usage of local clusters and of the network as a whole: it elects the gradual cluster head among the nodes that are eligible for that role and improves the energy efficiency of the network.

Key words — heterogeneous wireless sensor networks; multipath routing; intrusion detection; reliability; security; energy conservation.

I. INTRODUCTION

A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that monitor physical or environmental conditions, such as temperature, noise, and pressure, and cooperatively pass their data through the network to a main location. More modern wireless networks are bi-directional, additionally allowing control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such wireless networks are used in many industrial and consumer applications, such as industrial process monitoring and control and machine health monitoring. Wireless sensor networks (WSNs) have given rise to several completely new multipath routing protocols designed specifically for sensor networks.

Multipath routing is an effective approach which selects a number of paths to deliver data from source to destination in wireless sensor networks; it establishes multiple paths between the source-destination pair. With single-path routing, a new route discovery procedure must be initiated on failure, which increases energy use. Node failure also causes packets to be dropped and may delay delivering the data to the sink, so that the real-time demands of multimedia applications are not met. Multipath routing increases the number of feasible paths and thereby improves the resilience and throughput of the transmissions. Multipath routing is also economical under heavy

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

186

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

load conditions compared with single-path routing. Multipath routing is mostly used either for load balancing or for reliability. Load balancing is achieved by spreading energy utilization among the nodes, increasing network lifetime. Quality of service (QoS) refers to the ability of the network to provide a given service to selected network traffic over various technologies. The primary goal of QoS is to offer priority, including the dedicated bandwidth, controlled jitter, and latency required by real-time and interactive traffic, together with improved loss characteristics. An intrusion detection system (IDS) aims to detect and remove malicious nodes. In sensor networks, most adversaries target the routing layer, since this allows them to control the data flowing through the network. Moreover, sensor networks are mostly about reporting data back to the base station, and disrupting this reporting would make an attack a successful one. For such sensor networks, therefore, the most appropriate architecture for an IDS is network-based rather than host-based.

ISBN: 378 - 26 - 138420 - 5

neighbor nodes, and on the remaining energy level and one-hop neighbor information (GCA-ON), which elects cluster heads according to residual energy and the relative position information of sensor nodes. This work mainly focuses on cluster head selection. It concentrates on reducing the energy consumption of nearby clusters and of the overall network. It elects the cluster head among the nodes that are feasible candidates and demonstrates the resulting energy efficiency. The cluster leader election is based on energy; the leader sends announcements to inter-cluster and intra-cluster members. The algorithm also finds the average energy of the overall network and re-elects a new cluster head accordingly.

II. LITERATURE SURVEY

C. Karlof and D. Wagner (2003) [16] observed that none of the existing routing protocols had been designed with security as a goal. They propose security goals for routing in sensor networks and show how attacks against ad hoc and peer-to-peer networks can be adapted into powerful attacks against sensor networks. I. R. Chen et al. [3] noted that data sensing is used in widespread applications in areas such as security and safety monitoring, and command and control on the battlefield. They developed an adaptive fault-tolerant quality of service (QoS) control algorithm based on hop-by-hop data delivery using source and path redundancy, with the objective of satisfying application QoS requirements.

The network-based IDS uses raw network packets as its data source. It listens on the network and captures and examines individual packets in real time. DSR is a reactive routing protocol that is able to manage a MANET without requiring the periodic table-update messages that table-driven routing protocols use. DSR was designed for use in multi-hop wireless ad hoc networks. The ad hoc protocol allows the network to be completely self-organizing and self-configuring, which means there is no need for an existing network infrastructure or administration. To limit the bandwidth consumed, the process of finding a route is only executed when a route is required by a node (on-demand routing).

Bao et al. [15] proposed a scalable cluster-based hierarchical trust management protocol for WSNs to deal effectively with malicious nodes. A novel probability model describes a HWSN comprising many sensor nodes with different social and quality of service (QoS) behaviours. Anomaly-based intrusion detection is evaluated in terms of both the detection probability and the false positive probability.

In DSR the sender (source, initiator) determines the whole path from the source to the destination node (source routing) and deposits the addresses of the intermediate nodes of the route in the packets. Compared with other reactive routing protocols such as ABR or SSA, DSR is beacon-less, which means that no hello messages are used between the nodes to notify their neighbours of their presence. The Gradual Cluster election Algorithm (GCA) gradually elects cluster heads according to their proximity to

A. P. R. da Silva et al. (2005) [10] note that known defences can protect WSNs against some kinds of attacks, but there are several attacks for which no prevention technique is known. For these cases, it is necessary to use some mechanism of intrusion detection. Besides keeping the intruder from causing damage to the network, the intrusion detection system (IDS) can acquire information related to attack techniques, helping in the development of prevention systems.

In this work we propose an IDS that fits the demands and restrictions of WSNs. Simulation results reveal that the proposed IDS is efficient and accurate in detecting various kinds of simulated attacks. The traditional technique performs efficient redundancy management of a clustered HWSN to prolong its lifetime operation in the presence of unreliable and malicious nodes. We address the trade-off between energy consumption and QoS gain in reliability, timeliness, and security, with the goal of maximizing the lifetime of a clustered HWSN while satisfying application QoS requirements in the context of multipath routing. More specifically, we analyze the optimal amount of redundancy through which data are delivered to a remote sink in the presence of unreliable and malicious nodes, so that the query success probability is maximized while maximizing the HWSN lifetime.

SYSTEM MODEL


Because of constrained energy, a packet is transmitted hop by hop without using acknowledgement or retransmission. All sensors are subject to capture attacks (inside attackers). Since all sensors are randomly placed in the operational area, the capture rate applies to both CHs and SNs, and compromised nodes are likewise randomly distributed over the operational area [19]. A compromised node performs the two most energy-conserving attacks: bad-mouthing attacks and packet-dropping attacks. Environmental conditions cause a node to fail with a certain probability, including hardware failure and transmission failure (due to noise and interference). Hostility to the HWSN is characterized by a per-node capture rate, determined from historical data and knowledge of the target application environment.

The WSN consists of sensors of two distinct capability types: cluster heads (CHs) and sensor nodes (SNs). CHs are superior to SNs in energy and computational resources [3]. CHs and SNs are deployed randomly over the operational area to ensure coverage, distributed according to homogeneous spatial Poisson processes. The radio ranges of CHs and SNs for transmission are initialized at deployment, and the radio range and transmission power of both CHs and SNs are dynamically adjusted throughout the system lifetime to maintain connectivity between CHs and between SNs. Multi-hop routing is required for communication between two nodes separated by more than a single hop.
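As an illustrative sketch (not the authors' code), the random deployment described above can be simulated by placing node positions uniformly over the operational area. The parameters are assumptions taken from the performance-evaluation setup later in this paper; for simplicity the expected node counts implied by the cell densities are used, whereas a full homogeneous spatial Poisson process would additionally draw these counts from a Poisson distribution.

```python
import random

AREA = 200.0        # side of the square deployment area (m), assumed from Sec. IV
CELL = 20.0         # side of one density cell (m)
LAMBDA_SN = 35      # sensor nodes per cell
LAMBDA_CH = 1       # cluster heads per cell

def deploy(seed=0):
    """Place the expected number of SNs and CHs uniformly at random."""
    rng = random.Random(seed)
    cells = int((AREA / CELL) ** 2)                 # 100 cells
    n_sn, n_ch = LAMBDA_SN * cells, LAMBDA_CH * cells

    def place():
        return (rng.uniform(0, AREA), rng.uniform(0, AREA))

    sns = [place() for _ in range(n_sn)]
    chs = [place() for _ in range(n_ch)]
    return sns, chs

sns, chs = deploy()
print(len(sns), len(chs))   # expected counts from these densities: 3500 100
```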

Fig. 2 Block diagram of WSN

A WSN should not only satisfy application-specific QoS requirements such as reliability, timeliness, and security, but also minimize energy consumption to prolong the system useful lifetime. It is commonly believed in the research community that clustering is an effective solution for achieving scalability, energy conservation, and reliability. Using heterogeneous nodes can further improve performance and prolong the system lifetime. In the existing system, DSR is used to address this problem. DSR is a reactive routing protocol which can manage a MANET without using the periodic table-update messages that table-driven routing protocols do. DSR was specifically designed for use in multi-hop wireless ad hoc networks. The ad hoc protocol allows the network to be completely self-organizing and self-configuring, which means there is no need for an existing network infrastructure or administration.

Fig. 1 Cluster-based WSN architecture.


and in giving the user confidence that the new system will work and be effective. The implementation stage involves careful planning, investigation of the existing system and its constraints on implementation, design of methods to achieve the changeover, and evaluation of changeover methods.

Dynamic Source Routing Protocol DSR is a reactive routing protocol which can manage a MANET without using the periodic table-update messages that table-driven routing protocols do. DSR was specifically designed for use in multi-hop wireless ad hoc networks. The ad hoc protocol allows the network to be completely self-organizing and self-configuring, which means there is no need for an existing network infrastructure or administration.

SCHEMES: 1. Multipath Routing 2. Intrusion Tolerance 3. Energy Efficiency 4. Simulation Procedure

To limit the bandwidth consumed, the procedure to discover a route is only executed when a route is required by a node (on-demand routing). In DSR the sender (source, initiator) determines the entire path from the source to the destination node (source routing) and stores the addresses of the intermediate nodes of the route in the packets. Compared with other reactive routing protocols like ABR or SSA, DSR is beacon-less, which means there are no hello messages used between the nodes to inform their neighbours of their presence. DSR was developed for MANETs with a small diameter of between 5 and 10 hops, in which the nodes move around only at a moderate speed. DSR is based on link-state algorithms, meaning that every node is able to save the best path to a destination; if a change appears in the network topology, the whole network learns of it by flooding.
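A minimal sketch of DSR route discovery as described above, on an assumed toy topology: the source floods a route request, each node appends itself to the accumulated source route, and the first request to reach the destination yields the route that is then used for source routing. This is an illustration, not the full protocol (no route caching or maintenance).

```python
from collections import deque

def dsr_discover(graph, src, dst):
    """Breadth-first flood of route requests; returns the first
    complete source route found, or None if the network is partitioned."""
    seen = {src}
    queue = deque([[src]])           # each entry is an accumulated route
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dst:
            return route             # destination returns the route reply
        for nbr in graph[node]:      # rebroadcast to neighbours
            if nbr not in seen:
                seen.add(nbr)
                queue.append(route + [nbr])
    return None

topology = {'S': ['A', 'B'], 'A': ['S', 'D'], 'B': ['S', 'C'],
            'C': ['B', 'D'], 'D': ['A', 'C']}
print(dsr_discover(topology, 'S', 'D'))  # ['S', 'A', 'D']
```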

1. Multipath Routing In this module, multipath routing is regarded as a highly effective mechanism for fault and intrusion tolerance to improve data delivery in WSNs. The basic idea is that the probability of at least one path reaching the sink node or base station increases as more paths perform data delivery. While most prior research focused on using multipath routing to improve reliability, some attention has been paid to using multipath routing to tolerate insider attacks. These studies, however, largely ignored the trade-off between QoS gain and energy consumption, which can adversely shorten the system lifetime.
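The basic idea above, that delivery probability grows with the number of paths, can be quantified under a simple independence assumption: if each path delivers with probability p, then at least one of m disjoint paths succeeds with probability 1 - (1 - p)^m. A small sketch (p = 0.7 is an arbitrary example value):

```python
def delivery_prob(p, m):
    """Probability that at least one of m independent paths,
    each succeeding with probability p, delivers the data."""
    return 1 - (1 - p) ** m

for m in (1, 2, 4, 8):
    print(m, round(delivery_prob(0.7, m), 4))
```

The gain per extra path shrinks while the energy cost grows roughly linearly in m, which is exactly the trade-off the redundancy-level optimization targets.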

Disadvantages: increased overhead compared with traditional table-driven protocols.

2. Intrusion Tolerance

The maintenance protocol does not locally repair a broken link; the broken link is only communicated to the initiator. The DSR protocol is only efficient in MANETs with fewer than about 200 nodes. Problems appear with rapid movement of many hosts, so the nodes should only move around at a moderate speed. Flooding the network can cause collisions between packets. There is also a small time delay at the start of a new connection, because the initiator must first find the route to the target.

In these schemes for intrusion tolerance via multipath routing, there are two key problems to solve [1]: (1) how many paths to use, and (2) what paths to use. To the best of our knowledge, we are the first to address the "how many paths to use" problem. For the "what paths to use" problem, our approach differs from existing work in that we do not consider specific routing protocols.

III. SYSTEM IMPLEMENTATION

3. Energy Efficiency In this module, there are two approaches by which an energy-efficient IDS can be implemented in WSNs. One approach, particularly applicable to flat WSNs, is for an intermediate node to feed back the maliciousness and energy

Implementation is the stage of the project in which the theoretical design is turned into a working system. It is therefore regarded as the most critical stage in achieving a successful new system

status of its neighbour nodes to the sender node (e.g., the source or sink node), which can then use this knowledge to route packets so as to avoid nodes with unacceptable maliciousness or energy status. The second approach, which we adopt in this paper, uses local host-based IDS for energy conservation.
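A hedged sketch of the voting-based distributed intrusion detection used in this paper (the function names and parameter values are ours, for illustration): m neighbour nodes each cast a verdict on a target node, which is evicted on a simple majority vote. Given a per-voter false-positive probability, the probability that the scheme as a whole mis-convicts a good node is a binomial tail:

```python
from math import comb

def evict(votes):
    """votes: booleans from the m voters (True = 'malicious');
    the target node is evicted on a simple majority."""
    return sum(votes) > len(votes) / 2

def majority_false_positive(m, pfp):
    """Probability that a majority of m independent voters, each with
    per-voter false-positive probability pfp, mis-votes on a good node."""
    need = m // 2 + 1
    return sum(comb(m, k) * pfp ** k * (1 - pfp) ** (m - k)
               for k in range(need, m + 1))

print(evict([True, True, False]))                  # True
print(round(majority_false_positive(5, 0.05), 6))  # 0.001158
```

Raising the number of voters m sharpens detection but costs more energy per detection round, which is why m appears as a design parameter in the lifetime optimization.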


Electing the Cluster and Cluster Heads: This module fundamentally concentrates on cluster head selection. The Gradual Cluster head election Algorithm (GCA) is used for choosing the clusters and cluster heads. It elects the cluster head among the nodes that are feasible candidates and demonstrates the resulting energy efficiency. The cluster leader election is based on energy; the leader sends messages to inter-cluster and intra-cluster members. The algorithm finds the average energy of the overall network and re-elects a new cluster head accordingly. The energy consumption is lower than that of the dynamic source routing protocol.
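The election and re-election steps above can be sketched as follows. This is a simplification with assumed rules, not the authors' implementation: the node with the highest residual energy becomes cluster head, and re-election is triggered once the head's energy falls below the cluster-wide average.

```python
def elect_head(energies):
    """Elect the node with the highest residual energy as cluster head.
    energies: dict node_id -> residual energy (J)."""
    return max(energies, key=energies.get)

def needs_reelection(head, energies):
    """Trigger re-election once the head drops below the cluster average."""
    avg = sum(energies.values()) / len(energies)
    return energies[head] < avg

cluster = {'n1': 0.8, 'n2': 0.5, 'n3': 0.75}
head = elect_head(cluster)
print(head)                             # n1
cluster[head] = 0.4                     # the head drains over successive rounds
print(needs_reelection(head, cluster))  # True
```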

4. Simulation Procedure This module evaluates the cost of executing the dynamic redundancy management protocol described above, including periodic clustering, periodic intrusion detection, and query processing via multipath routing, in terms of energy consumption.

Deploying Redundancy Management System:

IDS: An Intrusion Detection System (IDS) [9] is a device or software application that monitors network or system activities for malicious activity and produces reports to a management station. Intrusion Detection and Prevention Systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, and reporting attempts.

The effective redundancy management system is then deployed: after the cluster and cluster head selection, the redundancy management scheme is applied. The QoS is thereby improved; reliability and timeliness are enhanced. Data is transferred from source to sink, and through this redundancy scheme the data is routed from source to sink, so the lifetime of the nodes is extended.

Gradual Cluster Head Election Algorithm (GCA): In the proposed technique a path must be given and an alternate path must be chosen by the routing method. We implement the trust scheme to evaluate the neighbour nodes, so there is no requirement for a separate clustering scheme, and the lightweight IDS [20] can be an additional feature to catch the different attack types. Our modification mainly concentrates on energy consumption and intrusion tolerance. The modified protocol acts according to the trust implementation: it picks the leader when the trust value is achieved, so there will not be any routing overhead. It improves the QoS, and the energy consumption is also lower.

IV. PERFORMANCE EVALUATION The HWSN comprises 3000 SN nodes and 100 CH nodes deployed in a square area of 200m × 200m. Nodes are distributed over the area following a Poisson process with densities λSN = 35 nodes/(20 × 20 m2) and λCH = 1 node/(20 × 20 m2) at deployment time. The radio ranges rSN and rCH are dynamically adjusted between 5m to 25m and 25m to 120m, respectively, to maintain network connectivity. The initial energy levels of SN and CH nodes are ESN = 0.8 Joules and ECH = 10 Joules, so that they exhaust their energy at about the same time. The energy dissipation to run the transmitter and receiver circuitry is 50 nJ/bit, plus the energy used by the transmit amplifier to achieve an acceptable signal-to-noise ratio. Figs. 3 and 4 show the best combination (mp, ms) under low and high capture rates for lifetime maximization.
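The 50 nJ/bit electronics figure above fits the standard first-order radio model; the amplifier coefficient below is an assumed typical value (100 pJ/bit/m2), since it is not stated in the text:

```python
E_ELEC = 50e-9    # J/bit for transmitter/receiver electronics (from the text)
E_AMP = 100e-12   # J/bit/m^2 amplifier coefficient (assumed, not in the text)

def tx_energy(bits, d):
    """First-order radio model: electronics cost plus d^2 amplifier term."""
    return E_ELEC * bits + E_AMP * bits * d * d

def rx_energy(bits):
    return E_ELEC * bits

# One 1000-bit packet over a 25 m hop (the upper SN radio range above):
total = tx_energy(1000, 25.0) + rx_energy(1000)
print(round(total * 1e6, 1), "microjoules")   # 162.5 microjoules
```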

The Gradual Cluster head election Algorithm (GCA) is used for choosing the clusters and cluster heads. This technique chiefly concentrates on cluster head selection. It focuses on decreasing the energy consumption of nearby clusters and of the overall network. It elects the cluster head among the nodes that are feasible candidates and demonstrates the energy efficiency [20]. The cluster leader election is based on energy; the leader sends messages to inter-cluster and intra-cluster members. It finds the average energy of the overall network and re-elects a new cluster head.


redundancy (mp) and source redundancy (ms), as well as the best intrusion detection settings in terms of the number of voters (m) and the intrusion invocation interval (TIDS), under which the lifetime of a heterogeneous wireless sensor network is maximized while satisfying the reliability, timeliness, and security requirements of query processing applications in the presence of unreliable wireless communication and malicious nodes. Finally, we applied our analysis results to the design of a dynamic redundancy management algorithm to identify and apply the best design parameter settings at runtime, in response to environment changes, to prolong the system lifetime.

Fig. 3 Effect of TIDS on MTTF under low capture rate

The WSN must not only meet the application-specific QoS demands such as reliability, timeliness, and security, but also limit energy consumption to prolong the system useful lifetime. It is generally believed in the research community that clustering is an efficient approach for attaining scalability, energy conservation, and reliability. Using heterogeneous nodes can further improve performance and prolong the system lifetime. In the current system DSR is used to address this concern. DSR is a reactive routing protocol which is able to manage a MANET without needing the regular table-update messages of table-driven routing protocols. DSR was specifically designed for use in multi-hop wireless ad hoc networks; the ad hoc protocol enables the network to be completely self-organizing and self-configuring, with no need for an existing network infrastructure or administration. To meet the aforementioned problem, the Gradual Cluster head election Algorithm lowers the energy usage of nearby clusters as well as the overall network. It elects the cluster head among the nodes that are feasible candidates and demonstrates the energy efficiency of the network.

Fig. 4 Effect of TIDS on MTTF under high capture rate


VI. REFERENCES

Fig. 5 Energy consumption in the DSR and GCA methods (energy consumption vs. iterations)


V. CONCLUSION In this paper we performed a trade-off analysis of energy consumption versus QoS gain in reliability, timeliness, and security for redundancy management of clustered heterogeneous wireless sensor networks, utilizing multipath routing to answer user queries. We developed a novel probability model to analyze the best redundancy level in terms of path

[1] H. Al-Hamadi and I.-R. Chen, "Redundancy Management of Multipath Routing for Intrusion Tolerance in Heterogeneous Wireless Sensor Networks," IEEE Trans. Network and Service Management, vol. 10, 2013.

[2] E. Felemban, C.-G. Lee, and E. Ekici, "MMSPEED: multipath Multi-SPEED protocol for QoS guarantee of reliability and timeliness in wireless sensor networks," IEEE Trans. Mobile Comput., vol. 5, no. 6, pp. 738-754, 2006.
[3] I. R. Chen, A. P. Speer, and M. Eltoweissy, "Adaptive Fault-Tolerant QoS Control Algorithms for Maximizing System Lifetime of Query-Based Wireless Sensor Networks," IEEE


Trans. on Dependable and Secure Computing, vol. 8, no. 2, pp. 161-176, 2011.


MANET," InternationalConference on Communication and Networking, 2008, pp.

[4] M. Yarvis, N. Kushalnagar, H. Singh, A. Rangarajan, Y. Liu, and S. Singh, "Exploiting heterogeneity in sensor networks," 24th Annu. Joint Conf. of the IEEE Computer and Communications Societies (INFOCOM), 2005, pp. 878-890, vol. 2.


[19] Trust Management of Multipath Routing for Intrusion Tolerance in Heterogeneous WSN.
[20] Trust Based Voting Scheme and Optimal Multipath Routing for Intrusion Tolerance in Wireless Sensor Network.

[5] H. M. Ammari and S. K. Das, "Promoting Heterogeneity, Mobility, and Energy-Aware Voronoi Diagram in Wireless Sensor Networks," IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 7, pp. 995-1008, 2008.
[6] X. Du and F. Lin, "Improving routing in sensor networks with heterogeneous sensor nodes," IEEE 61st Vehicular Technology Conference, 2005, pp. 2528-2532.
[7] S. Bo, L. Osborne, X. Yang, and S. Guizani, "Intrusion detection techniques in mobile ad hoc and wireless sensor networks," IEEE Wireless Commun., vol. 14, no. 5, pp. 56-63, 2007.
[8] I. Krontiris, T. Dimitriou, and F. C. Freiling, "Towards intrusion detection in wireless sensor networks," 13th European Wireless Conference, Paris, France, 2007.
[9] J. Deng, R. Han, and S. Mishra, "INSENS: Intrusion tolerant routing for wireless sensor networks," 2006.
[10] A. P. R. da Silva, M. H. T. Martins, B. P. S. Rocha, A. A. F. Loureiro, L. B. Ruiz, and H. C. Wong, "Decentralized intrusion detection in wireless sensor networks," 1st ACM Workshop on Quality of Service & Security in Wireless and Mobile Networks, Montreal, Quebec, Canada, 2005.
[11] Y. Zhou, Y. Fang, and Y. Zhang, "Securing wireless sensor networks: a survey," IEEE Communications Surveys & Tutorials, vol. 10, no. 3, pp. 6.
[12] V. Bhuse and A. Gupta, "Anomaly intrusion detection in wireless sensor networks," J. High Speed Netw., vol. 15, no. 1, pp. 33-51, 2006.
[13] G. Bravos and A. G. Kanatas, "Energy consumption and trade-offs on wireless sensor networks," 16th IEEE Int. Symp. on Personal, Indoor and Mobile Radio Communications, pp. 1279-1283, 2005.
[14] A. P. R. da Silva, M. H. T. Martins, B. P. S. Rocha, A. A. F. Loureiro, L. B. Ruiz, and H. C. Wong, "Decentralized intrusion detection in wireless sensor networks," 1st ACM Workshop on Quality of Service & Security in Wireless and Mobile Networks, Montreal, Quebec, Canada, 2005.
[15] I. R. Chen, F. Bao, M. Chang, and J. H. Cho, "Trust management for encounter-based routing in delay tolerant networks," IEEE Globecom 2010, Miami, FL, Dec. 2010.
[16] C. Karlof and D. Wagner, "Secure routing in wireless sensor networks: attacks and countermeasures," IEEE Int. Workshop on Sensor Network Protocols and Applications, pp. 113-127, 2003.
[17] Y. Lan, L. Lei, and G. Fuxiang, "A multipath secure routing protocol based on malicious node detection," Chinese Control and Decision Conference, 2009, pp. 4323-4328.
[18] D. Somasundaram and R. Marimuthu, "A Multipath Reliable Routing for detection and isolation of malicious nodes in


Electric power generation using piezoelectric crystal
P. BALAJI NAGA YASHWANTH, GAUTHAM KUMAR MOKA, DONEPUDI JASHWANTH

ANIL NEERUKONDA INSTITUTE OF TECHNOLOGY AND SCIENCES

ABSTRACT

Key words: energy generation, piezoelectric crystals

The usefulness of most high-technology devices such as cell phones, computers, and sensors is limited by the storage capacity of batteries. In the future, these limitations will become more pronounced as the demand for wireless power outpaces battery development, which is already nearly optimized. Thus, new power generation techniques are required for the next generation of wearable computers, wireless sensors, and autonomous systems to be feasible. Piezoelectric materials are excellent power generation devices because of their ability to couple mechanical and electrical properties. For example, when an electric field is applied to a piezoelectric material, a strain is generated and the material is deformed. Conversely, when a piezoelectric is strained it produces an electric field; therefore, piezoelectric materials can convert ambient vibration into electrical power. Piezoelectric materials have long been used as sensors and actuators; however, their use as electrical generators is less established. A piezoelectric power generator has great potential for some remote applications such as in vivo sensors, embedded MEMS devices, and distributed networking. Developing piezoelectric generators is challenging because of their poor source characteristics (high voltage, low current, high impedance) and relatively low power output. This paper presents a theoretical analysis to increase piezoelectric power generation, verified with experimental results.

INTRODUCTION Mechanical stresses applied to piezoelectric materials distort internal dipole moments and generate electrical potentials (voltages) in direct proportion to the applied forces. These same crystalline materials also lengthen or shorten in direct proportion to the magnitude and polarity of applied electric fields. Because of these properties, these materials have long been used as sensors and actuators. One of the earliest practical applications of piezoelectric materials was the development of the first SONAR system in 1917 by Langevin, who used quartz to transmit and receive ultrasonic waves. In 1921, Cady first proposed the use of quartz to control the resonant frequency of oscillators. Today, piezoelectric sensors (e.g., force, pressure, acceleration) and actuators (e.g., ultrasonic, micro-positioning) are widely available. The same properties that make these materials useful for sensors can also be utilized to generate electricity. Such materials are capable of converting the mechanical energy of compression into electrical energy, but developing piezoelectric generators is challenging because of their poor source characteristics (high voltage, low current, high impedance). This is especially true at low frequencies and relatively low power output.


PIEZOELECTRIC CRYSTAL? The phenomenon of generation of a voltage under mechanical stress is referred to as the direct piezoelectric effect, and the mechanical strain produced in the crystal under electric stress is called the converse piezoelectric effect.
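A small numeric sketch of the direct effect defined above. All values are assumed, typical figures for a PZT element rather than measurements from this paper, and they are chosen so the result lands in the few-volt range reported later:

```python
D33 = 400e-12    # C/N, charge coefficient of a typical PZT element (assumed)
CAP = 100e-9     # F, capacitance of the element (assumed)

def open_circuit_voltage(force_newtons):
    """Direct piezoelectric effect: charge Q = d33 * F, voltage V = Q / C."""
    charge = D33 * force_newtons
    return charge / CAP

print(round(open_circuit_voltage(700), 2), "V")   # 2.8 V for a 700 N footstep
```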

• The effect is explained by the displacement of ions in crystals that have a non-symmetrical unit cell.
• When the crystal is compressed, the ions in each unit cell are displaced, causing the electric polarization of the unit cell.
• Because of the regularity of the crystalline structure, these effects accumulate, causing the appearance of an electric potential difference between certain faces of the crystal.
• When an external electric field is applied to the crystal, the ions in each unit cell are displaced by electrostatic forces, resulting in the mechanical deformation of the whole crystal.

P = d × stress and E = strain/d
Piezoelectricity, discovered by the Curie brothers in 1880, takes its name from the Greek word "piezein", meaning to press.
MAKING

The piezoelectric axis is then the axis of polarization. If the polycrystalline material is poled as it is cooled through its Curie point, the domains in the crystals are aligned in the direction of the strong electric field. In this way, a piezoelectric material of the required size, shape, and piezoelectric qualities can be made, within limits. In a given crystal, the axis of polarization depends upon the type of stress. There is no crystal class in which the piezoelectric polarization is confined to a single axis; in several crystal classes, however, it is confined to a plane. Hydrostatic pressure produces a piezoelectric polarization in the crystals of the ten classes that show pyroelectricity, in addition to piezoelectricity.

• Displacement of electrical charge is due to the deflection of the lattice in a naturally piezoelectric quartz crystal.
• The larger circles represent silicon atoms, while the smaller ones represent oxygen.
• Quartz is one of the most stable piezoelectric materials.

ARTIFICIAL MATERIALS Polycrystalline piezoceramics are man-made materials which are forced to become piezoelectric by applying a large electric field.
• high charge sensitivity

• materials are available which operate at 1000 °F (540 °C)
• characteristics vary with temperature



This helps in reducing ripple in the output waveform. The output voltage produced here is 2 to 3 volts.

CONFIGURATION:

• Red indicates the crystal.
• Arrows indicate the direction of the applied force.
• The compression design features high rigidity, making it useful for implementation in high-frequency pressure and force sensors.

FOOT STEP GENERATION:

The inverter circuit converts 12 V DC to 12 V AC. The obtained 12 V AC is connected to a step-up transformer; a commercially available transformer steps up the 12 V AC to 220-230 V AC. The obtained voltage can be used for many applications.

POWER

When a piezoelectric material is put under pressure, it generates a potential across its ends. This basic principle is used to produce an electrical supply.

By the ouput obtained a 12v battery is charged . Now the output of 12 v battery is given to a inverter circuit.

The output obtained is a AC voltage. This is the passed through a rectifying circuit which converts AC to DC voltage. The output is connected to a rectifying circuit , which produce a pulsating dc.

APPLICATIONS: The above method can be employed in many ways like… 1.railway station. 2. malls in cities. 3. escalators. Etc

To obtain pure DC we connect a capacitor parallel to the load.
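The power-conditioning chain described above (rectifier, smoothing capacitor, battery, inverter, step-up transformer) can be sketched with two back-of-the-envelope formulas. All component values below are illustrative assumptions, not measurements from the paper:

```python
# Sketch of the conditioning chain described in the text.
# All values are illustrative assumptions, not measurements from the paper.

def ripple_voltage(load_current_a, ripple_freq_hz, cap_farad):
    """Peak-to-peak ripple of a capacitor-smoothed rectifier: Vr = I / (f * C)."""
    return load_current_a / (ripple_freq_hz * cap_farad)

def transformer_output(v_in, n_primary, n_secondary):
    """Ideal step-up transformer: Vout = Vin * (Ns / Np)."""
    return v_in * n_secondary / n_primary

# A 10 mA load on a full-wave rectifier (100 Hz ripple) with a 1000 uF capacitor
vr = ripple_voltage(0.010, 100.0, 1000e-6)

# Stepping the 12 V AC inverter output up with an assumed 60:1100 turns ratio
v_out = transformer_output(12.0, 60, 1100)

print(vr, v_out)
```

A larger capacitor or a higher ripple frequency reduces the ripple proportionally, which is why the capacitor is placed directly across the load.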


The other applications where pressure can be employed are: 1. vehicle tires; 2. runways; 3. industries; 4. ships; 5. drilling machines.


CONCLUSION: The results show that by using two actuators in parallel we can reduce the charging time of the battery and increase the power generated by the piezoelectric device. In a second study, a piezoelectric generator was put to the test and generated some 2,000 watt-hours of electricity; the setup consisted of a ten-meter strip of asphalt with generators lying underneath and batteries in the road's proximity. It is therefore clear that by using a parallel combination we can overcome problems such as impedance matching and low power generation. The results clearly show that piezoelectric materials are a promising route for electric power generation.

Reference:
[1] "Piezoelectric Plates and Buzzers," Oct. 17, 2009. [Online]. Available: http://jingfengele.en.alibaba.com/product/5091700650159415/Piezoelectric_Buzzer_Plates.html [Accessed: Oct. 17, 2009].
[2] Anil Kumar, "Electrical Power Generation Using Piezoelectric Crystal," International Journal of Scientific & Engineering Research, vol. 2, issue 5, May 2011.
[3] www.BEProjectReport.com
[4] Gaur and Gupta, Engineering Physics, crystal structure.


INTEGRATION OF DISTRIBUTED SOLAR POWER GENERATION USING BATTERY ENERGY STORAGE SYSTEM

K. MOUNIKA, M.E. (Power Systems & Automation)
Sri. G. VEERANNA, Asst. Professor
Department of Electrical & Electronics Engineering, S.R.K.R. Engineering College, Bhimavaram, Andhra Pradesh

Abstract: This paper presents an overview of the challenges of integrating solar power into the electricity distribution system, gives a technical overview of battery energy storage systems, and illustrates a variety of modes of operation for battery energy storage systems in grid-tied solar applications. Battery energy storage systems are increasingly being used to help integrate solar power into the grid. These systems are capable of absorbing and delivering both real and reactive power with sub-second response times. With these capabilities, battery energy storage systems can mitigate such issues with solar power generation as ramp rate, frequency, and voltage issues. Specifically, grid-tied solar power generation is a distributed resource whose output can change extremely rapidly, resulting in many issues for the distribution system operator with a large quantity of installed photovoltaic devices.

Index Terms— Battery energy storage systems, photovoltaic, renewable, solar.

I. INTRODUCTION

Photovoltaics is the field of technology and research related to devices which directly convert sunlight into electricity using semiconductors that exhibit the photovoltaic effect. The photovoltaic effect involves the creation of voltage in a material upon exposure to electromagnetic radiation. The solar cell is the elementary building block of photovoltaic technology. Solar cells are made of semiconductor materials, such as silicon. One of the properties of semiconductors that makes them most useful is that their conductivity may easily be modified by introducing impurities into their crystal lattice.

The integration of significant amounts of photovoltaic (PV) solar power generation into the electric grid poses a unique set of challenges to utilities and system operators. Power from grid-connected solar PV units is generated in quantities from a few kilowatts to several MW, and is then pushed out to power grids at the distribution level, where the systems were often designed for one-way power flow from the substation to the customer. In climates with plentiful sunshine, the widespread adoption of solar PV means distributed generation on a scale never before seen on the grid. Grid-connected solar PV dramatically changes the load profile of an electric utility customer. The expected widespread adoption of solar


generation by customers on the distribution system poses significant challenges to system operators in both transient and steady-state operation, from issues including voltage swings, sudden weather-induced changes in generation, and legacy protective devices designed with one-way power flow in mind. When there is plenty of sunshine during the day, local solar generation can reduce the net demand on a distribution feeder, possibly to the point that there is a net power outflow to the grid. In addition, solar power is converted from DC to AC by power electronic converters capable of delivering power to the grid. Due to market inefficiencies, the typical solar generator is often not financially rewarded for providing reactive power support, so small inverters are often operated such that they produce only real power while operating at a lagging power factor, effectively taking in or absorbing reactive power and increasing the required current on the feeder for a given amount of real power. A radial distribution feeder with significant solar PV generation has the potential to generate most of its own real power during daylight hours, while drawing significant reactive power. Modest levels of solar PV generation on distribution circuits can be easily managed by the distribution system operator (DSO). However, both the DSO and the customers of electric retail service may soon feel the undesirable impacts on the grid as PV penetration levels increase.

This paper describes the operation and control methodologies for a grid-scale BESS designed to mitigate the negative impacts of PV integration, while improving overall power distribution system efficiency and operation. The fundamentals of solar PV integration and BESS technology are presented below, followed by specific considerations in the control system design of solar PV coupled BESS installations. The PV-coupled BESS systems described in this paper utilize the XP-Dynamic Power Resource (XP-DPR).

II. PHOTOVOLTAIC INTEGRATION

Solar power's inherent intermittency poses challenges in terms of power quality and reliability. A weather event such as a thunderstorm has the potential to reduce solar generation from maximum output to negligible levels in a very short time. Wide-area weather-related output fluctuations can be strongly correlated in a given geographical area, which means that the set of solar PV generators on feeders down-line of the same substation has the potential to drastically reduce its generation in the face of a mid-day weather event. The resulting output fluctuations can adversely affect the grid in the form of voltage sags if steps are not taken to quickly counteract the change in generation. In small power systems, frequency can also be adversely affected by sudden changes in PV generation. Battery energy storage systems (BESS), whether centrally located at the substation or distributed along a feeder, can provide power quickly in such scenarios to minimize customer interruptions. Grid-scale BESS can mitigate the above challenges while improving system reliability and improving the economics of the renewable resource.

A PV system consists of a number of interconnected components designed to accomplish a desired task, which may be to feed electricity into the main distribution grid. There are two main system configurations – stand-alone and grid-connected. As its name implies, the stand-alone PV system operates independently of any other power supply, and it usually supplies electricity to a dedicated load or


loads. It may include a storage facility (e.g., a battery bank) to allow electricity to be provided during the night or at times of poor sunlight levels. Stand-alone systems are also often referred to as autonomous systems, since their operation is independent of other power sources. By contrast, the grid-connected PV system operates in parallel with the conventional electricity distribution system. It can be used to feed electricity into the grid distribution system or to power loads which can also be fed from the grid.

The PV array current-voltage characteristic is described by the following:

i_pv = i_ph − i_rs [exp(q v_pv / (k Tc A)) − 1]    (2)

In (2), q is the unit charge, k is Boltzmann's constant, A is the p-n junction ideality factor, and Tc is the cell temperature. The current i_rs is the cell reverse saturation current, which varies with temperature according to

i_rs = i_rr (Tc / Tref)^3 exp[(q E_G / (k A)) (1/Tref − 1/Tc)]    (3)

In (3), Tref is the cell reference temperature, i_rr is the reverse saturation current at Tref, and E_G is the band-gap energy of the cell. The PV current i_ph depends on the insolation level and the cell temperature according to

i_ph = 0.01 [i_scr + Kv (Tc − Tref)] S    (4)

In (4), i_scr is the cell short-circuit current at the reference temperature and radiation, Kv is a temperature coefficient, and S is the insolation level in kW/m². The power delivered by the PV array is calculated by multiplying both sides of (2) by v_pv:

P_pv = v_pv i_ph − v_pv i_rs [exp(q v_pv / (k Tc A)) − 1]    (5)

Substituting i_ph from (4) in (5), P_pv becomes

P_pv = 0.01 v_pv [i_scr + Kv (Tc − Tref)] S − v_pv i_rs [exp(q v_pv / (k Tc A)) − 1]    (6)

Based on (6), it is evident that the power delivered by the PV array is a function of the insolation level at any given temperature.

Fig. 1. Simplified one-line diagram of a BESS in parallel with a solar PV facility connected to the grid.

III. BATTERY ENERGY STORAGE

A. Battery Energy Storage Basics

A grid-scale BESS consists of a battery bank, a control system, a power electronics interface for AC-DC power conversion, protective circuitry, and a transformer to convert the BESS output to the transmission or distribution system voltage level. The one-line diagram of a simple BESS is shown in Fig. 1. A BESS is typically connected to the grid in parallel with the source or loads it is providing benefits to, whereas traditional uninterruptible power supplies (UPS) are installed in series with their loads. The power conversion unit is typically a bi-directional unit capable of four-quadrant operation, meaning that both real and reactive power can be delivered or absorbed independently according to the


needs of the power system, up to the rated apparent power of the converter.

The battery bank consists of many batteries connected in a series-parallel combination to provide the desired power and energy capabilities for the application. Units are typically described with two numbers: the nameplate power, given in MW, and the maximum storage capacity, given in MWh. The BESS described in this paper is a 1.5/1 unit, meaning it stores 1 MWh of energy and can charge or discharge at a maximum power level of 1.5 MW. In renewable energy applications, it is common to operate a BESS under what is known as partial state of charge duty (PSOC), a practice that keeps the batteries partially discharged at all times so that they are capable of either absorbing power from or discharging power onto the grid as needed.

Most BESS control systems can be operated via automatic generation control (AGC) signals much like a conventional utility generation asset, or they can be operated in a solar-coupled mode where real and reactive power commands for the converter are generated many times per second based on real-time PV output and power system data. In the case of the XP-DPR, three-phase measurements from potential and current transducers (PTs and CTs) are taken in real time on an FPGA device, and once digitized these signals become the input for proprietary real-time control algorithms operating at kHz speeds. Various control algorithms have been used for PV applications, providing control of ramp rates and frequency support.

Fig. 2. Configuration of the grid-connected hybrid PV/battery generation system.
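The PV-array relations (2)–(6) can be sketched numerically. The parameter values below are illustrative assumptions, not data from the paper, and the temperature dependence of the saturation current in (3) is omitted by holding the cell at the reference temperature:

```python
import math

# Illustrative constants (assumed values, not from the paper)
q = 1.602e-19      # unit charge [C]
k = 1.381e-23      # Boltzmann constant [J/K]
A = 1.5            # p-n junction ideality factor
i_rs = 1e-6        # reverse saturation current at Tc [A]
i_scr = 3.0        # short-circuit current at reference conditions [A]
Kv = 0.0017        # temperature coefficient [A/K]
Tc = 298.0         # cell temperature [K]
Tref = 298.0       # reference temperature [K]

def i_ph(S):
    """Photo-generated current for insolation level S, per (4)."""
    return 0.01 * (i_scr + Kv * (Tc - Tref)) * S

def p_pv(v_pv, S):
    """Power delivered by the array at terminal voltage v_pv, per (6)."""
    diode = i_rs * (math.exp(q * v_pv / (k * Tc * A)) - 1.0)
    return v_pv * i_ph(S) - v_pv * diode

# At a fixed operating voltage, delivered power rises with insolation
for S in (200, 600, 1000):
    print(S, p_pv(0.5, S))
```

Consistent with the remark after (6), sweeping S at a fixed temperature shows the delivered power tracking the insolation level.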


B. Ramp Rate Control

Solar PV generation facilities have no inertial components, and the generated power can change very quickly when the sun becomes obscured by passing cloud cover. On small power systems with high penetrations of PV generation, this can cause serious problems with power delivery, as traditional thermal units struggle to maintain the balance of power in the face of rapid changes. During solar-coupled operation, the BESS must counteract quick changes in output power to ensure that the facility delivers ramp rates deemed acceptable to the system operator. Allowable ramp rates are typically specified by the utility in kilowatts per minute (kW/min), and are a common feature of new solar and wind power purchase agreements between utilities and independent power producers. Here the ramp rate refers only to real power; the reactive power capabilities of the BESS can be dispatched simultaneously and independently to achieve other power system goals.

The Ramp Rate Control algorithm used in the XP-DPR continuously monitors the real power output of the solar generator and commands the unit to charge or discharge such that the total power output to the system is within the boundaries defined by the requirements of the utility. The system ramp rate is maintained to less than 50 kW/min, whereas the solar resource alone had a maximum second-to-second ramp rate of over 4 MW/min.

C. Frequency Response

Even with ramp-rate control, there are still going to be occasional frequency deviations on the system. On small, low-voltage systems, it is common to see frequency deviations of 1–3 Hz from the nominal 50 or 60 Hz frequency. Frequency deviation has adverse effects on many types of loads as well as on other generators. Frequency deviation is caused by a mismatch in generation and load, as given by the swing equation for a Thevenin equivalent power source driving the grid. The system inertia is typically described using a normalized inertia constant called the H constant, defined as

H = (J ω²) / (2 S_rated)

H can be estimated by the frequency response of the system after a step change such as a unit or load trip. The equation can be rewritten so that the system H is easily calculated from the change in frequency of the system after a generator of known size has tripped off, according to

H = − (f / (2 df/dt)) (ΔP / P_gen)

where the unit of H is seconds, ω is the system angular speed, f is the system frequency, P_gen is the remaining generation online after the unit trip, and ΔP is the size of the generator that has tripped.

When frequency crosses a certain threshold, it is desirable to command the BESS to charge in the case of over-frequency events, typically caused by loss of load, or to discharge for under-frequency events, which often result when a generator has tripped offline. Using proportional control to deliver or absorb power in support of grid frequency stabilization is referred to as droop response, and this is common behavior in generator governors equipped with a speed-droop or regulation characteristic. Droop response in a governor is characterized as a proportional controller with a gain of 1/R, with R defined as


R = (ω_nl − ω_fl) / ω_0

where ω_nl is the steady-state speed at no load, ω_fl is the steady-state speed at full load, and ω_0 is the nominal or rated speed of the generator. This means that a 5% droop response should result in a 100% change in power output when frequency has changed by 5%, or 3 Hz on a 60 Hz power system. Since the BESS uses a power electronics interface, there is no inertia or speed in the system, and we must approximate this desirable behavior found in thermal generators. The straightforward implementation is to digitally calculate an offset for the BESS output power command as a response proportional to the frequency. The response has units of kW and is determined as

P_resp = − (1/R) ((f − f_db) / f_0) S_BESS

where f is the grid frequency, f_db is the frequency dead band, f_0 is the nominal frequency, and S_BESS is the power rating of the BESS in kVA. A set of droop characteristic curves for a 1 MW BESS is depicted in Fig. 3.

Fig. 3. Frequency droop response curves for 5% response on a 1 MW BESS.
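The droop response described above can be sketched in a few lines; the dead-band width here is an assumed value, and the sign convention takes positive power as discharge into the grid:

```python
def droop_response_kw(f_grid, f_nom=60.0, dead_band=0.05, droop=0.05,
                      s_rated_kva=1000.0):
    """BESS power offset [kW] for a proportional (droop) frequency response.

    Sketch of the behavior in the text: a 5% droop commands about 100% of
    rated power for a 5% frequency deviation, and a small dead band around
    nominal frequency produces no response. Dead-band width is an assumption.
    """
    df = f_grid - f_nom
    if abs(df) <= dead_band:
        return 0.0
    # shift out the dead band so the response starts from zero at its edge
    df = df - dead_band if df > 0 else df + dead_band
    return -(1.0 / droop) * (df / f_nom) * s_rated_kva

# Under-frequency (generator trip) -> positive output = discharge
# Over-frequency (loss of load)    -> negative output = charge
print(droop_response_kw(59.0), droop_response_kw(60.5))
```

A 1 Hz under-frequency event on this 1 MW unit commands a discharge of roughly a third of rated power, matching the proportional 1/R gain of a governor's speed-droop characteristic.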


IV. SIMULATION RESULTS

The photovoltaic and battery energy storage systems are combined, connected to the grid, and simulated in Simulink/MATLAB R2009a.

Fig. 4. Results for solar power measured over 24 hours.

Fig. 5. Ramp-rate control to 50 kW/min for a 1 MW photovoltaic installation and a 1.5 MW/1 MWh BESS for a full day.
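The ramp-rate limiting of Section III-B can be sketched as a simple clamp on the change of plant output per sample, with the BESS covering the difference. This is a sketch only, assuming an ideal battery that never saturates:

```python
def ramp_limited_output(solar_kw, max_ramp_kw_per_min=50.0, dt_s=1.0):
    """Limit the ramp rate of a sampled solar power series using an ideal BESS.

    The plant output is allowed to move by at most max_ramp_kw_per_min,
    and the BESS charges/discharges the difference between the raw solar
    output and the ramp-limited plant output. Sketch only: battery energy
    limits and converter ratings are ignored.
    """
    max_step = max_ramp_kw_per_min * dt_s / 60.0  # allowed change per sample
    out = [solar_kw[0]]
    bess = []  # BESS power: positive = discharging into the grid
    for p in solar_kw[1:]:
        delta = p - out[-1]
        step = max(-max_step, min(max_step, delta))
        out.append(out[-1] + step)
        bess.append(out[-1] - p)
    return out, bess

# A passing cloud cuts 1 MW of solar to 200 kW within one second;
# the plant output instead ramps down at no more than 50 kW/min.
solar = [1000.0] * 5 + [200.0] * 5
plant, bess = ramp_limited_output(solar)
```

With the step cloud event above, the plant output decreases by less than 1 kW per second while the BESS discharges to make up the shortfall, which is the smoothing behavior depicted in Fig. 5.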


Fig. 5 depicts the operation of an XP-DPR BESS smoothing the volatile power output of a 1 MW solar farm. Here the system ramp rate is maintained to less than 50 kW/min, whereas the solar resource alone had a maximum second-to-second ramp rate of over 4 MW/min.

V. CONCLUSION

Integration of energy storage systems into the grid to manage the real power variability of solar generation by providing ramp-rate control can optimize the benefits of solar PV. Using the BESS to provide voltage stability through dynamic VAR support and frequency regulation via droop-control response reduces the integration challenges associated with solar PV. Coupling solar PV and storage will drastically increase the reliability of the grid, enable more effective grid management, and create a dispatchable power product from available resources. Battery energy storage systems can also improve the economics of distributed solar power generation by reducing the need to cycle traditional generation assets and by increasing the asset utilization of existing utility generation, allowing the coupled solar PV and BESS to provide frequency and voltage regulation services.

VI. REFERENCES

[1] F. Katiraei and J. R. Aguero, "Solar PV integration challenges," IEEE Power Energy Mag., vol. 9, no. 3, pp. 62–71, May/Jun. 2011.
[2] N. Miller, D. Manz, J. Roedel, P. Marken, and E. Kronbeck, "Utility scale battery energy storage systems," in Proc. IEEE Power Energy Soc. Gen. Meeting, Minneapolis, MN, Jul. 2010.
[3] C. Hill and D. Chen, "Development of a real-time testing environment for battery energy storage systems in renewable energy applications," in Proc. IEEE Power Energy Soc. Gen. Meeting, Detroit, MI, Jul. 2011.
[4] A. Nourai and C. Schafer, "Changing the electricity game," IEEE Power Energy Mag., vol. 7, no. 4, pp. 42–47, Jul./Aug. 2009.
[5] R. H. Newnham and W. G. A. Baldsing, "Advanced management strategies for remote-area power-supply systems," J. Power Sources, vol. 133, pp. 141–146, 2004.
[6] C. D. Parker and J. Garche, "Battery energy-storage systems for power supply networks," in Valve-Regulated Lead Acid Batteries, D. A. J. Rand, P. T. Moseley, J. Garche, and C. D. Parker, Eds. Amsterdam, The Netherlands: Elsevier, 2004, pp. 295–326.
[7] N. W. Miller, R. S. Zrebiec, R. W. Delmerico, and G. Hunt, "Design and commissioning of a 5 MVA, 2.5 MWh battery energy storage system," in Proc. 1996 IEEE Power Eng. Soc. Transm. Distrib. Conf., pp. 339–345.
[8] "Analysis of a valve-regulated lead-acid battery operating in a utility energy storage system for more than a decade," 2009.
[9] A. Nourai, R. Sastry, and T. Walker, "A vision & strategy for deployment of energy storage in electric utilities," in Proc. IEEE Power Energy Soc. Gen. Meeting, Minneapolis, MN, Jul. 2010.
[10] P. Kundur, Power System Stability and Control. New York: McGraw-Hill, 1994, pp. 589–594.


ESTIMATION OF FREQUENCY FOR A SINGLE-LINK FLEXIBLE MANIPULATOR USING ADAPTIVE CONTROLLER

KAZA JASMITHA, M.E. (Control Systems)
Dr. K. RAMA SUDHA, Professor
Department of Electrical Engineering, Andhra University, Visakhapatnam, Andhra Pradesh.

Abstract- In this paper, an adaptive control procedure for an uncertain flexible robotic arm is proposed. It employs a fast online closed-loop identification method combined with an output-feedback Generalized Proportional Integral (GPI) controller. To identify the unknown system parameter and to update the designed certainty-equivalence GPI controller, a fast non-asymptotic algebraic identification method is used. To examine this method, simulations are performed, and the results show the robustness of the adaptive controller.

Index Terms- Adaptive control, algebraic estimation, flexible robots, generalized proportional integral (GPI) control.

I. INTRODUCTION

Flexible arm manipulators are mainly applied in space robotics, nuclear maintenance, microsurgery, collision control, contouring control, pattern recognition, and many other areas. Surveys of the literature dealing with applications and challenging problems related to flexible manipulators may be found in [1] and [2]. The system, which is governed by partial differential equations (PDEs), is a distributed-parameter system of infinite dimension. This makes it difficult to achieve high-level performance, owing to nonminimum-phase behavior. To deal with the control of flexible manipulators, the modeling is based on a truncated (finite-dimensional) model obtained from either the finite-element method (FEM) or assumed-modes methods. Linear control [3], optimal control [4], adaptive control [5], sliding-mode control [6], neural networks [7], and fuzzy logic [8] are among the control techniques used. To obtain accurate trajectory tracking, these methods require several sensors, and in all of these we need to know the system parameters to design a proper controller. Here we propose a new method, briefly explained in [9]: an online identification technique combined with a control scheme is used to cancel the vibrations of the flexible beam; the motor angle obtained from an encoder and the coupling torque obtained from a pair of strain gauges are measured, as done in the work in [10]. The Coulomb friction torque requires a compensation term, as it is a nonlinear effect of the motor, as proposed in [11]. To minimize this effect, robust control schemes are used [12]. However, this problem persists nowadays. Cicero et al. [13] used a neural network to compensate for this friction effect. In this paper, we propose an output-feedback control scheme with generalized proportional integral (GPI) control, which is found to be robust with respect to the effects of the unknown friction torque. Hence, compensation is not required for these friction models. Marquez et al. first introduced this controller. For an asymptotically stable closed-loop system which is internally unstable, the velocity of a DC motor should be controlled. For this we propose, by further manipulation of the integral

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

206

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

reconstructor, an internally stable control scheme in the form of a classical compensator for an angular-position trajectory task of an uncertain flexible robot, which is found to be robust with respect to nonlinearities in the motor. The control scheme here is an output-feedback controller; velocity measurements, with the errors and noise they introduce, are not required. Some specifications of the bar are enough for the GPI control. The controller should not be affected by unknown parameters such as payload changes; hence, estimating these parameter uncertainties is necessary. The objective of this paper is the fast online closed-loop identification of the unknown parameter of a flexible bar combined with a GPI controller. Fliess et al. [15] (see also [16]) showed that feedback-control systems allow fast state and constant-parameter estimation (see also [17], [18]). As these methods are not asymptotic and do not need any statistical knowledge of the noise corrupting the data, we do not require the noise to be assumed Gaussian. This assumption is common in other methods such as maximum likelihood or minimum least squares. Furthermore, this methodology has been successfully applied to signal processing [19], [20].

Fig. 1. Diagram of a single-link flexible arm

This paper is organized as follows. Section II describes the flexible-manipulator model. Section III is devoted to the GPI-controller design. In Section IV, the mathematical development of the algebraic estimator is explained. Section V describes the adaptive-control procedure. In Section VI, simulations of the adaptive-control system are shown. Finally, Section VII presents the main conclusions.

II. MODEL DESCRIPTION

A. Flexible-Beam Dynamics

A PDE describes the behavior of the flexible slewing beam, which is considered as an Euler-Bernoulli beam. Infinite vibration modes are involved in these dynamics, so reduced models can be used in which only the low frequencies, usually the most significant, are considered. There are several approaches to reducing the model: 1) distributed-parameter models, where the infinite dimension is truncated to a finite number of vibration modes [3]; 2) lumped-parameter models, where a spatial discretization leads to a finite-dimensional model. In this sense, the spatial discretization can be done by both a FEM [22] and a lumped-mass model [23]. As developed in [23], a single-link flexible manipulator with tip mass is modeled; it rotates about the Z-axis, perpendicular to the paper, as shown in Fig. 1. The gravitational effect and the axial deformation are neglected. As the flexible beam is floating over an air table, this allows us to cancel the gravitational effect and the friction with the surface of the table. Structural damping increases the stability margin of the system, so a design that neglects damping may provide a valid but conservative result [24]. The real structure studied in this paper is made of carbon fiber, with high mechanical resistance and very small density. The study is carried out under the hypothesis of small deformations. The total mass is considered to be concentrated at the tip position: because the mass of the load is bigger than that of the bar, the mass of the beam can be


neglected. That is, the flexible beam vibrates with the fundamental mode; as the remaining modes are very far from the first, they can be neglected. Therefore, we can consider only one mode of vibration. Hereafter, the main characteristics of this model are influenced by load changes in a very simple manner, which makes it easy to apply an adaptive controller. Based on these considerations, we propose the following model for the flexible beam:

m L² θ̈_t = c (θ_m − θ_t)    (1)

where m is the unknown mass at the tip position, and L and c = 3EI/L are the length of the flexible arm and the stiffness of the bar, respectively, assumed to be perfectly known. The stiffness depends on the flexural rigidity EI and on the length of the bar L. θ_m is the angular position of the motor gear; θ_t and θ̈_t are the unmeasured angular position and angular acceleration of the tip, respectively.

B. DC-Motor Dynamics

In many control systems, a common electromechanical actuator is the DC motor [25]. A servo amplifier with a current inner loop supplies the DC motor. The dynamics of the system are given by Newton's law:

k u = J θ̈_m + ν θ̇_m + Γ̂_c + Γ/n    (2)

where J is the inertia of the motor [in kilogram square meters], ν is the viscous friction coefficient [in Newton meter seconds], and Γ̂_c is the unknown Coulomb friction torque which affects the motor dynamics [in Newton meters]. The electromechanical constant of the motor servo-amplifier system [in Newton meters per volt] is defined by the parameter k. θ̈_m and θ̇_m are the angular acceleration of the motor [in radians per second squared] and the angular velocity of the motor [in radians per second], respectively. Γ is the coupling torque measured in the hub [in Newton meters], and n is the reduction ratio of the motor gear. u is the motor input voltage [in volts]; this is the control variable of the system. It is given as the input to a servo amplifier which controls the input current to the motor by means of an internal PI current controller [see Fig. 2(a)]. As the electrical dynamics are much faster than the mechanical dynamics, they are neglected. The servo amplifier can then be considered as a constant relation k_e between the voltage and the current to the motor: i_m = k_e u [see Fig. 2(b)], where i_m is the armature circuit current and k_e includes the gain of the amplifier and the input resistance R of the amplifier circuit.

The nonlinear friction term is considered as a perturbation, depending only on the sign of the motor angular velocity. As a consequence, Coulomb's friction, when θ̇_m ≠ 0, follows the model

Γ̂_c = Γ_coul sign(θ̇_m) = { +Γ_coul (θ̇_m > 0); −Γ_coul (θ̇_m < 0) }    (3)

and when θ̇_m = 0,

Γ̂_c = min(k u, Γ_coul) for u > 0;  Γ̂_c = max(k u, −Γ_coul) for u < 0    (4)

Γ_coul is the static friction value which the motor torque must exceed to begin the movement.

The total torque given to the motor, Γ_T, is directly proportional to the armature circuit current in the form Γ_T = k_m i_m, where k_m is the electromechanical constant of the motor. Thus, the electromechanical constant of the motor servo-amplifier system is k = k_e k_m.

Fig. 2. (a) Complete amplifier scheme. (b) Equivalent amplifier scheme.
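A minimal numerical sketch of the single-mode beam model (1) follows; all parameter values are illustrative assumptions, not the paper's experimental rig:

```python
import math

def simulate_beam(theta_m, m=0.1, L=0.5, EI=2.4, dt=1e-3, steps=2000):
    """Semi-implicit Euler integration of the lumped beam model (1):
    m L^2 * theta_t'' = c * (theta_m - theta_t), with c = 3 E I / L.
    Parameter values (m, L, EI) are illustrative assumptions.
    """
    c = 3.0 * EI / L
    w = math.sqrt(c / (m * L * L))  # natural frequency, as in (8)
    theta_t, vel = 0.0, 0.0         # tip starts at rest
    for _ in range(steps):
        acc = c * (theta_m - theta_t) / (m * L * L)
        vel += acc * dt              # update velocity first (symplectic)
        theta_t += vel * dt
    return theta_t, w

# A step in the motor angle makes the undamped tip oscillate about theta_m
tip, w = simulate_beam(1.0)
```

Because the model is undamped, the tip oscillates about the commanded motor angle at the natural frequency ω = (c/(mL²))^(1/2); it is exactly this dependence of ω on the unknown tip mass m that the algebraic estimator must identify online.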


C. Complete-System Dynamics

The dynamics of the complete system, actuated by a DC motor, are formulated by the following simplified model:

$mL^2\ddot\theta_t = c(\theta_m - \theta_t)$  (5)

$ku = J\ddot\theta_m + \nu\dot\theta_m + \hat\Gamma_c + \Gamma/n$  (6)

$\Gamma = c(\theta_m - \theta_t)$  (7)

Equation (5) represents the dynamics of the flexible beam; (6) expresses the dynamics of the DC motor; and (7) stands for the coupling torque measured in the hub, produced by the deflection of the flexible beam, which is directly proportional to the stiffness of the beam and to the difference between the motor angle and the tip position, respectively.

The flexible-bar transfer function from (5) can be rewritten in Laplace notation as

$G_b(s) = \dfrac{\theta_t(s)}{\theta_m(s)} = \dfrac{\omega^2}{s^2 + \omega^2}$  (8)

where $\omega = (c/(mL^2))^{1/2}$ is the natural frequency of the bar, which is unknown because of the lack of precise knowledge of m. As done in [10], the coupling torque can be canceled in the motor by means of a compensation term. In this case, the voltage applied to the motor is of the form

$u = u_c + \dfrac{\Gamma}{kn}$  (9)

where $u_c$ is the voltage applied before the compensation term. The system in (6) is then given by

$Ku_c = J\ddot\theta_m + \nu\dot\theta_m + \hat\Gamma_c$  (10)

where $K = k/n$. The controller to be designed will be robust with respect to the unknown piecewise-constant torque disturbances affecting the motor dynamics, so the perturbation-free system to be considered is

$Ku_c = J\ddot\theta_m + \nu\dot\theta_m$  (11)

To specify the developments, let $A = K/J$ and $B = \nu/J$. The transfer function of the DC motor is then written as

$G_m(s) = \dfrac{\theta_m(s)}{u_c(s)} = \dfrac{A}{s(s+B)}$  (12)

Fig. 3 shows the compensation scheme of the coupling torque measured in the hub. The control objective is the regulation of the load position $\theta_t(t)$ so that it tracks a given smooth reference trajectory $\theta_t^*(t)$.

Fig. 3. Compensation of the coupling torque measured in the hub.

III. GPI CONTROLLER

To synthesize the feedback-control law, we use only the measured motor position $\theta_m$ and the measured coupling torque Γ. One of the prevailing restrictions throughout our treatment of the problem is the desire not to measure, nor to compute on the basis of samplings, the angular velocities of the motor shaft or of the load.

Fig. 4. Flexible-link DC-motor system controlled by a two-stage GPI-controller design.

A. Outer-Loop Controller

Consider the model of the flexible link given in (1). This subsystem is flat, with flat output $\theta_t$. This means that all variables of the unperturbed system may be written in terms of the flat output and a finite number of its time derivatives (see [9]).
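As a quick numerical cross-check of the quantities just defined, the stiffness $c = 3EI/L$, the natural frequency ω of (8), and the motor pole $B = \nu/J$ of (12) can be recomputed from the physical values quoted later in Section VI; this is only bookkeeping, not part of the controller.

```python
import math

# Physical parameters quoted in Section VI (Simulations)
EI = 0.264      # flexural rigidity [N*m^2]
L = 0.5         # beam length [m]
m = 0.03        # tip mass [kg]
J = 6.87e-5     # motor inertia [kg*m^2]
nu = 1.041e-3   # viscous-friction coefficient [N*m*s]

c = 3.0 * EI / L                      # stiffness of the bar: 1.584 N*m
omega = math.sqrt(c / (m * L ** 2))   # natural frequency of (8): ~14.5 rad/s
B = nu / J                            # motor pole of (12): ~15.15
```

These reproduce the values c = 1.584 N·m, ω ≈ 14.5 rad/s, and B ≈ 15.15 stated in the Simulations section.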




 m t  in terms of  m t  is given, in reduction

gear terms, by:

m 

ISBN: 378 - 26 - 138420 - 5

Disregarding the constant error due to the tracking error velocity initial conditions, the estimated error velocity can be computed in the following form:

mL2  t  t c

e t 

(13) System (9) is a second order system in which it is desired to regulate the tip position of the flexible bar t , towards a given smooth

c mL2

t

(18)

 e    e  d   m

t

0

The integral reconstructor neglects the possibly nonzero initial condition e t 0  and, hence, it exhibits a constant estimation error. When the reconstructor is used in the derivative part of the PID controller, the constant error is suitably compensated thanks to the integral control action of the PID controller. The use of the integral reconstructor does not change the closed loop features of the proposed PID controller and, in fact, the resulting characteristic polynomial obtained in both cases is just the same. The design gains k 0 , k1, k 2  need to be changed due to the use of

reference trajectory  * t t  with  m acting as a an auxiliary Control input. Clearly, if there exists an auxiliary open loop control input,

 * m t  , that ideally achieves the tracking of  *t t  for suitable initial conditions, it satisfies then the second orderdynamics, in reduction gear terms (10).

mL2 *  m t    t t    *t t  c *

(14) Subtracting (10) from (9), we obtain an expression in terms of the angular tracking errors: c e t  e m  e t  mL2 (15)

the integral reconstructor. Substituting the

Where e m   m   * m t , e t   t   * t t  . For

 s   0  *   *m   1 (19)   t  t  s2  The tip angular position cannot be measured, but it certainly can be computed from the expression relating the tip position with the motor position and the coupling torque. The implementation may then be based on the use of the coupling torque measurement. Denote the coupling torque by  it is known to be given by:

integral reconstructor e  t (14) byinto the PID controller (12) and after some rearrangements we obtain:



this part of the design, we view em as an incremental control input for the linksdynamics. Suppose for a moment we are able to measure theangular position velocity tracking error e t , then the outer loopfeedback incremental controller could be proposed to be the followingPID controller,

em  et 

t  mL2   K2et  K1et  K0 et  d  c  0 

(16)

  c  m   t   mL2  n  coup

We proceed by integrating the expression (11) once, to obtain: c e t t   e t 0   mL2

m

(20)

Thus, the angular position is readily expressed as,

t

 e    e  d   m

1 t   m   c

t

0

(21)

(17) INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

210

www.iaetsd.in
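The constant-offset property of the integral reconstructor (18) can be illustrated numerically. Below, $e_t(t) = \sin t$ is an arbitrary test signal, $e_m$ is chosen so that the error dynamics (15) hold with an illustrative value $c/(mL^2) = \omega^2 = 81$, and the integral in (18) is approximated by the trapezoidal rule; the reconstructed velocity error then differs from the true one, $\dot e_t = \cos t$, by the constant $-\dot e_t(0) = -1$, as claimed.

```python
import math

omega2 = 81.0            # illustrative value of c/(m L^2) = omega^2
dt, n = 1e-4, 20_000     # 2 s of simulated time

# Test signals chosen to satisfy (15): e_t = sin t  =>  e_m = e_t + e_t''/omega^2
e_t = [math.sin(i * dt) for i in range(n + 1)]
e_m = [math.sin(i * dt) * (1.0 - 1.0 / omega2) for i in range(n + 1)]

# Integral reconstructor (18): ehat = (c/mL^2) * integral of (e_m - e_t)
ehat, acc = [0.0], 0.0
for i in range(n):
    acc += 0.5 * ((e_m[i] - e_t[i]) + (e_m[i + 1] - e_t[i + 1])) * dt
    ehat.append(omega2 * acc)

# Offset between reconstructed and true velocity error (cos t):
err = [ehat[i] - math.cos(i * dt) for i in range(n + 1)]
# err stays (approximately) at the constant -e_t'(0) = -1 for all t
```

The constant offset is exactly what the integral action of the PID controller absorbs.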


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

Fig. 2 depicts the feedback-control scheme under which the outer-loop controller is actually implemented in practice. The closed outer-loop system is asymptotically exponentially stable. To specify the parameters $k_0$, $k_1$, $k_2$, we can choose to locate the closed-loop poles in the left half of the complex plane. All three poles can be placed at the same point of the real line, $s = -a$, using the polynomial equation

$(s+a)^3 = s^3 + 3as^2 + 3a^2 s + a^3 = 0$  (22)

where the parameter a represents the desired location of the poles. The characteristic equation of the closed-loop system is

$s^3 + k_2 s^2 + \omega^2(1 + k_1)s + \omega^2(k_2 + k_0) = 0$  (23)

Identifying each term of (22) with those of (23), the design parameters $k_2$, $k_1$, $k_0$ can be uniquely specified.

B. Inner-Loop Controller

The angular position $\theta_m^*$, generated as an auxiliary control input in the previous controller-design step, is now regarded as a reference trajectory for the motor controller. We denote this reference trajectory by $\theta_{mr}^*$. The dynamics of the DC motor, including the Coulomb-friction term, is given by (10). The controller is to be designed so as to be robust with respect to this torque disturbance. The following feedback controller is proposed:

$u_c = u_c^* - k_3\hat{\dot e}_m - k_2 e_m - k_1\int_0^t e_m(\tau_1)\,d\tau_1 - k_0\int_0^t\!\!\int_0^{\tau_1} e_m(\tau_2)\,d\tau_2\,d\tau_1$  (24)

where $e_m = \theta_m - \theta_{mr}^*$ and $e_v = u_c - u_c^*$. The following integral reconstructor for the angular-velocity error signal $\hat{\dot e}_m$ is obtained:

$\hat{\dot e}_m = \dfrac{K}{J}\int_0^t e_v(\tau)\,d\tau - \dfrac{\nu}{J}\,e_m$  (25)

Replacing $\hat{\dot e}_m$ from (25) into (24) and, after some rearrangements and a regrouping of constants, the feedback-control law is obtained:

$u_c - u_c^* = \dfrac{\gamma_2 s^2 + \gamma_1 s + \gamma_0}{s(s + \gamma_3)}\,(\theta_{mr}^* - \theta_m)$  (26)

The open-loop control $u_c^*(t)$ that ideally achieves the open-loop tracking of the inner loop is given by

$u_c^*(t) = \dfrac{1}{A}\ddot\theta_{mr}^*(t) + \dfrac{B}{A}\dot\theta_{mr}^*(t)$  (27)

The inner-loop system in Fig. 4 is exponentially stable. To design the parameters $\{\gamma_3, \gamma_2, \gamma_1, \gamma_0\}$, we can choose to place all the closed-loop poles at a desired location in the left half of the complex plane. As done with the outer loop, all poles can be located at the same real value; $\gamma_3$, $\gamma_2$, $\gamma_1$, and $\gamma_0$ are then uniquely obtained by equating the terms of the two following polynomials:

$(s+p)^4 = s^4 + 4ps^3 + 6p^2 s^2 + 4p^3 s + p^4 = 0$  (28)

$s^4 + (\gamma_3 + B)s^3 + (\gamma_3 B + \gamma_2 A)s^2 + \gamma_1 A s + \gamma_0 A = 0$  (29)

where the parameter p represents the common location of all the closed-loop poles, this being strictly positive.

IV. IDENTIFICATION

As explained in the previous section, the control performance depends on knowledge of the parameter ω. In this section, we therefore analyze the identification issue, as well as the reasons for choosing the algebraic derivative method as the estimator. Identification of continuous-time system parameters has been studied from different points of view: the surveys by Young [27] and by Unbehauen and Rao [29], [30] describe most of the available techniques. The different approaches are usually classified into two categories. 1) Indirect approaches: an equivalent discrete-time model to fit the data is





needed; after that, the estimated discrete-time parameters are transferred to continuous time. 2) Direct approaches: the original continuous-time parameters are estimated from the discrete-time data via approximations of the signals and operators. For the indirect method, a classical, well-known theory is available (see [31]). Nevertheless, these approaches have several disadvantages: 1) they require computationally costly minimization algorithms without even guaranteeing convergence; 2) the estimated parameters may not be correlated with the physical properties of the system; and 3) at fast sampling rates, poles and zeros cluster near the −1 point of the z-plane. Many researchers are therefore making a considerable effort along direct approaches (see [32]–[35], among others). Unfortunately, identification of robotic systems generally focuses on indirect approaches (see [36], [37]); as a consequence, references using direct approaches are scarce. Moreover, the existing identification techniques within the direct approach suffer from poor speed performance, and it is well known that closed-loop identification is more complicated than its open-loop counterpart (see [31]). These reasons have motivated the application of the algebraic derivative technique presented in the Introduction. In the following, algebraic manipulations are used to develop an estimator which stems from the differential equations analyzed in the model description, incorporating the measured signals in a suitable manner.

A. Algebraic Estimation of the Natural Frequency

To make the derivation easier to follow, we suppose that the signals are noise free. The main goal is to obtain an estimate of $\omega^2$, which we denote by $\omega_{oe}^2$, as quickly as possible.

Proposition 4.1: The constant parameter $\omega^2$ of the noise-free system described by (5)–(7) can be exactly computed, in a nonasymptotic fashion, at some arbitrarily small time $t = \Delta > 0$, by means of the expression

$\omega_{oe}^2 = \begin{cases} \text{arbitrary}, & t \in [0, \Delta) \\ n_e(t)/d_e(t), & t \in [\Delta, \infty) \end{cases}$  (30)

where $n_e(t)$ and $d_e(t)$ are the outputs of the time-varying linear unstable filter

$n_e(t) = t^2\theta_t(t) + z_1$
$\dot z_1 = z_2 - 4t\,\theta_t(t)$
$\dot z_2 = 2\theta_t(t)$
$d_e(t) = z_3$
$\dot z_3 = z_4$
$\dot z_4 = t^2(\theta_m(t) - \theta_t(t))$  (31)

Proof: Consider (5), rewritten as

$\ddot\theta_t = \omega^2(\theta_m - \theta_t)$  (32)

The Laplace transform of (32) is

$s^2\theta_t(s) - s\theta_t(0) - \dot\theta_t(0) = \omega^2(\theta_m(s) - \theta_t(s))$  (33)

Taking two derivatives with respect to the complex variable s, the initial conditions are cancelled:

$\dfrac{d^2(s^2\theta_t)}{ds^2} = \omega^2\left(\dfrac{d^2\theta_m}{ds^2} - \dfrac{d^2\theta_t}{ds^2}\right)$  (34)

Employing the chain rule, we obtain

$s^2\dfrac{d^2\theta_t}{ds^2} + 4s\dfrac{d\theta_t}{ds} + 2\theta_t = \omega^2\left(\dfrac{d^2\theta_m}{ds^2} - \dfrac{d^2\theta_t}{ds^2}\right)$  (35)

Consequently, in order to avoid multiplications by positive powers of s, which translate into undesirable time derivatives in the time domain, we multiply this expression by $s^{-2}$. After some rearrangement, we obtain

$\dfrac{d^2\theta_t}{ds^2} + 4s^{-1}\dfrac{d\theta_t}{ds} + 2s^{-2}\theta_t = \omega^2 s^{-2}\left(\dfrac{d^2\theta_m}{ds^2} - \dfrac{d^2\theta_t}{ds^2}\right)$  (36)

Let $\mathcal{L}$ denote the usual operational-calculus transform acting on exponentially bounded signals with bounded left support (see [38]). Recall that $\mathcal{L}^{-1}\{s(\cdot)\} = \frac{d}{dt}(\cdot)$, $\mathcal{L}^{-1}\{\frac{d^v}{ds^v}(\cdot)\} = (-1)^v t^v(\cdot)$, and $\mathcal{L}^{-1}\{\frac{1}{s}(\cdot)\} = \int_0^t (\cdot)(\tau)\,d\tau$.
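The differentiation step from (34) to (35) can be sanity-checked numerically with central finite differences; the test function θ(s) below is arbitrary.

```python
def d1(f, s, h=1e-3):
    """Central-difference first derivative."""
    return (f(s + h) - f(s - h)) / (2.0 * h)

def d2(f, s, h=1e-3):
    """Central-difference second derivative."""
    return (f(s + h) - 2.0 * f(s) + f(s - h)) / h ** 2

theta = lambda s: 1.0 / (s ** 2 + 4.0)   # arbitrary smooth test function of s

s0 = 1.7
lhs = d2(lambda s: s ** 2 * theta(s), s0)                        # d^2(s^2 theta)/ds^2
rhs = s0 ** 2 * d2(theta, s0) + 4.0 * s0 * d1(theta, s0) + 2.0 * theta(s0)
# lhs and rhs agree to within the finite-difference error, confirming (35)
```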



Taking these rules into account, (36) can be translated into the time domain:

$t^2\theta_t(t) - 4\int_0^t \tau\,\theta_t(\tau)\,d\tau + 2\int_0^t\!\!\int_0^{\sigma}\theta_t(\tau)\,d\tau\,d\sigma = \omega^2\int_0^t\!\!\int_0^{\sigma}\tau^2\left(\theta_m(\tau) - \theta_t(\tau)\right)d\tau\,d\sigma$  (37)

The time realization of (37) can be written via the time-varying linear (unstable) filters given in (31), and the natural-frequency estimator of $\omega^2$ is

$\omega_{oe}^2 = \begin{cases} \text{arbitrary}, & t \in [0, \Delta) \\ n_e(t)/d_e(t), & t \in [\Delta, \infty) \end{cases}$  (38)

where Δ is an arbitrarily small real number. Note that, at time t = 0, $n_e(t)$ and $d_e(t)$ are both zero; therefore, the quotient is undefined for a small period of time. After a time t = Δ > 0, the quotient is reliably computed; Δ depends on the precision of the arithmetic processor and on the data-acquisition card. The unstable nature of the linear systems in perturbed Brunovsky form (31) is of no practical consequence for the determination of the unknown parameter, for the following reasons: 1) resetting of the unstable time-varying systems, and of the entire estimation scheme, is always possible and is specifically needed when the unknown parameters are known to undergo sudden changes to new constant values; and 2) once the parameter estimation has been reliably accomplished, after some time instant t = Δ > 0, the whole estimation process may safely be turned off. Note that we only need to measure $\theta_m$ and Γ, since $\theta_t$ is available according to (21). Unfortunately, the available signals $\theta_m$ and Γ are noisy; thus, the estimation precision yielded by the estimator in (30) and (31) depends on the signal-to-noise ratio (SNR).

B. Unstructured Noise

We assume that $\theta_m$ and Γ are perturbed by an added noise with unknown statistical properties. In order to enhance the SNR, we simultaneously filter the numerator and the denominator with the same low-pass filter. Taking advantage of the rational form of the estimator in (37), the quotient is not affected by the filters. This invariance is emphasized by using different notations in the frequency and time domains:

$\omega_{oe}^2 = \dfrac{n_f(t)}{d_f(t)} = \dfrac{F(s)\,n_e(t)}{F(s)\,d_e(t)}$  (40)

where $n_f(t)$ and $d_f(t)$ are the filtered numerator and denominator, respectively, and F(s) is the filter used. The choice of this filter depends on the a priori available knowledge of the system. If no such knowledge exists, pure integrations of the form $1/s^p$, $p \geq 1$, may be used, under the assumption of high-frequency noise. This hypothesis has been motivated by recent developments in nonstandard analysis toward a new nonstochastic noise theory (more details in [39]). Finally, the parameter $\omega^2$ is obtained by

$\omega_{oe}^2 = \begin{cases} \text{arbitrary}, & t \in [0, \Delta) \\ n_f(t)/d_f(t), & t \in [\Delta, \infty) \end{cases}$  (41)

V. ADAPTIVE-CONTROL PROCEDURE

Fig. 4 shows the adaptive-control system implemented in practice in our laboratory. From time $t_0 = 0$ s, the estimator is linked to the signals coming from the encoder, $\theta_m$, and from the pair of strain gauges, Γ. The estimator thus begins to estimate as soon as the closed loop begins to work, and the parameter estimate is obtained immediately. When the natural frequency of the system has been estimated, at time $t_1$, the switch $s_1$ is switched on, and the control system is updated with this new parameter estimate. This is done in closed loop, in real time, within a very short period of time. The control system is updated by substituting ω with the estimated parameter $\omega_{oe}$ in (14) and (19). In fact, the feedforward term which ideally controls the inner-loop subsystem in open loop, $u_c^*$ [see (27)], also depends on the value of ω.
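The estimator (30)–(31) can be exercised in simulation before considering noise: integrate the noise-free beam dynamics (32) together with the filter states z1–z4 of (31) (plain Euler steps; the unit-step motor trajectory and the step size are illustrative choices, not the paper's experiment), then read off ω² = n_e/d_e after a short time.

```python
omega = 14.5                 # true natural frequency [rad/s]
w2_true = omega ** 2
dt, steps = 1e-5, 40_000     # 0.4 s of simulated time

theta_m = 1.0                # illustrative motor trajectory: a unit step

# States: tip angle/velocity, plus the filter states z1..z4 of (31)
th = thd = 0.0
z1 = z2 = z3 = z4 = 0.0
t = 0.0
for _ in range(steps):
    thdd = w2_true * (theta_m - th)      # beam dynamics (32)
    z1 += (z2 - 4.0 * t * th) * dt       # filters (31)
    z2 += 2.0 * th * dt
    z3 += z4 * dt
    z4 += t ** 2 * (theta_m - th) * dt
    th += thd * dt                       # explicit Euler step of (32)
    thd += thdd * dt
    t += dt

n_e = t ** 2 * th + z1
d_e = z3
w2_est = n_e / d_e                       # estimator (30)/(38), for t > Delta
# w2_est recovers omega^2 = 210.25 to within the integration error
```

Note that the estimate is obtained nonasymptotically, after a fraction of one oscillation period, which is the point of Proposition 4.1.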



This is because the variable $\theta_m^*$ is obtained from knowledge of the system natural frequency. Obviously, until the estimator obtains the true value of the natural frequency, the control system begins to work with an initial arbitrary value, which we select as $\omega_{oi}$. Taking these considerations into account, the adaptive controller can be defined as follows. For the outer loop, (14) is computed as

$\theta_m^*(t) = \dfrac{1}{x^2}\ddot\theta_t^*(t) + \theta_t^*(t)$  (42)

Moreover, (23) is computed as

$s^3 + k_2 s^2 + x^2(1 + k_1)s + x^2(k_2 + k_0) = 0$  (43)

For the inner loop, only the feedforward term in (27) changes; it depends on the bounded derivatives of the new $\theta_m^*(t)$ in (42). The variable x is defined as

$x = \omega_{oi}, \quad t < t_1$  (44)
$x = \omega_{oe}, \quad t \geq t_1$  (45)

VI. SIMULATIONS

The major problems associated with the control of flexible structures arise from the fact that the structure is a distributed-parameter system with many modes, and there are likely to be many actuators [40]–[44]. We propose to control a flexible beam whose second mode is far away from the first one, with the only actuator being the motor and with only the following sensors: an encoder, to measure the motor position, and a pair of strain gauges, to estimate the tip position. The difficulty is that high modal densities give rise to the well-known phenomenon of spillover [45], in which contributions from the unmodeled modes affect the control of the modes of interest. Nevertheless, the simulations below demonstrate that the hypothesis proposed before is valid and that the spillover effect is negligible. In the simulations, we consider a saturation of the motor input voltage in the range [−10, 10] V. The parameters used in the simulations are as follows: inertia J = 6.87·10⁻⁵ kg·m², viscous friction ν = 1.041·10⁻³ N·m·s, and electromechanical constant k = 0.21 (N·m)/V. With these parameters, A and B of the DC-motor transfer function in (12) are computed as A = 61.14 N/(V·kg·s) and B = 15.15 (N·s)/(kg·m). The mass used to simulate the flexible-beam behavior is m = 0.03 kg, the length is L = 0.5 m, and the flexural rigidity is EI = 0.264 N·m². According to these parameters, the stiffness is c = 1.584 N·m, and the natural frequency is ω = 14.5 rad/s. Note that we consider the stiffness of the flexible beam to be perfectly known; thus, the real natural frequency of the beam will be estimated, as will the tip position of the flexible bar. Nevertheless, the assumed value of the stiffness may deviate from the real one, in which case an approximation, denoted by $c_0$, is included in the control scheme. Here, we consider that the computed stiffness coincides with the real value, i.e., $c_0 = c$. A meticulous stability analysis of the control system under variations of the stiffness c is carried out in the Appendix, where a study of the error in the estimation of the natural frequency is also presented. The sample time used in the simulations is 1·10⁻³ s. The value $\hat\Gamma_{coul}$ = 119.7·10⁻³ N·m taken in the simulations is the true value estimated in real experiments; in voltage terms, it is $\hat\Gamma_c/k$ = 0.57 V. To design the gains of the inner-loop controller, the poles can be located at a reasonable position on the negative real axis. If the closed-loop poles are located at, for example, −95, the transfer function of the controller from (26), which depends on the location of the inner closed-loop poles and on the values of the motor parameters A and B, as shown in (28) and (29), respectively, results in the following expression:

$u_c - u_c^* = \dfrac{789s^2 + 5.6\cdot10^4 s + 1.3\cdot10^6}{s(s + 365)}\,(\theta_m^* - \theta_m)$  (46)
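The coefficients printed in (46), and the outer-loop controller quoted later in the Results, can be reproduced by the coefficient matching of (22)–(23) and (28)–(29). The sketch below uses the values from this section (A = 61.14, B = 15.15, inner poles at −95, outer poles at −10, initial estimate ω = 9 rad/s); small differences with respect to the printed numbers come from rounding.

```python
# Outer loop: match (s+a)^3 against s^3 + k2 s^2 + w^2(1+k1) s + w^2(k2+k0), Eqs. (22)-(23)
def outer_gains(a, w):
    k2 = 3.0 * a
    k1 = 3.0 * a ** 2 / w ** 2 - 1.0
    k0 = a ** 3 / w ** 2 - k2
    return k0, k1, k2

# Inner loop: match (s+p)^4 against s^4 + (g3+B)s^3 + (g3*B + g2*A)s^2 + g1*A*s + g0*A,
# Eqs. (28)-(29)
def inner_gains(p, A, B):
    g3 = 4.0 * p - B
    g2 = (6.0 * p ** 2 - g3 * B) / A
    g1 = 4.0 * p ** 3 / A
    g0 = p ** 4 / A
    return g0, g1, g2, g3

k0, k1, k2 = outer_gains(a=10.0, w=9.0)
# k2 = 30, k1 ~ 2.70, k0 ~ -17.65: the outer-controller coefficients of the Results

g0, g1, g2, g3 = inner_gains(p=95.0, A=61.14, B=15.15)
# g3 ~ 365, g2 ~ 795, g1 ~ 5.61e4, g0 ~ 1.33e6: cf. the inner controller (46)
```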




The feedforward term in (27), which depends on the values of the motor parameters, is computed in accordance with

$u_c^*(t) = 0.02\,\ddot\theta_m^*(t) + 0.25\,\dot\theta_m^*(t)$  (47)

The transfer function of the outer-loop controller (19), which depends on the location of the closed-loop poles of the outer loop (−10 in this case) and on the natural frequency of the bar, as shown in (22) and (23), respectively, is given by the following expression, with the natural frequency of the bar given by an initial arbitrary estimate $\omega_{oi}$ = 9 rad/s:

$\theta_m - \theta_m^* = \dfrac{2.7s - 17.7}{s + 30}\,(\theta_t^* - \theta_t)$  (48)

The open-loop reference control input from (14), in terms of the initial arbitrary estimate $\omega_{oi}$, is given by

$\theta_m^*(t) = \dfrac{1}{\omega_{oi}^2}\ddot\theta_t^*(t) + \theta_t^*(t) = 12.3\cdot10^{-3}\,\ddot\theta_t^*(t) + \theta_t^*(t)$  (49)

RESULTS

The desired reference trajectory used for the tracking problem of the flexible arm is specified as an eighth-order Bézier polynomial. The online algebraic estimation of the unknown parameter ω, in accordance with (31), (40), and (41), is carried out in Δ = 0.26 s [see Fig. 5(a)]. At the end of this small time interval, the controller is immediately updated, by switching on the switch $s_1$ (see Fig. 4), with the accurate parameter estimate $\omega_{oe}$ = 14.5 rad/s; once the controller has been updated, $s_1$ is switched off. Fig. 5(b) shows the trajectory tracking with the adaptive controller; the trajectory of the tip and the reference are superimposed. The tip position $\theta_t$ tracks the desired trajectory $\theta_t^*$ with no steady-state error [see Fig. 5(c), which shows the tracking error $\theta_t^* - \theta_t$]. For ω = 14.5 rad/s, A = 61.14, and B = 15.15, the transfer function of the new updated controller is then found to be

$\theta_m - \theta_m^* = \dfrac{0.3s - 25.7}{s + 30}\,(\theta_t^* - \theta_t)$  (50)

The open-loop reference control input $\theta_m^*(t)$ from (14), in terms of the new estimate $\omega_{oe}$, is given by

$\theta_m^*(t) = \dfrac{1}{\omega_{oe}^2}\ddot\theta_t^*(t) + \theta_t^*(t) = 4.7\cdot10^{-3}\,\ddot\theta_t^*(t) + \theta_t^*(t)$  (51)

The input control voltage to the DC motor is shown in Fig. 5(d), the coupling torque in Fig. 5(e), and the Coulomb-friction effect in Fig. 5(f). In Fig. 6, the motor angle $\theta_m$ is shown.

[Figures reproduced as plots in the original: tip angle in rad versus time in sec over 0–5 s, showing the tracking with the adaptive controller; and a second panel, titled "Without using estimation," showing the input and output angles.]
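A rest-to-rest reference of the kind used above (an eighth-order Bézier polynomial) and the flat feedforward (14)/(42) can be sketched as follows. The control points, the 1-rad target, and the 3-s duration are illustrative (the paper's actual trajectory coefficients are not reproduced here); with ω = 14.5 rad/s, the coefficient multiplying the acceleration is 1/ω² ≈ 4.7·10⁻³, as in (51).

```python
from math import comb

# Eighth-order Bezier polynomial: repeating the end control points makes the
# velocity, acceleration, and jerk vanish at both ends (rest-to-rest motion).
P = [0, 0, 0, 0, 0.5, 1, 1, 1, 1]            # illustrative control points

def bezier(tau):
    tau = min(max(tau, 0.0), 1.0)
    return sum(comb(8, i) * tau ** i * (1 - tau) ** (8 - i) * P[i]
               for i in range(9))

T = 3.0                                      # trajectory duration [s], illustrative
theta_ref = lambda t: bezier(t / T)          # theta_t^*(t): 0 -> 1 rad

def theta_m_ref(t, w2=14.5 ** 2, h=1e-4):
    """Flat feedforward (14): theta_m^* = theta_t^*''/w^2 + theta_t^*.

    The second derivative is approximated by central differences.
    """
    acc = (theta_ref(t + h) - 2.0 * theta_ref(t) + theta_ref(t - h)) / h ** 2
    return acc / w2 + theta_ref(t)
```

Because the trajectory starts and ends at rest, the feedforward motor reference coincides with the tip reference at both endpoints.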




[Figure reproduced as a plot in the original: tracking error, angle in rad (×10⁻³) versus time in sec over 0–5 s.]

CONCLUSION

A two-stage GPI-controller design scheme has been proposed in connection with a fast online closed-loop continuous-time estimator of the natural frequency of a flexible robot. The methodology requires only the measurement of the angular position of the motor and of the coupling torque. Thus, the computation of angular velocities and of higher derivatives, which always introduces noise into the system and makes suitable low-pass filters necessary, is not required. Among the advantages of this technique are the following: 1) a control that is robust with respect to Coulomb friction; 2) a direct estimation of the parameters, without an undesired translation between the discrete- and continuous-time domains; and 3) no statistical hypothesis on the signals is required, so closed-loop operation is easier to implement. This methodology is well suited to facing the important problem of control degradation in flexible arms as a consequence of payload changes. Its versatility and easy implementation make the controller suitable for application to flexible beams with more than one degree of freedom, by applying the control law to each of the separate dynamics that constitute the complete system. The proposed method establishes the basis for this adaptive control to be applied to more complex problems in flexible robotics.

REFERENCES

[1] S. K. Dwivedy and P. Eberhard, "Dynamic analysis of flexible manipulators, a literature review," Mech. Mach. Theory, vol. 41, no. 7, pp. 749–777, Jul. 2006.
[2] V. Feliu, "Robots flexibles: Hacia una generación de robots con nuevas prestaciones," Revista Iberoamericana de Automática e Informática Industrial, vol. 3, no. 3, pp. 24–41, 2006.
[3] R. H. Cannon and E. Schmitz, "Initial experiments on the end-point control of a flexible robot," Int. J. Rob. Res., vol. 3, no. 3, pp. 62–75, 1984.
[4] A. Arakawa, T. Fukuda, and F. Hara, "H∞ control of a flexible robotic arm (effect of parameter uncertainties on stability)," in Proc. IEEE/RSJ IROS, 1991, pp. 959–964.
[5] T. Yang, J. Yang, and P. Kudva, "Load adaptive control of a single-link flexible manipulator," IEEE Trans. Syst., Man, Cybern., vol. 22, no. 1, pp. 85–91, Jan./Feb. 1992.
[6] Y. P. Chen and H. T. Hsu, "Regulation and vibration control of an FEM-based single-link flexible arm using sliding-mode theory," J. Vib. Control, vol. 7, no. 5, pp. 741–752, 2001.
[7] Z. Su and K. A. Khorasani, "A neural network-based controller for a single-link flexible manipulator using the inverse dynamics approach," IEEE Trans. Ind. Electron., vol. 48, no. 6, pp. 1074–1086, Dec. 2001.
[8] V. G. Moudgal, W. A. Kwong, K. M. Passino, and S. Yurkovich, "Fuzzy learning control for a flexible-link robot," IEEE Trans. Fuzzy Syst., vol. 3, no. 2, pp. 199–210, May 1995.
[9] J. Becedas, J. Trapero, H. Sira-Ramírez, and V. Feliu, "Fast identification method to control a flexible manipulator with parameter uncertainties," in Proc. ICRA, 2007, pp. 3445–3450.
[10] V. Feliu and F. Ramos, "Strain gauge based control of single-link flexible very light




weight robots robust to payload changes," Mechatronics, vol. 15, no. 5, pp. 547–571, Jun. 2005.
[11] H. Olsson, K. J. Åström, and C. Canudas de Wit, "Friction models and friction compensation," Eur. J. Control, vol. 4, no. 3, pp. 176–195, 1998.
[12] V. Feliu, K. S. Rattan, and H. B. Brown, "Control of flexible arms with friction in the joints," IEEE Trans. Robot. Autom., vol. 9, no. 4, pp. 467–475, Aug. 1993.
[13] S. Cicero, V. Santos, and B. de Carvalho, "Active control to flexible manipulators," IEEE/ASME Trans. Mechatronics, vol. 11, no. 1, pp. 75–83, Feb. 2006.
[14] R. Marquez, E. Delaleau, and M. Fliess, "Commande par PID généralisé d'un moteur électrique sans capteur mécanique," in Première Conférence Internationale Francophone d'Automatique, 2000, pp. 521–526.
[15] M. Fliess and H. Sira-Ramírez, "An algebraic framework for linear identification," ESAIM Control Optim. Calculus Variations, vol. 9, pp. 151–168, 2003.
[16] M. Fliess, M. Mboup, H. Mounier, and H. Sira-Ramírez, Questioning Some Paradigms of Signal Processing via Concrete Examples. México, México: Editorial Lagares, 2003, ch. 1.
[17] J. Reger, H. Sira-Ramírez, and M. Fliess, "On non-asymptotic observation of nonlinear systems," in Proc. 44th IEEE Conf. Decision Control, Sevilla, Spain, 2005, pp. 4219–4224.
[18] H. Sira-Ramírez and M. Fliess, "On the output feedback control of a synchronous generator," in Proc. 43rd IEEE Conf. Decision Control, The Bahamas, 2004, pp. 4459–4464.
[19] J. R. Trapero, H. Sira-Ramírez, and V. Feliu-Batlle, "An algebraic frequency estimator for a biased and noisy sinusoidal signal," Signal Process., vol. 87, no. 6, pp. 1188–1201, Jun. 2007.
[20] J. R. Trapero, H. Sira-Ramírez, and V. Feliu-Batlle, "A fast on-line frequency


estimator of lightly damped vibrations in flexible structures," J. Sound Vib., vol. 307, no. 1/2, pp. 365–378, Oct. 2007.
[21] F. Bellezza, L. Lanari, and G. Ulivi, "Exact modeling of the flexible slewing link," in Proc. IEEE Int. Conf. Robot. Autom., 1991, vol. 1, pp. 734–804.
[22] E. Bayo, "A finite-element approach to control the end-point motion of a single-link flexible robot," J. Robot. Syst., vol. 4, no. 1, pp. 63–75, Feb. 1987.
[23] V. Feliu, K. S. Rattan, and H. Brown, "Modeling and control of single-link flexible arms with lumped masses," J. Dyn. Syst. Meas. Control, vol. 114, no. 7, pp. 59–69, 1992.
[24] W. Liu and Z. Hou, "A new approach to suppress spillover instability in structural vibration," Struct. Control Health Monitoring, vol. 11, no. 1, pp. 37–53, Jan.–Mar. 2004.
[25] R. D. Begamudre, Electro-Mechanical Energy Conversion With Dynamics of Machines. New York: Wiley, 1998.
[26] H. Sira-Ramírez and S. Agrawal, Differentially Flat Systems. New York: Marcel Dekker, 2004.
[27] P. C. Young, "Parameter estimation for continuous-time models—A survey," Automatica, vol. 17, no. 1, pp. 23–29, Jan. 1981.
[28] H. Unbehauen and G. P. Rao, "Continuous-time approaches to system identification—A survey," Automatica, vol. 26, no. 1, pp. 23–35, Jan. 1990.
[29] N. Sinha and G. Rao, Identification of Continuous-Time Systems. Dordrecht, The Netherlands: Kluwer, 1991.
[30] H. Unbehauen and G. Rao, Identification of Continuous Systems. Amsterdam, The Netherlands: North-Holland, 1987.
[31] L. Ljung, System Identification: Theory for the User, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1999.
[32] S. Moussaoui, D. Brie, and A. Richard, "Regularization aspects in continuous-time




model identification," Automatica, vol. 41, no. 2, pp. 197–208, Feb. 2005.
[33] T. Söderström and M. Mossberg, "Performance evaluation of methods for identifying continuous-time autoregressive processes," Automatica, vol. 36, no. 1, pp. 53–59, Jan. 2000.
[34] E. K. Larsson and T. Söderström, "Identification of continuous-time AR processes from unevenly sampled data," Automatica, vol. 38, no. 4, pp. 709–718, Apr. 2002.
[35] K. Mahata and M. Fu, "Modeling continuous-time processes via input-to-state filters," Automatica, vol. 42, no. 7, pp. 1073–1084, Jul. 2006.
[36] R. Johansson, A. Robertsson, K. Nilsson, and M. Verhaegen, "State-space system identification of robot manipulator dynamics," Mechatronics, vol. 10, no. 3, pp. 403–418, Apr. 2000.
[37] I. Eker, "Open-loop and closed-loop experimental on-line identification of a three-mass electromechanical system," Mechatronics, vol. 14, no. 5, pp. 549–565, Jun. 2004.
[38] J. Mikusinski and T. Boehme, Operational Calculus, 2nd ed., vol. I. New York: Pergamon, 1987.
[39] M. Fliess, "Analyse non standard du bruit," C. R. Acad. Sci. Paris, Ser. I, vol. 342, 2006.
[40] T. Önsay and A. Akay, "Vibration reduction of a flexible arm by time-optimal open-loop control," J. Sound Vib., vol. 147, no. 2, pp. 283–300, 1991.
[41] D. Sun, J. Shan, Y. Su, H. H. T. Liu, and C. Lam, "Hybrid control of a rotational flexible beam using enhanced PD feedback with a nonlinear differentiator and PZT actuators," Smart Mater. Struct., vol. 14, no. 1, pp. 69–78, Feb. 2005.
[42] J. Shan, H. T. Liu, and D. Sun, "Slewing and vibration control of a single-link flexible manipulator by positive position feedback


(PPF)," Mechatronics, vol. 15, no. 4, pp. 487–503, May 2005.
[43] C. M. A. Vasques and J. D. Rodrigues, "Active vibration control of smart piezoelectric beams: Comparison of classical and optimal feedback control strategies," Comput. Struct., vol. 84, no. 22/23, pp. 1402–1414, Sep. 2006.
[44] S. S. Han, S. B. Choi, and H. H. Kim, "Position control of a flexible gantry robot arm using smart material actuators," J. Robot. Syst., vol. 16, no. 10, pp. 581–595, Oct. 1999.
[45] M. J. Balas, "Active control of flexible systems," J. Optim. Theory Appl., vol. 25, no. 3, pp. 415–436, Jul. 1978.
[46] E. G. Christoforou and C. Damaren, "The control of flexible-link robots manipulating large payloads: Theory and experiments," J. Robot. Syst., vol. 17, no. 5, pp. 255–271, May 2000.
[47] C. J. Damaren, "Adaptive control of flexible manipulators carrying large uncertain payloads," J. Robot. Syst., vol. 13, no. 4, pp. 219–288, Apr. 1996.
[48] V. Feliu, J. A. Somolinos, and A. García, "Inverse dynamics based control system for a three-degree-of-freedom flexible arm," IEEE Trans. Robot. Autom., vol. 19, no. 6, pp. 1007–1014, Dec. 2003.
[49] K. Ogata, Modern Control Engineering, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1996.

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

218

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

MODELLING OF ONE LINK FLEXIBLE ARM MANIPULATOR USING TWO STAGE GPI CONTROLLER

B. Sudeep, ME Control Systems
Dr. K. Rama Sudha, Professor
Department of Electrical & Electronics Engineering, Andhra University, Visakhapatnam, Andhra Pradesh

Abstract—In this article, a two-stage Generalized Proportional Integral (GPI) controller is designed for the control of an uncertain flexible robotic arm with unknown internal parameters in the motor dynamics. The GPI controller is designed using a two-stage procedure entailing an outer loop, designed under the singular perturbation assumption of no motor dynamics, and subsequently an inner loop, which forces the motor response to track the position reference trajectory derived in the previous design stage. For both the inner and the outer loop, the GPI design method is easily implemented to achieve the desired tracking control. The control scheme proposed in this article is truly an output feedback controller, since it uses only the position of the motor. Velocity measurements, which always introduce errors and noise into the signals and make the use of suitable low-pass filters necessary, are not required; the only measured variables are the motor and tip positions. The scheme is robust with respect to nonlinear effects in the motor dynamics such as the Coulomb friction effect: with the proposed controller, no estimation of this nonlinear phenomenon is required. This is a substantial improvement over existing control schemes based on the on-line estimation of the friction parameters. The developments are based on two concepts, namely flatness-based exact feedforward linearization and Generalized Proportional Integral (GPI) control. As a result of the GPI control method, a classical compensating second-order network with a "roll-off" zero, bestowing a rather beneficial integral control action with respect to constant load perturbations, is obtained.

Key words—Flexible arm manipulator, trajectory tracking, generalized proportional integral (GPI) control.

I. INTRODUCTION

In this paper, a two-stage GPI controller is proposed for the regulation of an uncertain single-link flexible arm with unknown mass parameter at the tip, and unknown motor inertia, viscous friction and electromechanical constant, where the tracking of a trajectory must be very precise. As in Feliu and Ramos [1], a two-stage design procedure is used, but now from a GPI controller viewpoint, with the particular requirement of robustness with respect to unknown constant torque disturbances on the motor dynamics. The goal of this work is to control a very fast system knowing only the stiffness of the bar, without knowing the rest of the parameters of the system. A brief outline of this work is the following: Section II describes the system; Section III presents the proposed generalized proportional integral controller for the DC motor and the flexible bar; the results are verified via simulation in Section IV; and the last section is devoted to the main conclusions and further work.



II. MODEL DESCRIPTION

Consider the following simplified model of a very light flexible link, with all its mass concentrated at the tip, actuated by a DC motor, as shown in Fig. 1. The dynamics of the system is given by:

mL²·θ̈t = c(θm − θt)   (1)

Ku = J·θ̈m + V·θ̇m + Tc + Tcoup   (2)

Tcoup = c(θm − θt)/n   (3)

where m and L are the mass at the tip position and the length of the flexible arm, respectively, assumed to be unknown; c is the stiffness of the bar, which is assumed to be perfectly known; J is the inertia of the motor; V is the viscous friction coefficient; Tc is the unknown Coulomb friction torque; Tcoup is the measured coupling torque between the motor and the link; K is the known electromechanical constant of the motor; u is the motor input voltage; θ̈m and θ̇m stand for the acceleration and velocity of the motor gear; and n is the reduction ratio of the motor gear, so that θm (the motor angular position referred to the gear output) is the shaft angle divided by n. Finally, θt is the unmeasured angular position of the tip.

Fig. 1. Diagram of a single-link flexible arm

III. GENERALIZED PROPORTIONAL INTEGRAL CONTROLLER

In Laplace transform notation, the flexible bar transfer function, obtained from (1), can be written as follows:

θt(s)/θm(s) = Gb(s) = ω0²/(s² + ω0²)   (4)

where ω0 = √(c/(mL²)) is the unknown natural frequency of the bar, unknown due to the lack of precise knowledge of m and L; it is assumed that the constant c is perfectly known. As was done in [1], the coupling torque can be compensated in the motor by means of a feed-forward term, which decouples the motor dynamics from the flexible bar dynamics and allows an easier design task, since the two dynamics can then be studied separately. In this case the voltage applied to the motor is of the form:

u = uc + Tcoup/K   (5)

where uc is the voltage applied before the feed-forward term. The system in (2) is then given by:

K·uc = J·θ̈m + V·θ̇m + Tc   (6)

where Tc is a perturbation, depending only on the sign of the angular velocity, produced by the Coulomb friction phenomenon. The controller to be designed will be robust with respect to unknown piecewise constant torque disturbances affecting the motor dynamics. The perturbation-free system to be considered is then the following:

K·uc = J·θ̈m + V·θ̇m   (7)
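As a concrete illustration of the two decoupled subsystems above, the following sketch integrates the undamped bar dynamics (1)/(4) and the perturbation-free motor dynamics (7), written as θ̈m = A·uc − B·θ̇m with A = K/J and B = V/J, using a forward-Euler scheme. The discretization, step sizes and numeric values are our own illustrative assumptions, not the paper's:

```python
# Minimal sketch (assumed discretization and numbers, not from the paper):
# forward-Euler integration of the two decoupled subsystems,
#   bar (1)/(4):   theta_t'' = w0^2 * (theta_m - theta_t)
#   motor (7):     theta_m'' = A*u_c - B*theta_m'   with A = K/J, B = V/J.

def simulate_bar(w0, theta_m, T=2.0, dt=1e-4):
    """Undamped bar response to a constant motor angle, starting at rest."""
    th = om = 0.0
    hist = []
    for _ in range(int(T / dt)):
        om += dt * w0 ** 2 * (theta_m - th)
        th += dt * om
        hist.append(th)
    return hist

def simulate_motor(A, B, u_c, T=2.0, dt=1e-4):
    """Motor response to a constant voltage; returns the final velocity."""
    th = om = 0.0
    for _ in range(int(T / dt)):
        om += dt * (A * u_c - B * om)
        th += dt * om
    return om

bar = simulate_bar(w0=9.0, theta_m=1.0)           # oscillates between 0 and 2
v_ss = simulate_motor(A=61.14, B=15.15, u_c=0.1)  # settles near (A/B)*u_c
```

The undamped oscillation of the bar between 0 and 2·θm is exactly the behaviour the outer loop must damp, while the motor's first-order velocity response is what the inner loop shapes.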


To simplify the developments, let A = K/J and B = V/J. The DC motor transfer function is then written as:

θm(s)/uc(s) = A/(s(s + B))   (8)

The feed-forward technique has been successfully tested in previous experimental work, where it was implemented with direct-drive motors [5], [6] and with motors with reduction gears [1], [7], [8]. It is desired to regulate the load position θt(t) so that it tracks a given smooth reference trajectory θt*(t). For the synthesis of the feedback law, only the measured motor position θm and the measured coupling torque Tcoup are used. The desired controller should be robust with respect to unknown constant torque disturbances affecting the motor dynamics. One of the prevailing restrictions throughout our treatment of the problem is our desire not to measure, or compute on the basis of samplings, the angular velocities of either the motor shaft or the load.

A. Outer loop controller

Consider the model of the flexible link given in (1). This subsystem is flat, with flat output given by θt(t). This means that all variables of the unperturbed system may be written in terms of the flat output and a finite number of its time derivatives (see [9]). The parameterization of θm(t) in terms of θt(t) is given, in reduction-gear terms, by:

θm = (mL²/c)·θ̈t + θt   (9)

System (9) is a second-order system in which it is desired to regulate the tip position of the flexible bar θt towards a given smooth reference trajectory θt*(t), with θm acting as an auxiliary control input. Clearly, if there exists an auxiliary open-loop control input θm*(t) that ideally achieves the tracking of θt*(t) for suitable initial conditions, then it satisfies the second-order dynamics, in reduction-gear terms:

θm*(t) = (mL²/c)·θ̈t*(t) + θt*(t)   (10)

Subtracting (10) from (9), we obtain an expression in terms of the angular tracking errors:

ët = (c/mL²)(em − et)   (11)

where em = θm − θm* and et = θt − θt*. For this part of the design, view em as an incremental control input for the link dynamics. Suppose for a moment that it were possible to measure the angular velocity tracking error ėt; then the outer-loop feedback incremental controller could be proposed to be the following PID controller:

em = et − (mL²/c)·(k₂·ėt + k₁·et + k₀·∫₀ᵗ et(σ)dσ)   (12)

Integrating expression (11) once, we obtain:

ėt(t) = ėt(0) + (c/mL²)·∫₀ᵗ (em(σ) − et(σ))dσ   (13)


Disregarding the constant error due to the initial condition of the tracking-error velocity, the estimated error velocity can be computed in the following form:

ê̇t = (c/mL²)·∫₀ᵗ (em(σ) − et(σ))dσ   (14)

The integral reconstructor neglects the possibly nonzero initial condition ėt(0) and, hence, exhibits a constant estimation error. When the reconstructor is used in the derivative part of the PID controller, this constant error is suitably compensated thanks to the integral control action of the PID controller. The use of the integral reconstructor does not change the closed-loop features of the proposed PID controller; in fact, the resulting characteristic polynomial obtained in both cases is just the same, although the design gains {k₀, k₁, k₂} must be re-derived to account for the reconstructor. Substituting the integral reconstructor (14) for ėt into the PID controller (12), and after some rearrangements, the outer-loop control law becomes:

θm = θm* + et − (mL²/c)·(k₂·ê̇t + k₁·et + k₀·∫₀ᵗ et(σ)dσ)   (15)

The tip angular position cannot be measured, but it can certainly be computed from the expression relating the tip position to the motor position and the coupling torque; the implementation may then be based on the coupling torque measurement. Denoting the coupling torque by Γ, it is known to be given by:

Γ = c(θm − θt)/n = (mL²/n)·θ̈t   (16)

Thus, the tip angular position is readily expressed as:

θt = θm − (n/c)·Γ   (17)

Fig. 2 depicts the feedback control scheme under which the outer-loop controller would actually be implemented in practice. The closed outer-loop system in Fig. 2 is asymptotically exponentially stable. To specify the parameters {k₂, k₁, k₀}, we choose to locate the closed-loop poles in the left half of the complex plane. All three poles can be located at the same point of the real line, s = −a, using the following polynomial equation:

s³ + 3as² + 3a²s + a³ = 0   (18)

where the parameter a represents the desired location of the poles. The characteristic equation of the closed-loop system is:

s³ + k₂s² + k₁s + k₀ = 0   (19)

Identifying each term of expression (18) with those of (19), the design parameters {k₂, k₁, k₀} can be uniquely specified.

B. Inner loop controller

The angular position θm, generated as an auxiliary control input in the previous controller design step, is now regarded as a reference trajectory for the motor controller. Denote this reference trajectory by θmr*. The dynamics of the DC motor, including the Coulomb friction term, is given by (6). It is desired to design the controller to be robust with respect to this torque disturbance. A controller for the system should then include a double integral compensation action, which is
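The identification of (18) with (19) described above gives the closed forms k₂ = 3a, k₁ = 3a² and k₀ = a³ (as reconstructed here). The sketch below computes these gains and checks them mechanically by expanding (s + a)³ via polynomial multiplication:

```python
# Hedged sketch of the outer-loop gain selection: expand (s + a)^3 from
# eq. (18) and identify coefficients with s^3 + k2 s^2 + k1 s + k0 of
# eq. (19), which gives k2 = 3a, k1 = 3a^2, k0 = a^3.

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, highest power first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def outer_gains(a):
    """Return (k2, k1, k0) placing all three closed-loop poles at s = -a."""
    return 3 * a, 3 * a ** 2, a ** 3

a = 35.0                      # pole location used later in Section IV
k2, k1, k0 = outer_gains(a)

# Sanity check: (s + a)^3 must equal s^3 + k2 s^2 + k1 s + k0.
target = poly_mul(poly_mul([1, a], [1, a]), [1, a])
assert target == [1, k2, k1, k0]
```

With a = 35 this yields k₂ = 105, k₁ = 3675 and k₀ = 42875.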


capable of overcoming ramp tracking errors. The ramp error is mainly due to the integral angular velocity reconstructor operating in the presence of the constant, or piecewise constant, torque perturbations characteristic of the Coulomb phenomenon before stabilization around zero velocity. The integral reconstructor is hidden in the GPI control scheme.

Fig. 2. Flexible link DC motor system controlled by a two-stage GPI controller

The following feedback controller is proposed:

ev = (V̂/K)·ê̇m − (Ĵ/K)·(k₃·ê̇m + k₂·em + k₁·∫₀ᵗ em(σ)dσ + k₀·∫₀ᵗ∫₀^σ¹ em(σ₂)dσ₂dσ₁)   (20)

where em = θm − θmr* is the motor position tracking error, ev = uc − uc* is the incremental control input, and V̂, Ĵ are the available estimates of V and J. In order to avoid tracking-error velocity measurements, we again obtain an integral reconstructor for the angular velocity error signal:

ê̇m = (K/J)·∫₀ᵗ ev(σ)dσ − (V/J)·em   (21)

Replacing ê̇m from (21) into (20) and after some rearrangements, the feedback control law is obtained as:

(uc − uc*) = ((α₂s² + α₁s + α₀)/(s(s + α₃)))·(θmr* − θm)   (22)

The new controller clearly exhibits an integral action, which is characteristic of compensator networks that perform robustly against unknown constant perturbation inputs. The open-loop control uc*(t) that ideally achieves the open-loop tracking of the inner loop is given by:

uc*(t) = (1/A)·θ̈m*(t) + (B/A)·θ̇m*(t)   (23)

The closed inner-loop system in Fig. 2 is asymptotically exponentially stable. To design the parameters {α₃, α₂, α₁, α₀}, we choose to place the closed-loop poles at a desired location in the left half of the complex plane. All poles can be located at the same real value using the following polynomial equation:

(s + p)⁴ = s⁴ + 4ps³ + 6p²s² + 4p³s + p⁴ = 0   (24)

where the parameter p represents the common location of all the closed-loop poles. The characteristic equation of the closed-loop system is:

s⁴ + (α₃ + B)s³ + (α₃B + α₂A)s² + α₁As + α₀A = 0   (25)

Identifying the corresponding terms of equations (24) and (25), the parameters {α₃, α₂, α₁, α₀} may be uniquely obtained.
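Equating coefficients of (24) and (25) gives α₃ = 4p − B, α₂ = (6p² − α₃B)/A, α₁ = 4p³/A and α₀ = p⁴/A. The sketch below evaluates these formulas with the motor estimates used later in Section IV; note that p = 95 reproduces, up to rounding, the controller coefficients reported there in eq. (26):

```python
# Hedged sketch of the inner-loop gain selection, matching
#   (s + p)^4  =  s^4 + (a3+B)s^3 + (a3*B + a2*A)s^2 + a1*A*s + a0*A
# term by term (eqs. (24)-(25)).

def inner_gains(A, B, p):
    a3 = 4 * p - B
    a2 = (6 * p ** 2 - a3 * B) / A
    a1 = 4 * p ** 3 / A
    a0 = p ** 4 / A
    return a3, a2, a1, a0

# Motor estimates from Section IV; with p = 95 the resulting gains agree,
# to within rounding, with eq. (26): 365, 798, 56000, 1.3e6.
A, B, p = 61.14, 15.15, 95.0
a3, a2, a1, a0 = inner_gains(A, B, p)
```

This coefficient match is also a useful consistency check on the published numbers.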


IV. SIMULATIONS

Fig. 3. Simulink file for the GPI controller

A. Inner loop

The parameters used for the motor are given by an initial, arbitrary estimate of A = 61.14 (N/(V·kg·s)) and B = 15.15 (N·s/(kg·m)). The system should be as fast as possible, while taking care of possible saturations of the motor, which occur at 2 V. The poles can be located at a reasonable point on the negative real axis. If the closed-loop poles are located at, say, −95, the transfer function of the controller results in the following expression:

(uc − uc*) = ((798s² + 56000s + 1300000)/(s(s + 365)))·(θmr* − θm)   (26)

The feed-forward term in (23) is computed in accordance with:

uc* = 0.02·θ̈m* + 0.25·θ̇m*   (27)

B. Outer loop

The parameter used for the flexible arm is c = 1.584 (N·m); the mass m and the length L are unknown. The poles for the outer-loop design are located at −35 on the real axis. With the natural frequency of the bar given by an initial, arbitrary estimate of ω0 = 9 rad/s, the transfer function of the controller is given by the following expression:

(θm − θm*) = ((2.7s + 17.7)/(s + 30))·(θt* − θt)   (28)

The open-loop reference control input θm*(t) in (10) is given by:

θm*(t) = 0.0123·θ̈t*(t) + θt*(t)   (29)

C. Results

The desired reference trajectory used for the tracking problem of the flexible arm is specified as a delayed exponential function. The controlled arm response clearly shows highly oscillatory behaviour; nevertheless, the controller tracks the trajectory and places the arm in the required steady-state position.
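The inner-loop design above can be checked in simulation. The sketch below forward-Euler-integrates the motor model under the controller of eq. (26), realized with two integrator states; the state-space realization, the unit-step reference and the step size are our own assumptions, and motor saturation is ignored:

```python
# Hedged closed-loop sketch (assumed discretization, not the paper's Simulink
# model): motor theta'' = A*u - B*theta', controller from eq. (26),
#   C(s) = (798 s^2 + 56000 s + 1300000) / (s (s + 365)),
# realized with w = e/(s^2 + a3 s):  w' = v,  v' = e - a3*v,
#   u = a2*(e - a3*v) + a1*v + a0*w.   Saturation is ignored here.

def simulate_inner(ref=1.0, T=0.5, dt=2e-5, A=61.14, B=15.15):
    a3, a2, a1, a0 = 365.0, 798.0, 56000.0, 1300000.0
    th = om = w = v = 0.0            # motor angle/velocity, controller states
    for _ in range(int(T / dt)):
        e = ref - th                 # tracking error
        u = a2 * (e - a3 * v) + a1 * v + a0 * w
        w += dt * v
        v += dt * (e - a3 * v)
        om += dt * (A * u - B * om)  # motor dynamics
        th += dt * om
    return th

final = simulate_inner()             # converges to the step reference
```

Since the closed-loop poles sit near −95, the motor position settles well within the simulated half second; the controller's 1/s factor gives zero steady-state error to the step.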


Fig. 4. Trajectory tracking with an integral controller

Fig. 5. Trajectory tracking with a PI controller

Fig. 6. Trajectory tracking with the GPI controller

Fig. 7. Comparison of the different controllers

It can be noticed from Fig. 7 that the reference trajectory tracking error rapidly converges to zero when the GPI controller is used, and thus quite precise tracking of the desired trajectory is achieved.

D. Some remarks

The Coulomb friction torque in our system is given by μ·sign(θ̇m), where μ is the Coulomb friction coefficient and θ̇m is the motor velocity. The compensation voltage


that would be required to compensate the friction torque, as proposed in [2], is of about 0.36·sign(θ̇m) volts. In our feedback control scheme there is no need for such a nonlinear compensation term, since the controller automatically takes care of the piecewise constant perturbation arising from the Coulomb friction.

V. CONCLUSIONS

A two-stage GPI controller design scheme has been proposed for the reference trajectory tracking of a single-link flexible arm with unknown mass at the tip and unknown motor parameters. The GPI control scheme proposed here requires only the measurement of the angular position of the motor and that of the tip. For this second measurement, a replacement can be carried out in terms of a linear combination of the motor position and the measured coupling torque, provided by strain gauges conveniently located at the base of the flexible arm. The GPI feedback control scheme proposed here is quite robust with respect to the torque produced by the friction term, so its estimation is not really necessary. Finally, although the parameters of the GPI controller depend on the natural frequency of the flexible bar and on the motor parameters A and B, the method is robust with respect to zero-mean high-frequency noises and yields good results, as seen from digital computer simulations. Motivated by the encouraging simulation results presented in this article, the use of an on-line, non-asymptotic, algebraic estimator for these parameters will be the topic of a forthcoming publication.

REFERENCES

[1] V. Feliu and F. Ramos, "Strain gauge based control of single-link flexible very lightweight robots robust to payload changes," Mechatronics, vol. 15, pp. 547–571, 2004.
[2] H. Olsson, K. J. Åström, and C. Canudas de Wit, "Friction models and friction compensation," European Journal of Control, vol. 4, pp. 176–195, 1998.
[3] M. Fliess and H. Sira-Ramírez, "An algebraic framework for linear identification," ESAIM: Control, Optimisation and Calculus of Variations, vol. 9, pp. 151–168, 2003.
[4] M. Fliess, M. Mboup, H. Mounier, and H. Sira-Ramírez, "Questioning some paradigms of signal processing via concrete examples," ch. 1 in Algebraic Methods in Flatness, Signal Processing and State Estimation, H. Sira-Ramírez and G. Silva-Navarro (Eds.), Editorial Lagares, México City, 2003.
[5] C. A. Monje, Y. Chen, B. M. Vinagre, D. Xue, and V. Feliu, Fractional-order Systems and Controls: Fundamentals and Applications. Springer, 2010.
[6] ——, "Modeling and control of single-link flexible arms with lumped masses," Journal of Dynamic Systems, Measurement, and Control, vol. 114, no. 3, pp. 59–69, 1992.
[7] V. Feliu, J. A. Somolinos, C. Cerrada, and J. A. Cerrada, "A new control scheme of single-link flexible manipulators robust to payload changes," Journal of Intelligent and Robotic Systems, vol. 20, pp. 349–373, 1997.
[8] V. Feliu, J. A. Somolinos, and A. García, "Inverse dynamics based control system for a three-degree-of-freedom flexible arm," IEEE Transactions on Robotics and Automation, vol. 19, no. 6, pp. 1007–1014, 2003.
[9] H. Sira-Ramírez and S. K. Agrawal, Differentially Flat Systems. New York: Marcel Dekker, 2004.


DESIGN OF A ROBUST FUZZY LOGIC CONTROLLER FOR A SINGLE-LINK FLEXIBLE MANIPULATOR

K. Rini Sowharika, PG Scholar
Dr. R. Vijaya Santhi, Asst. Professor
Department of Electrical Engineering, Andhra University, Visakhapatnam, Andhra Pradesh

Abstract: In this paper, a single-link flexible manipulator is designed and controlled using a fuzzy-PID controller. The controller is employed to control an uncertain flexible robotic arm with unknown internal parameters in the motor dynamics. In order to verify the method, the proposed controller is tested against conventional integral and PID controllers. The simulation results show the robustness of the proposed fuzzy-PID controller under motor dynamics.

I. INTRODUCTION

FLEXIBLE arm manipulators span a wide range of applications: space robots, nuclear maintenance, microsurgery, collision control, contouring control, pattern recognition, and many others. The system, described by partial differential equations (PDEs), is a distributed-parameter system of infinite dimension, and its non-minimum-phase behavior makes it difficult to achieve high-level performance. Control techniques such as linear control, optimal control, adaptive control, sliding-mode control, neural networks, or fuzzy logic deal with the control of flexible manipulators using modeling based on a truncated (finite-dimensional) model obtained from either the finite-element method (FEM) or the assumed-modes method. These methods require several sensors in order to obtain accurate trajectory tracking, and many of them also require knowledge of all the system parameters to design the controller properly. Here, a new method is considered to cancel the vibration of the flexible beam; it combines an online identification technique with a control scheme in a suitable manner, using as the only measurements the motor angle, obtained from an encoder, and the coupling torque, obtained from a pair of strain gauges, as done in earlier work. In that work, robust control schemes minimized the effect of nonlinearities in the motor dynamics, such as the Coulomb friction torque.

Fuzzy logic is a form of many-valued logic; it deals with reasoning that is approximate rather than fixed and exact. Compared to traditional binary sets, fuzzy logic variables may have a truth value that ranges in degree between 0 and 1. Fuzzy logic has been extended to handle the concept of partial truth, where the truth value may range between completely true and completely false. Furthermore, when linguistic variables are used, these degrees may be


managed by specific functions. The fuzzy logic control technique has been a good replacement for conventional control techniques. Many researchers have suggested that these controllers have the potential for robust control in the face of system parameter and load uncertainties, and it has been found that fuzzy logic control can perform very effectively when the operating conditions change rapidly. These features also make it very attractive for power system applications, since a power system is a highly non-linear and chaotic system.

II. MODEL DESCRIPTION

A. Flexible-Beam Dynamics

The flexible slewing beam studied in this paper is considered to be a Euler–Bernoulli beam whose behavior is described by a PDE. Its dynamics involves infinite vibration modes; as the frequency of those modes increases, their amplitude decreases. This means that reduced models, in which only the more significant low frequencies are kept, can be used. In order to reduce the model, several approaches have been proposed: 1) distributed-parameter models, where the infinite dimension is truncated to a finite number of vibration modes; and 2) lumped-parameter models, where a spatial discretization leads to a finite-dimensional model. In this sense, the spatial discretization can be done by both a FEM and a lumped-mass model.

Fig. 1. Diagram of a single-link flexible arm.

Fig. 2. Solid model design of the flexible joint arm.

A single-link flexible manipulator with a tip mass, which can rotate about the Z-axis perpendicular to the paper, is modeled as shown in Fig. 1. The axial deformation and the gravitational effect are neglected, because the flexible beam floats over an air table, which cancels the gravitational effect and the friction with the surface of the table. Since structural damping always increases the stability margin of the system, a design without considering damping may provide a valid but conservative result.

The real structure studied in this paper is made of carbon fiber, with high mechanical resistance and very small density. We study it under the hypothesis of small deformations, with all its mass concentrated at the tip position: because the mass of the load is much bigger than that of the bar, the mass of the beam can be neglected. In other words, the flexible beam vibrates in the fundamental mode;


therefore, the rest of the modes are very far from the first one and can be neglected. Thus, we consider only one mode of vibration. The main characteristic of this model is that the influence of load changes can be modeled in a very easy manner, so an adaptive controller can be easily applied. Based on these considerations, we propose the following model for the flexible beam:

mL²·θ̈t = c(θm − θt)   (1)

where m is the unknown mass at the tip position, and L and c (= 3EI/L) are the length of the flexible arm and the stiffness of the bar, respectively, assumed to be perfectly known. The stiffness depends on the flexural rigidity EI and on the length of the bar L. θm is the angular position of the motor gear, while θt and θ̈t are the unmeasured angular position and angular acceleration of the tip, respectively.

B. DC-Motor Dynamics

A common electromechanical actuator in many control systems is the DC motor. The DC motor used here is supplied by a servo amplifier with a current inner-loop control. We can write the dynamic equation of the system by using Newton's second law:

Ku = J·θ̈m + ν·θ̇m + Γc + Γcoup/n   (2)

where K is the electromechanical constant of the motor, J the motor inertia, ν the viscous friction coefficient, Γc the Coulomb friction torque, Γcoup the coupling torque and n the reduction ratio of the motor gear. The dynamics of the complete system, actuated by the DC motor, is thus given by the following simplified model:

mL²·θ̈t = c(θm − θt)   (3)

Ku = J·θ̈m + ν·θ̇m + Γc + Γcoup/n   (4)

Γcoup = c(θm − θt)/n   (5)

Equation (3) represents the dynamics of the flexible beam; (4) expresses the dynamics of the DC motor; and (5) stands for the coupling torque measured in the hub and produced by the translation of the flexible beam, which is directly proportional to the stiffness of the beam and to the difference between the angles of the motor and the tip position. Here

ω0 = √(c/(mL²))   (6)

is the unknown natural frequency of the bar, unknown due to the lack of precise knowledge of m. The coupling torque can be canceled in the motor by means of a compensation term. In this case, the voltage applied to the motor is of the form:

u = uc + Γcoup/(nK)   (7)

where uc is the voltage applied before the compensation term. The system in (4) is then given by:

K·uc = J·θ̈m + ν·θ̇m + Γc   (8)

Fig. 3. Compensation of the coupling torque in the hub.

The controller to be designed will be robust with respect to the unknown piecewise constant torque disturbance Γc affecting the motor dynamics. The perturbation-free system to be considered is then the following:

K·uc = J·θ̈m + ν·θ̇m   (9)
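A small numeric sketch of (1) and (6): with the lumped stiffness c = 3EI/L and tip mass m, the natural frequency is ω0 = √(c/(mL²)). The EI, L and m values below are made-up placeholders, chosen only so that c lands near the 1.584 N·m figure used in the companion GPI paper; they are not this paper's hardware data:

```python
# Hedged numeric sketch of eqs. (1) and (6); all numbers are illustrative
# placeholder assumptions, not measured values from the paper.
import math

def bar_stiffness(EI, L):
    """c = 3*E*I/L for the lumped single-mode model."""
    return 3.0 * EI / L

def natural_frequency(c, m, L):
    """w0 = sqrt(c / (m*L^2)), eq. (6)."""
    return math.sqrt(c / (m * L ** 2))

c = bar_stiffness(EI=0.37, L=0.7)         # ~1.586 N*m
w0 = natural_frequency(c, m=0.02, L=0.7)  # rad/s for this assumed tip mass
```

The point of the exercise is the sensitivity: since m is unknown, ω0 is unknown too, which is exactly why the controllers in both papers must be robust to errors in ω0.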


where θm = θ̄m/n, with θ̄m the angle at the motor shaft before the reduction gear. To simplify the developments, let A = K/J and B = ν/J. The DC-motor transfer function is then written as:

G(s) = θm(s)/uc(s) = A/(s(s + B))   (10)

Fig. 3 shows the compensation scheme of the coupling torque measured in the hub. The regulation of the load position θt(t) to track a given smooth reference trajectory θt*(t) is desired. For the synthesis of the feedback control law, we use only the measured motor position θm and the measured coupling torque Γcoup. One of the prevailing restrictions throughout our treatment of the problem is our desire not to measure, or compute on the basis of samplings, the angular velocities of the motor shaft or of the load.

The parameterization of θm in terms of θt is given, in reduction-gear terms, by:

θm = (mL²/c)·θ̈t + θt   (11)

System (11) is a second-order system in which the regulation of the tip position of the flexible bar θt towards a given smooth reference trajectory θt*(t) is desired, with θm acting as an auxiliary control input. Clearly, if there exists an auxiliary open-loop control input θm*(t) that ideally achieves the tracking of θt*(t) for suitable initial conditions, it then satisfies the second-order dynamics, in reduction-gear terms:

θm*(t) = (mL²/c)·θ̈t*(t) + θt*(t)   (12)

Subtracting (12) from (11), an expression in terms of the angular tracking errors is obtained:

ët = (c/mL²)(em − et)   (13)

where em = θm − θm*(t) and et = θt − θt*(t). Suppose for a moment that we were able to measure the angular-velocity tracking error ėt; then the outer-loop feedback incremental controller could be proposed to be the following PID controller:

em = et − (mL²/c)·(k₂·ėt + k₁·et + k₀·∫₀ᵗ et(σ)dσ)

In such a case, the closed-loop tracking error evolves governed by:

e⃛t + k₂·ët + k₁·ėt + k₀·et = 0   (14)

The design parameters {k₂, k₁, k₀} are then chosen so as to render the closed-loop characteristic polynomial a Hurwitz polynomial with desirable roots.

III. INNER LOOP CONTROLLER

The angular position θm, generated as an auxiliary control input in the previous controller design step, is now regarded as a reference trajectory for the motor controller. We denote this reference trajectory by θmr*(t). The design of a controller that is robust with respect to the torque disturbance Γc is desired. The following feedback controller is proposed:

uc = uc* + (ν̂/K)·ê̇m − (Ĵ/K)·(k₃·ê̇m + k₂·em + k₁·∫₀ᵗ em(σ)dσ + k₀·∫₀ᵗ∫₀^σ¹ em(σ₂)dσ₂dσ₁)   (15)

where em = θm − θmr* and ν̂, Ĵ are the available estimates of ν and J.
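The Hurwitz requirement on (14) can be stated explicitly: for s³ + k₂s² + k₁s + k₀, the Routh–Hurwitz conditions are k₂ > 0, k₀ > 0 and k₂k₁ > k₀. A small sketch (the gain values are illustrative only):

```python
# Hedged sketch: Routh-Hurwitz test for the third-order error dynamics of
# eq. (14), e''' + k2 e'' + k1 e' + k0 e = 0.

def is_hurwitz_3rd_order(k2, k1, k0):
    """True iff s^3 + k2 s^2 + k1 s + k0 has all roots in the open left half-plane."""
    return k2 > 0 and k0 > 0 and k2 * k1 > k0

# Placing all poles at s = -a gives k2 = 3a, k1 = 3a^2, k0 = a^3, which
# satisfies the conditions for any a > 0 (since 9a^3 > a^3).
a = 35.0
ok = is_hurwitz_3rd_order(3 * a, 3 * a ** 2, a ** 3)
```

Pole placement at a common real value −a is therefore always an admissible choice for this loop.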


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

The integral reconstructor (16) for the angular-velocity error signal is obtained. Replacing this reconstructor in the proposed controller and, after some rearrangements, the feedback control law (17) is obtained. The open-loop control that ideally achieves the open-loop tracking of the inner loop is given by (18).

The inner-loop system is exponentially stable. We can choose to place all the closed-loop poles at a desired location in the left half of the complex plane to design the controller parameters. As done with the outer loop, all poles can be located at the same real value, -p, and the parameters can be uniquely obtained by equating the terms of the two following polynomials: the desired characteristic polynomial

(s + p)^4 = s^4 + 4p s^3 + 6p^2 s^2 + 4p^3 s + p^4 = 0   (19)

and the closed-loop characteristic polynomial (20), whose coefficients contain the controller parameters. The parameter p represents the common location of all the closed-loop poles, this being strictly positive.

IV. FUZZY-PID CONTROLLER

Here, a classical FLS is represented as in Fig. 4. As shown, rules play a central role in the FLS framework. Rules can be provided by experts or can be extracted from numerical data. The IF-part of a rule is its antecedent, while the THEN-part of a rule is its consequent. Fuzzy sets are associated with terms that appear both in the antecedent and in the consequent, while Membership Functions (MFs) are used to describe these fuzzy sets.

Fig. 4: structure of the fuzzy logic controller

Table 1: Control rules of the fuzzy controller

Input1 \ Input2 |  N   Z   P
       N        |  P   N   N
       Z        |  N   P   N
       P        |  N   P   N

Fuzzification: translates inputs (real values) to fuzzy values. Inference System: applies a fuzzy reasoning mechanism to obtain a fuzzy output. Type Reducer/Defuzzifier: the defuzzifier transduces the output to precise values; the type reducer transforms a Type-2 fuzzy set into a Type-1 fuzzy set.
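The fuzzification, rule firing and defuzzification steps just described can be sketched in a few lines. The triangular membership shapes, the normalized [-1, 1] universe, the singleton output centres and the row/column orientation of Table 1 are assumptions made for illustration; the paper does not specify them:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# N/Z/P fuzzy sets on an assumed normalized [-1, 1] universe.
MFS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}

# Table 1, read as (input1 term, input2 term) -> output term.
RULES = {
    ("N", "N"): "P", ("N", "Z"): "N", ("N", "P"): "N",
    ("Z", "N"): "N", ("Z", "Z"): "P", ("Z", "P"): "N",
    ("P", "N"): "N", ("P", "Z"): "P", ("P", "P"): "N",
}
OUT = {"N": -1.0, "Z": 0.0, "P": 1.0}  # singleton output centres (assumed)

def fuzzy_step(x1, x2):
    """One inference step: min-AND rule firing, weighted-average defuzzification."""
    num = den = 0.0
    for (t1, t2), out_term in RULES.items():
        w = min(tri(x1, *MFS[t1]), tri(x2, *MFS[t2]))  # rule firing strength
        num += w * OUT[out_term]
        den += w
    return num / den if den else 0.0
```

In a fuzzy PID arrangement the two inputs would typically be the scaled error and error derivative, and the crisp output would be scaled into a control action.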


Knowledge Base: contains a set of fuzzy rules and a set of membership functions known as the database. The two normalized input variables, input1 and input2, are first fuzzified by two interval type-2 fuzzy sets, i.e., 'positive' and 'negative', represented by (1) and (2) respectively.

Fig. 5: Block diagram of the single-link flexible manipulator using fuzzy PID control



V. SIMULATION & RESULTS

The major problems associated with the control of flexible structures arise because the structure is a distributed-parameter system with many modes, and there are likely to be many actuators. Three types of controllers are used to control the system, and they are simulated and compared for better results. They are:

1. Integral controller
2. Conventional PID controller
3. Fuzzy PID controller

The simulation results of the above controllers are given below.

I) Using Integral Controller

Fig. 6: the simulation result of the integral controller



II) Using Conventional PID Controller

Fig. 7: the simulation result of the conventional PID controller

III) Using Fuzzy PID Controller

Fig. 8: the simulation result of the fuzzy PID controller



IV) The Combined Figure of the Simulated Outputs

Fig. 9: the combined figure of the simulated outputs

Therefore, the above results prove that the system controlled by the fuzzy PID controller gives better and more accurate results than the systems controlled by the conventional integral and PID controllers.
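The kind of comparison reported above can be reproduced in spirit with a simple step-response experiment. The sketch below compares only an integral-only controller against a full PID on a generic lightly damped second-order plant; the paper's actual flexible-link model, fuzzy PID rules and gains are not reproduced, so every number here is an illustrative assumption:

```python
def iae(kp, ki, kd, t_end=10.0, dt=1e-3):
    """Integral of absolute tracking error for a unit step reference,
    PID law on an assumed plant x'' = -0.8 x' - 4 x + 4 u (wn=2, zeta=0.1)."""
    x = v = integ = 0.0
    prev_err, err_sum = 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - x                       # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        a = -0.8 * v - 4.0 * x + 4.0 * u    # plant acceleration (Euler step)
        v += a * dt
        x += v * dt
        err_sum += abs(err) * dt
    return err_sum

print(iae(0.0, 0.5, 0.0))  # integral-only controller
print(iae(2.0, 2.0, 0.5))  # PID controller
```

With these illustrative gains the PID loop accumulates far less tracking error than the integral-only loop, mirroring the ranking reported in the figures.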

VI. CONCLUSION:

Here, a fuzzy PID controller is designed to control the single-link flexible manipulator. The proposed controller is designed based on the conventional PID controller, and the proposed fuzzy PID controller is tested for different motor dynamics. Simulation results show the efficiency of the proposed controller when compared with the conventional integral and PID controllers.

VII. REFERENCES:

[1] S. K. Dwivedy and P. Eberhard, "Dynamic analysis of flexible manipulators, a literature review," Mech. Mach. Theory, vol. 41, no. 7, pp. 749-777, Jul. 2006.

[2] R. H. Cannon and E. Schmitz, "Initial experiments on the end-point control of a flexible robot," Int. J. Rob. Res., vol. 3, no. 3, pp. 62-75, 1984.

[3] Y. P. Chen and H. T. Hsu, "Regulation and vibration control of an FEM-based single-link


flexible arm using sliding-mode theory," J. Vib. Control, vol. 7, no. 5, pp. 741-752, 2001.

[4] Z. Su and K. A. Khorasani, "A neural-network-based controller for a single-link flexible manipulator using the inverse dynamics approach," IEEE Trans. Ind. Electron., vol. 48, no. 6, pp. 1074-1086, Dec. 2001.

[5] V. G. Moudgal, W. A. Kwong, K. M. Passino, and S. Yurkovich, "Fuzzy learning control for a flexible-link robot," IEEE Trans. Fuzzy Syst., vol. 3, no. 2, pp. 199-210, May 1995.

[6] J. Becedas, J. Trapero, H. Sira-Ramírez, and V. Feliu, "Fast identification method to control a flexible manipulator with parameter uncertainties," in Proc. ICRA, 2007, pp. 3445-3450.

[7] V. Feliu and F. Ramos, "Strain gauge based control of single link flexible very lightweight robots robust to payload changes," Mechatronics, vol. 15, no. 5, pp. 547-571, Jun. 2005.

[8] H. Olsson, K. J. Åström, and C. C. de Wit, "Friction models and friction compensation," Eur. J. Control, vol. 4, no. 3, pp. 176-195, 1998.

[9] S. Cicero, V. Santos, and B. de Carvalho, "Active control to flexible manipulators," IEEE/ASME Trans. Mechatronics, vol. 11, no. 1, pp. 75-83, Feb. 2006.

[10] E. Bayo, "A finite-element approach to control the end-point motion of a single-link flexible robot," J. Robot. Syst., vol. 4, no. 1, pp. 63-75, Feb. 1987.

[11] V. Feliu, K. S. Rattan, and H. Brown, "Modeling and control of single-link flexible arms with lumped masses," J. Dyn. Syst. Meas. Control, vol. 114, no. 7, pp. 59-69, 1992.

[12] W. Liu and Z. Hou, "A new approach to suppress spillover instability in structural vibration," Struct. Control Health Monitoring, vol. 11, no. 1, pp. 37-53, Jan.-Mar. 2004.

[13] R. D. Begamudre, Electro-Mechanical Energy Conversion With Dynamics of Machines. New York: Wiley, 1998.

[14] H. Sira-Ramírez and S. Agrawal, Differentially Flat Systems. New York: Marcel Dekker, 2004.

[15] P. C. Young, "Parameter estimation for continuous-time models - A survey," Automatica, vol. 17, no. 1, pp. 23-29, Jan. 1981.

[16] H. Unbehauen and G. P. Rao, "Continuous-time approaches to system identification - A survey," Automatica, vol. 26, no. 1, pp. 23-35, Jan. 1990.

[17] N. Sinha and G. Rao, Identification of Continuous-Time Systems. Dordrecht, The Netherlands: Kluwer, 1991.

[18] H. Unbehauen and G. Rao, Identification of Continuous Systems. Amsterdam, The Netherlands: North-Holland, 1987.




FPGA Implementation Of RF Technology And Biometric Authentication Based ATM Security

Author 1: K. MOHAN
Email id: umail2mohan@gmail.com
M.Tech in VLSI System Design,
Siddarth Institute of Engineering and Technology, Puttur.

Author 2: S. VAMSEE KRISHNA
Email id: vamseebe@gmail.com
Assistant Professor, Dept. of ECE,
Siddarth Institute of Engineering and Technology, Puttur.

ABSTRACT: The three-factor authentication method was introduced as an advancement over two-factor authentication schemes in remote authentication. The three factors used in authentication are a smart card, a password and a biometric, and the authentication is based on the characteristics of these three factors. Biometrics were introduced to improve the security of remote authentication: due to their uniqueness and characteristics, they are quite suitable for user authentication and also reduce the drawbacks inherited from passwords and smart cards.

'something you are' is considered multifactor. Most early authentication mechanisms are based entirely on passwords. While such protocols are relatively easy to implement, passwords (and human-generated passwords in particular) have several vulnerabilities. As an example, human-generated and memorable passwords are typically short strings of characters and (sometimes) poorly chosen. By exploiting these vulnerabilities, simple dictionary attacks can crack passwords in a short time [1]. Due to these issues, hardware authentication tokens were introduced to strengthen security in user authentication, and smart-card-based password authentication has become one of the most common authentication mechanisms.
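The dictionary attack mentioned above [1] can be illustrated in a few lines. The word list and the unsalted-hash scheme are deliberately toy assumptions, chosen only to show why dictionary words make weak passwords:

```python
import hashlib

# Tiny stand-in for a real cracking word list.
WORDLIST = ["letmein", "password", "dragon", "sunshine"]

def crack(stolen_hash: str):
    """Try every word in the list against a stolen unsalted SHA-256 hash."""
    for word in WORDLIST:
        if hashlib.sha256(word.encode()).hexdigest() == stolen_hash:
            return word
    return None

stolen = hashlib.sha256(b"dragon").hexdigest()
print(crack(stolen))  # dragon
```

A password not present in any dictionary forces the attacker back to brute force over the full character space, which is the point of the strong-password advice later in this section.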

Instead of smart cards, we design an RF-technology scheme to identify the account holder's details. A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. One way to do this is by comparing selected facial features from the image against a facial database. With the help of a camera, the system detects whether the account holder is authorized or unauthorized. Only an authorized person may access the account; if an unauthorized person is detected, an intimation is sent to the owner's mobile using the MMS modem. The above process is carried out with an FPGA and MATLAB.

Smart-card-based password authentication provides two-factor authentication: a login requires the client to possess a legitimate smart card and to supply the correct password.

Keywords: RF Technology, Face Recognition Method, VLSI.

An authentication factor is a piece of information and a process used to demonstrate or verify the identity of a person or other entity requesting access under security constraints. Multifactor authentication (MFA) is a system in which two or more different factors are used in conjunction to authenticate. Using more than one factor is typically referred to as "strong authentication". A process that solicits multiple answers to challenge questions as well as retrieves 'something you have' or 'something you are' is considered multifactor. True multifactor authentication requires the use of solutions from two or more of the three classes of factors.

INTRODUCTION
An authentication factor is a piece of information and technique used to certify or verify the identity of a person or other entity requesting access under security constraints. Three-factor authentication is a system in which two or more different factors are used to authenticate the person. Using more than one factor is typically referred to as "strong authentication". The process of answering multiple challenge questions, as well as presenting 'something you have' or




Using multiple solutions from the same class does not constitute multifactor authentication. Two-factor or multifactor authentication is exactly what it sounds like: instead of using only one type of authentication factor, such as only things a user knows (login IDs, passwords, secret pictures, shared secrets, requested personal information, etc.), two-factor authentication requires the addition of a second factor, something the user HAS or something the user IS. Two-factor authentication is not a new idea, especially in the banking business. Two-factor authentication is employed whenever a bank customer visits their local ATM. One authentication factor is the physical ATM card the customer slides into the machine; the second factor is the PIN they enter. Without both, authentication cannot occur.


password should not be a word that can be found in a dictionary. A dictionary attack uses a database of words, such as a dictionary, trying all the words in the database for a match. It is worth stating the obvious here: attackers have access to dictionaries in other languages too. In other words, a password using a word from another language is as easy to crack as a password in your own language. A common way that tokens are used for authentication is with websites: the user types the number displayed on the token into a web page. If the user types in the same number known by the server at that time, the user is authenticated. It is common to use multifactor authentication with token-based authentication: in addition to entering the number displayed on the token, the user is often required to enter a username and password. This proves they have something (the token) and they know something (their password).
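The time-synchronized token check described above can be sketched as follows. This is a simplified illustration, not the standardized TOTP algorithm; the hash construction and the 30-second window are assumptions made for the sketch:

```python
import hashlib

def token_code(secret: bytes, now: int, window: int = 30) -> str:
    """Both the token and the server derive the same 6-digit code from a
    shared secret and the current 30-second time window."""
    digest = hashlib.sha256(secret + str(now // window).encode()).hexdigest()
    return str(int(digest, 16) % 1_000_000).zfill(6)

def server_verify(secret: bytes, submitted: str, now: int) -> bool:
    """Accept the login only if the submitted code matches the server's
    own computation for the current window."""
    return submitted == token_code(secret, now)
```

Because the code changes each window, a captured value is useless shortly after; combining it with a username and password gives the two-factor property described in the text.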

The first factor is the most common factor used and can be a password or a simple personal identification number (PIN). However, it is also the easiest to beat. When using passwords, it is important to use strong passwords. A strong password contains a mixture of upper case, lower case, numbers, and special characters. In the past, security professionals advised that passwords should be at least eight characters long. However, with the increasing strength of password-cracking tools, it is common to hear professionals recommending longer passwords; for example, many organizations require that administrator passwords be at least fifteen characters long.

AUTHENTICATION METHODS:
Token-Based Authentication: The token-based category is, as the name suggests, authentication based on a TOKEN such as a key, a magnetic card, a smart card, a badge or a passport. Just as a person who loses a key would not be able to open the lock, a user who loses his token would not be able to log in; as such, the token-based authentication category is quite liable to fraud, theft or loss of the token itself.

Longer passwords are harder to remember unless they are put into some kind of meaningful order. For example, a phrase like "Security breeds success" can become a password of "S3curityBr33d$Succ3$$". Notice that each word starts with a capital, each lowercase "s" is changed to a $, each lowercase "e" is changed to a 3, and the spaces are removed. The password is easier to remember, yet is very complex. However, if a user is required to remember a long password without any meaning, such as "1kqd9%lu@7cpw#", they are far more likely to write the password down, weakening the security.
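The passphrase-to-password transformation described above can be written directly. Note that applying the stated rules literally turns the final "ss" of "success" into "$$":

```python
def passphrase_to_password(phrase: str) -> str:
    """Capitalize each word, replace lowercase 's' with '$' and
    lowercase 'e' with '3', and drop the spaces."""
    words = [w.capitalize() for w in phrase.split()]
    subs = {"s": "$", "e": "3"}
    return "".join(subs.get(ch, ch) for w in words for ch in w)
```

The result stays memorable (it is still the phrase) while drawing from upper case, lower case, digits and symbols.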

Knowledge-Based Authentication: Knowledge-based authentication is simply the use of typical passwords, PINs or pictures to gain access to most computer systems and networks. Textual (alphabetical) and graphical user authentication are two methods that are currently used. True textual authentication using a username and password has inherent weaknesses and disadvantages, which are discussed in the following section.

Passwords should not include personal data such as a user's name or username. In addition, a

Inherent-Based Authentication: The inherent-based authentication category, also called biometrics, is, as the name suggests, the automated method of identification based on measurable physiological or behavioural characteristics such as fingerprints, palm prints, hand geometry, face recognition, voice recognition and other similar methods. Biometric characteristics are neither duplicable nor transferable; they are constant and immutable. It is therefore nearly impossible to alter such characteristics or fake them. Furthermore, such characteristics cannot be transferred to other users nor stolen, as happens with tokens, keys and cards. Unlike a user's secret password, biometric characteristics, for instance the user's fingerprint or iris pattern, are no secret; thus there is no danger of a breach in security.

In the proposed algorithm, first-level authentication is provided by a smart card using an RF transmitter and RF receiver. Whenever an authorized frequency is received, the camera opens automatically and second-level authentication starts; this is done by face recognition using the PCA algorithm implemented in MATLAB. If the person is authorized, he is forwarded to a text-based password, i.e. the third level; otherwise the system automatically sends an MMS to the owner when an unauthorized person is detected, and the door lock and buzzer are activated. If the owner replies with a secret code, the person can access the account. The second step is the capture of a face image, normally done using a still or video camera. The face image is passed to the recognition software for recognition (identification or verification). This normally involves a number of steps, such as normalizing the face image and then creating a 'template' or 'print' to be compared with those in the database. A match can either be a true match, which would lead to investigative action, or a 'false positive', which means the recognition algorithm made a mistake and the alarm would go off incorrectly. Every component of the system can be located at a different place within a network, making it easy for one operator to respond to a variety of systems.
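The three-level flow described above can be sketched as a simple decision routine. The function name, the enrolled card ID, the override code and the return labels are all hypothetical, introduced only for illustration:

```python
def atm_authenticate(rf_id, face_match, password_ok, owner_code=None):
    """Returns the outcome of the three-level check:
    'rf-denied', 'mms-sent', 'override-granted', 'pwd-denied' or 'granted'."""
    AUTHORIZED_IDS = {0xA5}          # assumed enrolled RF card IDs

    if rf_id not in AUTHORIZED_IDS:  # level 1: RF smart card
        return "rf-denied"
    if not face_match:               # level 2: PCA face recognition
        # door lock and buzzer activate; an MMS goes to the owner,
        # who may override by replying with the secret code
        if owner_code == "secret":
            return "override-granted"
        return "mms-sent"
    if not password_ok:              # level 3: text-based password
        return "pwd-denied"
    return "granted"
```

In the real system the three boolean inputs would come from the RF receiver, the MATLAB face recognizer and the keypad respectively, with the FPGA sequencing the levels.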

PROPOSED AUTHENTICATION TECHNIQUE:

The information age is quickly revolutionizing the way transactions are completed. Everyday actions are increasingly being handled electronically, rather than with pencil and paper or face to face. This advancement in electronic transactions has resulted in a greater demand for fast and accurate user identification and authentication. Access codes for buildings, bank accounts and computer systems often use PINs for identification and security clearance. Using the correct PIN gains access and the transaction can occur, but the user of the PIN is not verified. When smart cards are lost or stolen, an unauthorized user can often come up with the correct personal codes. This paper describes how face recognition technology can help in real-world banking machines.

Block diagram: the RFID receiver, webcam, door lock, alarm and MMS modem are interfaced to the PC through the FPGA.


FPGA
An FPGA is a device that contains a matrix of reconfigurable gate-array logic circuitry. When an FPGA is configured, the internal circuitry is connected in a way that creates a hardware implementation of the software application. Unlike processors, FPGAs use dedicated hardware for processing logic and do not have an operating system. FPGAs are truly parallel in nature, so different processing operations do not have to compete for the same resources. As a result, the performance of one part of the application is not affected when additional processing is added. Also, multiple control loops can run on a single FPGA device at different rates. FPGA-based control systems can enforce critical interlock logic and can be designed to prevent I/O forcing by an operator. However, unlike hard-wired printed circuit board (PCB) designs that have fixed hardware resources, FPGA-based systems can literally rewire their internal circuitry to allow reconfiguration after the system is deployed in the field. FPGA devices deliver the performance and reliability of dedicated hardware circuitry. A single FPGA can replace thousands of discrete components by incorporating millions of logic gates in a single integrated circuit (IC) chip. The internal resources of an FPGA chip consist of a matrix of configurable logic blocks (CLBs) surrounded by a boundary of I/O blocks. Signals are routed within the FPGA matrix by programmable interconnect switches and wire routes.

FPGAs contain programmable logic elements called "logic blocks", and a hierarchy of reconfigurable interconnects that allow the blocks to be "wired together", somewhat like many (changeable) logic gates that can be inter-wired in (many) different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, the logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory.

RF ENCODER AND DECODER:
General Encoder and Decoder Operations: The Holtek HT-12E IC encodes 12 bits of data and serially transmits this data on receipt of a transmit enable, i.e. a LOW signal on pin 14, /TE. Pin 17, the D_OUT pin of the HT-12E, serially transmits whatever data is available on pins 10, 11, 12 and 13, or D0, D1, D2 and D3. Data is transmitted at a frequency selected by the external oscillator resistor.

By using the switches connected to the data pins on the HT-12E, as shown in the schematic, we can select the information in binary format to send to the receiver. The receiver section consists of the Ming RE-99 and the HT-12D decoder IC. The DATA_IN pin 14 of the HT-12D reads the 12-bit binary information sent by the HT-12E and then places this information on its output pins. Pins 10, 11, 12 and 13 are the data out pins of the HT-12D, D0, D1, D2 and D3. The HT-12D receives the 12-bit word and interprets the first 8 bits as address and the last 4 bits as data. Pins 1-8 of the HT-12E are the address pins. Using the address pins of the HT-12E, we can select different addresses for up to 256 receivers. The address is set by tying pins 1-8 on the HT-12E to ground, or simply leaving them open. The address selected on the HT-12E circuit must match the address selected on the HT-12D circuit (exactly), or the information will be ignored by the receiving circuit.

When the received address from the encoder matches the decoder's, the Valid Transmission pin 17 of the HT-12D goes HIGH to indicate that a valid transmission has been received, and the 4 bits of data are latched to the data output pins, 10-13. The transistor circuit shown in the schematic uses the VT, or valid transmission, pin to light the LED. When the VT pin goes HIGH it activates the 2N2222 transistor, which in turn delivers power to the LED, providing a visual indication of a valid transmission reception.

Controlling the Project with an FPGA: Using these RF transmitter and receiver circuits with an FPGA is easy. We can simply replace the switches used for selecting data on the HT-12E with the output pins of the FPGA. We can also use another output pin to drive TE, the transmit enable, on the HT-12E; by taking pin 14 LOW we cause the transmitter section to transmit the data on pins 10-13.

To receive information, simply attach the HT-12D output pins to the FPGA. The VT, or valid transmission, pin of the HT-12D can signal the FPGA to grab the 4 bits of data from the data output pins. If you are using an FPGA with interrupt capabilities, use the VT pin to cause a jump to an interrupt vector and process the received data.

The HT-12D data output pins LATCH and remain in this state until another valid transmission is received. NOTE: you may notice that in both schematics each of the Holtek chips has resistors attached to pins 15 and 16. These resistors must be the precise values shown in the schematic; they set the internal oscillators of the HT-12E/HT-12D. It is recommended that you choose a 1% resistor for each of these to ensure the correct circuit oscillation.

Range of Operation
Figure 4: RF-434 Pin Diagram

The normal operating range using (only) the LOOP TRACE ANTENNA on the transmitter board is about 50 feet. By connecting a quarter wave antenna using 9.36 inches of 22 gauge wire to both circuits, you can extend this range to several hundred feet. Your actual range may vary due to your finished circuit design and environmental conditions. The transistors and diodes can be substituted with any common equivalent type. These will normally depend on the types and capacities of the particular loads you want to control and should be selected accordingly for your intended application.
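The HT-12E/HT-12D framing described above (8 address bits followed by a 4-bit data nibble, with non-matching addresses ignored by the receiver) can be modelled in a few lines; the function names are illustrative:

```python
def ht12_encode(address: int, data: int) -> int:
    """Pack an 8-bit address and a 4-bit data nibble into one 12-bit word,
    mirroring the HT-12E's first-8-bits-address, last-4-bits-data format."""
    assert 0 <= address <= 0xFF and 0 <= data <= 0xF
    return (address << 4) | data

def ht12_decode(word: int, my_address: int):
    """Return the data nibble if the address matches, else None
    (the HT-12D ignores words whose address does not match its own)."""
    address, data = word >> 4, word & 0xF
    return data if address == my_address else None
```

With 8 address bits, up to 256 receivers can share a channel, exactly as the address-pin discussion above states.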

RF DETAILS
The TWS-434 and RWS-434 are extremely small, and are excellent for applications requiring short-range RF remote controls. The transmitter module is only 1/3 the size of a standard postage stamp, and can easily be placed inside a small plastic enclosure. TWS-434: The transmitter output is up to 8 mW at 433.92 MHz, with a range of approximately 400 feet (open area) outdoors. Indoors, the range is approximately 200 feet, and it will go through most walls.

RF 434 MHz Transmitter, Modulation: ASK. The TWS-434 transmitter accepts both linear and digital inputs, can operate from 1.5 to 12 V DC, and makes building a miniature handheld RF transmitter very easy. The TWS-434 is approximately the size of a standard postage stamp.

MMS Modems
A GSM modem can be an external modem device, such as the Wavecom FASTRACK modem: insert a GSM SIM card into this modem, and connect the modem to an available port on your computer. A GSM modem can also be a PC Card installed in a computer, such as the Nokia Card Phone. A GSM modem may even be a standard GSM mobile phone with the appropriate cable and software driver to connect to a port on your computer. Phones like the Nokia 7110 with a DLR-3 cable, or various Ericsson phones, are often used for this purpose.

A dedicated GSM modem (external or PC Card) is usually preferable to a GSM mobile phone, because of some compatibility problems that can exist with mobile phones. For example, if you wish to be able to receive inbound MMS messages with your gateway, and you are using a mobile phone as your modem, you must use a mobile phone that does not support WAP push or MMS. This is because the mobile phone automatically processes these messages without forwarding them via the modem interface. Similarly, some mobile phones will not allow you to correctly receive SMS text messages longer than 160 bytes (known as "concatenated SMS" or "long SMS"). This is because these long messages are actually sent as separate SMS messages, and the phone tries to reassemble the message before forwarding it via the modem interface. (We have observed this latter problem with the Ericsson R380, whereas it does not appear to be a problem with many other Ericsson models.) When you install your GSM modem, or connect your GSM mobile phone to the computer, be sure to install the appropriate Windows modem driver from the device manufacturer. To simplify configuration, the NowSMS/MMS gateway communicates with the device via this driver. An additional advantage of using this driver is that you can use Windows diagnostics to ensure that the modem is communicating properly with the computer. The NowSMS/MMS gateway can simultaneously support multiple modems, provided that your hardware has the available communications port resources.
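The "concatenated SMS" behaviour mentioned above can be illustrated with a simple splitter. The 153-character part size assumes room is reserved for the concatenation user-data header; treat the exact numbers as illustrative:

```python
def split_concatenated_sms(text: str, limit: int = 160, part_limit: int = 153):
    """A message that fits in one SMS is sent whole; a longer one is split
    into parts, each leaving header room for reassembly on the far side."""
    if len(text) <= limit:
        return [text]
    return [text[i:i + part_limit] for i in range(0, len(text), part_limit)]
```

The compatibility problem described in the text arises when the phone, rather than the gateway, performs this reassembly and never exposes the individual parts over the modem interface.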

Figure 5: RTL view of VHDL Code

GSM smart modem

Previous Analysis:

Preserving security and privacy is a challenging issue in distributed systems. This paper makes a step forward in solving this issue by proposing a generic framework for three-factor authentication to protect services and resources from unauthorized use. The authentication is based on password, smart card and biometrics. Our framework not only demonstrates how to obtain secure three-factor authentication from two-factor authentication, but also addresses several prominent issues of biometric authentication in distributed systems (e.g., client privacy and error tolerance). The analysis shows that the framework satisfies all security requirements on three-factor authentication and has several other practice-friendly properties (e.g., key agreement, forward security and mutual authentication). The future work is to fully identify the practical threats on three-factor authentication and develop concrete three-factor authentication protocols with better performance.

Figure 6: Technological Schematic

Whenever the face is detected as valid, a login form (shown below) opens automatically.

ATM Security Page

Results:


After Recognition

After recognition, the system checks whether the person is authorized or unauthorized (screens: Unauthorized Person / Authorized Person).

8. CONCLUSION:
There are several schemes that manage the three-factor authentication method, but it is a very difficult task to obtain both client-side and server-side security. They also tried to provide privacy of the user biometric; even though the scheme achieved privacy protection, it could not withstand the password attack. In addition, a server-side attack is another crucial issue in such remote authentication schemes. Face recognition technologies have generally been associated with very costly, top-secure applications. Hence, our proposed scheme addresses the concerns of user privacy, template protection and trust issues, and gives the advantage of protecting data from everyone except the claimed identity.

REFERENCES

[1] D. V. Klein, "Foiling the Cracker: A Survey of, and Improvements to, Password Security," Proc. Second USENIX Workshop Security, 1990.

[2] A. K. Jain, R. Bolle, and S. Pankanti, Eds., Biometrics: Personal Identification in Networked Society. Norwell, MA: Kluwer, 1999.

[3] D. Maltoni, D. Maio, A. K. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. ACM SIGOPS Operating Syst. Rev., vol. 38, no. 4, pp. 91-96, Oct. 2004.

[4] E. Dawson, J. Lopez, J. A. Montenegro, and E. Okamoto, "BAAI: Biometric Authentication and Authorization Infrastructure," Proc. IEEE Int. Conf. on Information Technology: Research and Education (ITRE'03), pp. 274-278, 2004.

[5] J. K. Lee, S. R. Ryu, and K. Y. Yoo, "Fingerprint-Based Remote User Authentication Scheme Using Smart Cards," Electron. Lett., vol. 38, no. 12, pp. 554-555, Jun. 2002.

Bank Login Page

Remaining Balance


[6] C.C. Chang and I.C. Lin, “Remarks on Fingerprint-Based Remote User Authentication Scheme Using Smart Cards,” [7] C.H. Lin and Y.Y. Lai, “A Flexible Biometrics Remote User Authentication Scheme,” Compute. Standards Interfaces, vol. 27, no. 1, pp. 19-23, Nov. 2004 [8]3D Face Tracking and Expression Interference from a 2D sequence Using Manifold Learning: WeikaiLiao and GerardMedioni, [9] A. Elgammal. Learning to track: Conceptual manifoldmapforclosedformtracking.CVPR2005,pp.724–730.1 [10] A.Elgammal and.-S. Lee. Inferring 3dbodyposefromsilhouettes using activity manifold learning.CVPR2004,pp.681–688 [11]L.GuandT.Kanade.3dalignment of face single image. CVPR 2006,pp.1305–1312.





Design and Simulation of High Speed CMOS Full Adder

Author 1: V. VINAY KUMAR, Email id: vvinaykumar153@gmail.com, M.Tech in VLSI System Design, Siddarth Institute of Engineering and Technology, Puttur, Chittoor (Dist).

Author 2: V. VISWANATH, Email id: vissuvnl@gmail.com, Assistant Prof., Department of ECE, Siddarth Institute of Engineering and Technology, Puttur, Chittoor (Dist).

ABSTRACT: Adders are the basic building blocks of any computing system, and arithmetic operations are used throughout digital computer systems. Addition is the fundamental arithmetic operation and the base of operations such as multiplication; the basic adder cell can be modified into a subtractor by adding another XOR gate and can be used for division. The 1-bit full adder cell is therefore the most important basic block of a system's arithmetic unit. In this paper we analyse the 1-bit full adder in the SR-CPL style and the transmission-gate style of design.

Keywords: SR-CPL, Transmission Gate Full Adder, Leakage, T-Spice

1. Introduction: Due to rapid advances in electronic technology, the electronics market is becoming more competitive, and consumer electronic products face ever more stringent quality requirements. The design of consumer electronic products requires not only light weight and slim size, but also low power and fast time-to-market. Integrated circuit (IC) designers therefore have to weigh issues such as chip area, power consumption, operation speed, and circuit regularity. Because these design issues bear directly on the competitiveness of electronic systems, IC designers and electronic design automation (EDA) vendors are keen to develop effective methodologies that achieve smaller chip area, lower power consumption, faster operation speed, and a more regular circuit structure. The arithmetic circuit is the core of an electronic system: if the arithmetic circuit has good characteristics, the overall performance of the system improves dramatically, and its performance directly determines whether the electronic system is competitive in the market. It is well known that the full adder is the crucial building block used to design multipliers, microprocessors, digital signal processors (DSPs), and other arithmetic circuits; it is also dominant in fast adder design. Effectively designing a full adder with small chip area, low power consumption, fast operation, and a regular structure is therefore a common requirement for IC designers.

Since the full adder plays such an important role in arithmetic designs, many IC designers have put great effort into full adder circuit research, and many types of full adder have been developed for a variety of applications. These types differ in circuit structure and performance. Full adder designs must trade off features such as power consumption, operating speed, transistor count, full-swing output voltage, and output driving capability, depending on the needs of the target system. One important class of full adder designs focuses on minimum transistor count to save chip area [1, 2, 3, 4, 5]. These full adder designs with fewer




transistors do save chip area, but because of the reduced MOS transistor count they suffer from threshold-voltage loss and poor output driving capability. Some full adders are designed to make up for the threshold-voltage loss to improve circuit performance [6, 7, 8]. These full-swing designs keep using few MOS transistors to reduce circuit complexity, along with power consumption and delay time; however, having no output driver, they suffer signal attenuation when connected in series to construct multi-bit adders. Therefore, many studies combine features such as full-swing voltage, strong output driving capability, and good power-delay product [9, 10, 11, 12, 13] to boost the overall performance of the full adder design. The penalties for taking so many design issues into consideration, however, are increased circuit complexity, larger chip area, harder layout design, and increased transistor count. How to design a full adder circuit with better performance and a simpler structure is thus the main goal of the field. In order to design a full adder with low circuit complexity, good circuit performance, and a modularized structure, a multiplexer-based full adder is proposed in this study; it has not only a regular, modular structure but also superior circuit performance.
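Since the paper does not give the equations of its multiplexer-based full adder, the idea can only be sketched under a standard assumption: with H = A XOR B, a 2:1 multiplexer selects the sum as S = Ci ? H′ : H and the carry as Co = H ? Ci : A. A minimal Python sketch (the formulation is illustrative, not the authors' actual netlist):

```python
# Sketch of a multiplexer-based 1-bit full adder. The mux formulation below
# is an assumption (the paper gives no equations): S = Ci ? H' : H and
# Co = H ? Ci : A, with H = A XOR B.

def mux(sel, a, b):
    """2:1 multiplexer: returns a when sel is 0, b when sel is 1."""
    return b if sel else a

def mux_full_adder(a, b, ci):
    h = a ^ b                # XOR module
    hn = h ^ 1               # XNOR module
    s = mux(ci, h, hn)       # sum:   Ci selects between H and H'
    co = mux(h, a, ci)       # carry: H selects between A and Ci
    return s, co

# Exhaustive check against binary addition
for a in (0, 1):
    for b in (0, 1):
        for ci in (0, 1):
            s, co = mux_full_adder(a, b, ci)
            assert 2 * co + s == a + b + ci
```

The carry mux exploits the fact that when A = B (H = 0) the carry equals A, and when A != B (H = 1) the carry equals Ci.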


The rest of this paper is organized as follows: Section 2 discusses previous work on full adder design; Section 3 presents the novel multiplexer-based full adder; Section 4 shows the experimental results and discussion; Section 5 gives a brief conclusion.

2. Previous Works on Full Adder Design
The function of a full adder is to sum two binary operands A and B and a carry input Ci, and to generate a sum output (S) and a carry output (Co). Two factors affect the performance of a full adder design: the full adder logic architecture, and the circuit design techniques used to realize that architecture. The full adder design approach therefore combines different types of logic architecture and circuit design technique to improve the total performance.

The traditional full adder logic architecture can be divided into three modules [6, 7, 8]; its block diagram is shown in Fig. 1. The NewHPSC full adder [9] and the Hybrid-CMOS full adder [10] belong to this category; both realize the three modules using pass-transistor logic (PTL) and static complementary metal-oxide-semiconductor (CMOS) design techniques. Fig. 2 shows another logic architecture in which the full adder is divided into four modules. The DPLFA and SR-CPL full adders [13] belong to this category; they realize the full adder modules using double pass-transistor logic (DPL) and swing-restored complementary pass-transistor logic (SR-CPL), respectively.

Fig. 2: Full adder logic architecture with four modules.

2. Design Considerations A. Impact of Logic Style The logic style used in logic gates basically influences the speed, size, power dissipation, and the wiring complexity of a circuit. The circuit delay is determined by the number of inversion levels, the number of transistors in series, transistor sizes [3] (i.e., channel widths), and intra- and inter-cell wiring




capacitances. Circuit size depends on the number of transistors, their sizes, and the wiring complexity. Power dissipation is determined by the switching activity and the node capacitances (made up of gate, diffusion, and wire capacitances), the latter of which is in turn a function of the same parameters that control circuit size. Finally, wiring complexity is determined by the number of connections and their lengths, and by whether single-rail or dual-rail logic is used. All these characteristics may vary considerably from one logic style to another, which makes the proper choice of logic style crucial for circuit performance. As far as cell-based design techniques (e.g., standard cells) and logic synthesis are concerned, ease of use and generality of logic gates are important as well. Robustness with respect to voltage and transistor scaling, tolerance of varying process and operating conditions, and compatibility with surrounding circuitry are further important aspects influenced by the chosen logic style.


B. Logic Style Requirements for Low Power
According to the formula

Pdyn = fclk * Vdd * sum over all n nodes i of [ alpha_i * (C_i * Vdd + Q_i) ],

the dynamic power dissipation of a digital CMOS circuit depends on the supply voltage Vdd, the clock frequency fclk, the node switching activities alpha_i, the node capacitances C_i, the node short-circuit charges Q_i, and the number of nodes n. A reduction of any of these parameters reduces the dissipated power. However, clock frequency reduction is only feasible at the architecture level; at the circuit level the frequency is usually regarded as constant in order to meet a given throughput requirement. All the other parameters are influenced to some degree by the logic style applied, so some general logic style requirements for low-power circuit implementation can be stated.

Switching activity reduction: The switching activity of a circuit is predominantly controlled at the architectural and register-transfer level (RTL). At the circuit level, large differences are primarily observed between static and dynamic logic styles; only minor transition-activity variations are observed among different static logic styles and among logic gates of different complexity, even where glitching is concerned.

Switched capacitance reduction: Capacitive load, originating from transistor capacitances (gate and diffusion) and interconnect wiring, is to be minimized. This is achieved by having as few transistors and circuit nodes as possible, and by reducing transistor sizes to a minimum. In particular, the number of (highly capacitive) inter-cell connections and their length (influenced by the circuit size) should be kept minimal. Another source of capacitance reduction is found at the layout level, which, however, is not discussed in this paper. Transistor downsizing is an effective way to reduce the switched capacitance of logic gates on non-critical signal paths. For that purpose, a logic style should be robust against transistor downsizing, i.e., correct functioning of logic gates with minimal or near-minimal transistor sizes must be guaranteed (ratioless logic).

Supply voltage reduction: The supply voltage and the choice of logic style are indirectly related through delay-driven voltage scaling. That is, a logic style providing fast logic gates to speed up critical signal paths allows a reduction of the supply voltage while still achieving a given throughput. For that purpose, a logic style must be robust against supply voltage reduction, i.e., performance and correct functioning of gates must be guaranteed at low voltages as well. This becomes a severe problem at very low voltages of around 1 V and below, where noise margins become critical.

Short-circuit current reduction: Short-circuit currents (also called dynamic leakage currents or overlap currents) may vary considerably between logic styles. They also depend strongly on input signal slopes (steep and balanced slopes are better) and thus on transistor sizing. Their contribution to overall power consumption is limited but not negligible (10-30%), except at very low voltages, where short-circuit currents disappear. A low-power logic style should have minimal short-circuit currents and, of course, no static currents beyond the inherent CMOS leakage currents.
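The low-power requirements above all trace back to the dynamic power relation Pdyn = fclk * Vdd * sum of alpha_i * (C_i * Vdd + Q_i) over the circuit nodes. A small sketch evaluating it (all node values below are made-up examples, not measured data):

```python
# Illustrative evaluation of the dynamic power formula. Every number here is
# an invented example value, not a measurement from the paper.

def dynamic_power(f_clk, vdd, nodes):
    """nodes: list of (activity, capacitance_F, short_circuit_charge_C)."""
    return f_clk * vdd * sum(a * (c * vdd + q) for a, c, q in nodes)

# three hypothetical nodes of a small gate
nodes = [(0.5, 10e-15, 0.0),
         (0.25, 15e-15, 1e-16),
         (0.1, 8e-15, 0.0)]

p = dynamic_power(f_clk=100e6, vdd=1.8, nodes=nodes)
print(f"{p:.3e} W")   # prints about 3.099e-06 W for these example values
```

The formula makes the trade-offs explicit: halving Vdd cuts the capacitive term quadratically, while lowering activity or capacitance scales it linearly.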




C. Logic Style Requirements for Ease-of-Use
For ease of use and generality of gates, a logic style should be highly robust and have friendly electrical characteristics: decoupling of gate inputs and outputs (i.e., at least one inverter stage per gate) as well as good driving capabilities and full signal swings at the gate outputs, so that logic gates can be cascaded arbitrarily and work reliably in any circuit configuration. These properties are prerequisites for cell-based design and logic synthesis, and they also allow efficient gate modeling and gate-level simulation. Furthermore, a logic style should allow the efficient implementation of arbitrary logic functions and provide some regularity with respect to circuit and layout realization. Both low-power and high-speed versions of logic cells (e.g., by way of transistor sizing) should be supported in order to allow flexible power-delay tuning by the designer or the synthesis tool.

D. Static versus Dynamic Logic Styles
A major distinction, also with respect to power dissipation, must be made between static and dynamic logic styles. As opposed to static gates, dynamic gates are clocked and work in two phases, a precharge and an evaluation phase. The logic function is realized in a single NMOS pull-down or PMOS pull-up network, resulting in small input capacitances and fast evaluation times. This makes dynamic logic attractive for high-speed applications. However, the large clock loads and the high signal transition activities due to the precharging mechanism result in excessively high power dissipation. Also, the usage of dynamic gates is not as straightforward and universal as it is for static gates, and robustness is considerably degraded. With the exception of some very special circuit applications, dynamic logic is not a viable candidate for low-power circuit design.


3. Design Technologies: Many techniques attempt to solve the problems mentioned above.

Full Adder Design:
So = H′Ci + HCi′ (1)
Co = HCi + H′A (2)
where H = A XOR B and H′ = A XNOR B. A full adder is made up of an XOR-XNOR module, a sum module, and a carry module. The XOR-XNOR module performs XOR and XNOR operations on inputs A and B, generating the outputs H and H′. H and H′ are then applied to the sum and carry modules to generate the sum output So and the carry output Co.

1) SR-CPL Full Adder Design: The SR-CPL adder is designed using a combination of pass-transistor logic (PTL) and static CMOS techniques to provide high energy efficiency and improved driving capability. An energy-efficient CMOS full adder is implemented using swing-restored complementary pass-transistor logic (SR-CPL) and PTL techniques to optimize its power-delay product (PDP).

Fig4: SR-CPL Full Adder Design

2) Transmission Gate Full Adder: Transmission-gate CMOS (TG) uses transmission-gate logic to realize complex logic functions with a small number of complementary transistors. It solves the problem of low logic-level swing by using pMOS as well as nMOS transistors.

Fig1: Transmission Gate Full Adder
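With H = A XOR B and H′ = A XNOR B, the sum and carry decomposition S = H′·Ci + H·Ci′ and Co = H·Ci + H′·A can be checked exhaustively against binary addition; a quick Python check:

```python
# Exhaustive check of the XOR/XNOR-based full adder decomposition:
#   S  = H'*Ci + H*Ci'     with H = A XOR B, H' = A XNOR B
#   Co = H*Ci  + H'*A

def fa(a, b, ci):
    h = a ^ b
    hn = h ^ 1
    s = (hn & ci) | (h & (ci ^ 1))
    co = (h & ci) | (hn & a)
    return s, co

for a in (0, 1):
    for b in (0, 1):
        for ci in (0, 1):
            s, co = fa(a, b, ci)
            assert 2 * co + s == a + b + ci   # matches binary addition
print("decomposition verified for all 8 input combinations")
```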




4. Simulation Analysis: The full adders were designed with the different logic styles using Tanner Tools and simulated.

Fig6: Transmission Gate Full Adder simulation

Fig11: SR-CPL full Adder Design

Tabulation:

Circuit           | Power Dissipation
------------------|--------------------
SR-CPL            | 2.309128e-005 W
Transmission Gate | 4.876595e-008 W

The above full adder circuits were simulated with Tanner Tools using the TSMC 0.18 um process, and the delay and power of each circuit were tabulated.

5. Conclusion: In this paper we present low-power full adder designs with fewer transistors. CMOS design styles allow the full adder to be built with a lower transistor count as well as lower power dissipation. For performance validation, Tanner simulations were conducted on full adders implemented in the TSMC 0.18 um CMOS process, evaluating power consumption and delay time. Among full adders with output drivability, the SR-CPL full adder is superior and can be applied in the design of adder-based portable electronic products in today's competitive markets.

Fig12: SR-CPL full Adder simulation

References:
[1] Neil H. E. Weste and David Harris, "CMOS VLSI Design: A Circuits and Systems Perspective," 4th ed., Addison Wesley, 2010.
[2] C. N. Marimuthu, P. Thangaraj, and Aswathy Ramesan, "Low power shift and add multiplier design," International Journal of Computer Science and Information Technology, vol. 2, no. 3, June 2010.
[3] Marc Hunger and Daniel Marienfeld, "New Self-Checking Booth Multipliers," International Journal of Applied Mathematics and Computer Science,

Fig5: Transmission Gate Full Adder Design




2008, vol. 18, no. 3, pp. 319-328.
[4] C. Jaya Kumar and R. Saravanan, "VLSI Design for Low Power Multiplier using Full Adder," European Journal of Scientific Research, ISSN 1450-216X, vol. 72, no. 1, 2012, pp. 5-16.
[5] Ravi Nirlakalla, Thota Subba Rao, and Talari Jayachandra Prasad, "Performance Evaluation of High Speed Compressors for High Speed Multipliers," Serbian Journal of Electrical Engineering, vol. 8, no. 3, November 2011, pp. 293-306.
[7] G. E. Sobelman and D. L. Raatz, "Low-Power Multiplier Design Using Delayed Evaluation," Proc. International Symposium on Circuits and Systems, 1995, pp. 1564-1567.
[8] T. Sakuta, W. Lee, and P. T. Balsara, "Delay Balanced Multipliers for Low Power/Low Voltage DSP Core," Proc. IEEE Symposium on Low Power Electronics, 1995, pp. 36-37.




FPGA Implementation of Various Security Based Tollgate Systems Using the ANPR Technique

Author 1: S. SHASHIDHAR, Email id: sasidhar667@gmail.com, M.Tech in VLSI System Design, Siddarth Institute of Engineering and Technology, Puttur, Chittoor (Dist).

Author 2: S. VAMSEE KRISHNA, Email id: vamseebe@gmail.com, Assistant Prof., Department of ECE, Siddarth Institute of Engineering and Technology, Puttur, Chittoor (Dist).

ABSTRACT: This paper presents the design and development of license plate recognition for automated toll collection. License plate recognition (LPR) is the extraction of license plate information from an image; since it is simpler and faster than the traditional token-based ticket system, it has the potential to replace the existing system. Moreover, it saves users valuable time by reducing the queue length in front of the toll counter. It is used to charge the toll amount automatically and to open and close the toll gate automatically. A license plate is installed on each vehicle; a recognition device at the gate reads this data from the vehicle, compares it with the data in the online database, and grants access accordingly by opening the gate. This data is used to print a daily or monthly bill for toll collection from the vehicles. The model has low complexity and takes less time for license plate segmentation and character recognition. We aim to reduce the time consumed in paying the toll and also to help the RTO and the local police department trace a vehicle in case it was stolen or used for illegal activities. We also aim to increase the security features at the toll gate: nowadays toll gates are the entrance to most cities, so if security at the toll gate is increased, security within the city increases as well. The proposed system has been designed using very high-speed integrated circuit hardware description language (VHDL) and simulated. Finally, it is downloaded into a field-programmable gate array (FPGA) chip and tested on some given scenarios. The FPGA implementation is carried out for one application area: automatic toll collection.

Keywords: License plate recognition

INTRODUCTION: Automatic license plate recognition (LPR) plays an important role in numerous applications such as unattended parking lots, security control of restricted areas, traffic law enforcement, congestion pricing, and automatic toll collection. Owing to the different operating environments, LPR techniques vary from application to application. Pointable cameras produce dynamic scenes as they move. A dynamic scene image may contain multiple license plates or no license plate at all. Moreover, when they do appear in an image, license plates may have arbitrary sizes, orientations, and positions, and if complex backgrounds are involved, detecting license plates can become quite a challenge. Typically, an LPR process consists of two main stages: (1) locating license plates and (2) identifying license numbers. In the first stage, license plate candidates are determined based on the features of license plates. Features commonly employed are derived from the license plate format and from the alphanumeric characters constituting license numbers. The features concerning license plate format include shape, symmetry, height-to-width ratio, colour, texture of greyness, spatial frequency, and variance of intensity values. Character features include lines, blobs, the sign transition of gradient magnitudes, the aspect ratio of characters, the distribution of intervals between characters, and the alignment of characters. In practice, a small set of robust, reliable, and easy-to-detect object features is adequate. The license plate candidates determined in the locating stage are examined in the license number identification stage, which involves two major tasks: character separation and character recognition. Character separation has in the past been accomplished by techniques such as projection, morphology, relaxation labelling, connected components, and blob colouring. Since the projection method assumes the orientation of the license plate is known, and the morphology method requires knowing the sizes of the characters, a hybrid of connected components and blob colouring techniques is considered for character separation. For this, we develop our own character recognition technique, which is based on the disciplines of both artificial neural networks and mechanics.
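Character separation by connected components, one of the techniques named above, can be sketched with a simple 4-connected flood fill; the tiny binary image below is invented for illustration:

```python
# Sketch of character separation by connected-component labeling:
# a 4-connectivity flood fill counts foreground blobs in a binary image.
# The sample "plate" image is made up for illustration.

def connected_components(img):
    """Count 4-connected foreground (1) regions in a binary image."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 1 and not seen[r][c]:
                count += 1                       # new blob found
                stack = [(r, c)]
                while stack:                     # flood-fill this blob
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and img[y][x] == 1 and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

# two separate blobs, e.g. two plate characters
plate = [[1, 1, 0, 0, 1],
         [1, 0, 0, 0, 1],
         [0, 0, 0, 0, 1]]
print(connected_components(plate))   # prints 2
```

In a full ANPR pipeline each labeled blob would then be cropped and passed to the recognizer, with blob size and aspect ratio used to reject non-character regions.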




License Plate Recognition: Most number plate detection algorithms fall into more than one category based on the techniques used. To detect a vehicle number plate, the following factors should be considered:


(1) Plate size: a plate can be of different sizes in a vehicle image.
(2) Plate location: a plate can be located anywhere on the vehicle.
(3) Plate background: a plate can have different background colours depending on the vehicle type. For example, a government vehicle number plate might have a different background than other public vehicles.
(4) Screw: a plate may have a screw, which can be mistaken for a character.

A number plate can be extracted using image segmentation techniques, of which many are available in the literature. Most methods use image binarization; some authors use Otsu's method to binarize the grey-scale image obtained from the colour image. Some plate segmentation algorithms are based on colour segmentation, and a study of license plate location based on colour segmentation is discussed in the literature. The following sections explain common number plate extraction methods, followed by a detailed discussion of the image segmentation techniques adopted in the ANPR/LPR literature.

Fig1: License Plate recognition

Templates similar to the number plate character are identified by comparison with the characters stored in the database. Extracted number plate characters may contain noise or be broken, and may also be inclined. Template matching is a simple and straightforward recognition technique in which the similarity between the character and the template is measured. Template matching is performed after resizing the extracted character to the template size. Each template scans the character column by column to calculate the normalized cross-correlation; the template with the maximum value is the most similar one. Template matching is useful for recognizing single-font, non-rotated, unbroken, fixed-size characters; if a character is completely different from the template, template matching produces an incorrect recognition result. The disadvantage of recognizing inclined characters is addressed by storing several templates of the same character at different inclination angles.

Fig2: Input Images

Fig3: Different Template Images
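The normalized cross-correlation scoring described above can be sketched as follows; the one-dimensional "pixel" patterns and the two-entry template set are invented for illustration:

```python
# Sketch of template matching by normalized cross-correlation (NCC):
# the template with the maximum score is taken as the recognized character.
# The pixel patterns below are toy data, not real character templates.
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel sequences."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

templates = {"0": [1, 1, 1, 0, 1, 1, 1],    # made-up pixel patterns
             "1": [0, 0, 1, 1, 1, 0, 0]}
character = [1, 1, 1, 0, 1, 1, 0]           # noisy version of "0"

best = max(templates, key=lambda t: ncc(character, templates[t]))
print(best)   # prints 0
```

Because NCC subtracts the means and divides by the standard deviations, the score is insensitive to uniform brightness and contrast changes between the character and the template.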

Database Image





Block Diagram for Toll Gate System:

PRESSURE SENSOR: A piezoelectric sensor, as shown in Figure 5, is a device that uses the piezoelectric effect to measure pressure, acceleration, strain, or force by converting them to an electrical charge. Here a simple pressure sensor is used to protect a door or window: it generates a loud beep when somebody tries to break the door or window, and the alarm stops automatically after three minutes. The circuit uses a piezo element as the pressure sensor.

Fig 4: New Toll Gate System. The proposed system ensures that traffic at the toll gates flows efficiently and that security is also present. The tax collected is based on the load carried by the vehicle. Through this system we can also identify stolen vehicles: reading the information from the license plate recognition system, the computer compares it with the data in the database and grants access accordingly by opening/closing the gate. This data is used to print a daily or monthly bill for toll collection from the vehicles. In this way even stolen vehicles can be identified.

The piezo buzzer exploits the piezoelectric property of piezoelectric crystals. The piezoelectric effect may be the direct piezoelectric effect, in which an electric charge develops as a result of mechanical stress, or the reverse (converse) piezoelectric effect, in which a mechanical deformation develops due to the application of an electric field.

The weight of the vehicle is obtained using the pressure sensor and is shown on the display. A counter is used to count the number of vehicles. The amount due, based on the weight and the count of vehicles, is also displayed on the screen, and the amount to be paid is automatically deducted from the respective bank accounts. If a vehicle carries any kind of gas that should not be carried, the gas sensor detects the gas in the vehicle. In case any such gas is detected, the RF transmitter is used to alert the nearby police station and an alarm is enabled to alert the surrounding area. Afterwards motor one is used to close the gate, and simultaneously motor two is used to pull up the spikes in order to puncture the vehicle's tyres.

Fig5: Piezoelectric sensor. A typical example of the direct piezoelectric effect is the generation of a measurable amount of charge when Lead Zirconate Titanate (PZT) crystals are deformed by mechanical or heat stress. Lead Zirconate Titanate crystals also show the indirect piezoelectric effect, deforming mechanically when an electric potential is applied.
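The direct piezoelectric effect described above can be illustrated numerically: an applied force F yields a charge Q = d33 * F, which appears as a voltage V = Q / C across the element's capacitance. The coefficient and capacitance below are ballpark values for a PZT element, not data from this system:

```python
# Illustrative sketch of the direct piezoelectric effect: force -> charge ->
# open-circuit voltage. Both constants are assumed ballpark values for a PZT
# ceramic element, used only for illustration.

d33 = 400e-12   # charge coefficient, C/N (typical order of magnitude for PZT)
cap = 10e-9     # element capacitance, F (assumed)

def piezo_voltage(force_newton):
    q = d33 * force_newton    # generated charge, coulombs
    return q / cap            # open-circuit voltage, volts

print(f"{piezo_voltage(50.0):.2f} V")   # a 50 N press -> prints 2.00 V
```

In the alarm circuit this voltage spike, rather than its exact value, is what triggers the beep.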





OPERATION:

STRUCTURE AND CONFIGURATION:

The operation of the pressure sensor is very simple. There are two plates, an input plate and an output plate; whenever pressure is applied, as shown in Figure 6, the two plates come into contact and a voltage is produced at the output. This output is sent to the FPGA, which in turn shows the weight of the vehicle on the display, and the toll tax is calculated accordingly. The sensor is made up of a piezoelectric crystal; depending on how the piezoelectric material is cut, three main modes of operation can be distinguished: transverse, longitudinal, and shear.
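The paper states that the toll is computed from the sensed vehicle weight but gives no rate table, so the weight slabs and amounts in this sketch are entirely invented for illustration:

```python
# Hypothetical sketch of the weight-based toll calculation. The slab limits
# and toll amounts are made up; the paper does not specify a rate table.

TOLL_SLABS = [(1000, 50), (5000, 100), (12000, 200)]  # (max weight kg, toll)

def toll_for_weight(weight_kg, overweight_toll=400):
    """Return the toll for the first slab whose limit covers the weight."""
    for max_kg, toll in TOLL_SLABS:
        if weight_kg <= max_kg:
            return toll
    return overweight_toll    # heavier than every slab

print(toll_for_weight(800), toll_for_weight(7500))   # prints 50 200
```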

The structure and configuration of the MQ-2 gas sensor is shown in the figure. The sensor is composed of a micro Al2O3 ceramic tube and a Tin Dioxide (SnO2) sensitive layer; the measuring electrode and heater are fixed into an enclosure made of plastic and a stainless-steel net. The heater provides the working conditions necessary for the sensitive components. The enclosed MQ-2 has six pins: four are used to fetch signals, and the other two provide the heating current.

WIRELESS COMMUNICATION MODULE: A general RF communication block diagram is shown in Figure 8. Most encoders/decoders/microcontrollers are TTL compatible, and user input will mostly be given at TTL logic level. This TTL input is converted into serial data using an encoder or a microcontroller. The serial data can be read directly by the RF transmitter, which performs ASK (in some cases FSK) modulation on it and transmits the data through the antenna. On the receiver side, the RF receiver receives the modulated signal through the antenna, performs processing, filtering, demodulation, etc., and outputs serial data. This serial data is then converted back to TTL-level logic data, which is the same data that the user input.
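The TTL-to-serial conversion step can be sketched as a simple loopback; the LSB-first bit order here is an assumption, since real encoder ICs add their own address and sync bits:

```python
# Sketch of the TTL-to-serial step described above: a parallel TTL word is
# serialized bit by bit for the RF transmitter and reassembled at the
# receiver. The LSB-first framing is assumed for illustration.

def serialize(word, nbits=8):
    """Emit bits LSB-first, as an encoder feeding the RF transmitter might."""
    return [(word >> i) & 1 for i in range(nbits)]

def deserialize(bits):
    """Rebuild the TTL word from the received bit stream."""
    word = 0
    for i, b in enumerate(bits):
        word |= b << i
    return word

tx = 0b10110010
assert deserialize(serialize(tx)) == tx   # loopback: receiver recovers the word
```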

Figure 6: Pressure sensor operation. GAS SENSOR: The flammable gas and smoke sensor can detect the presence of combustible gas and smoke at concentrations from 300 to 10,000 ppm. Owing to its simple analogue voltage interface, the sensor requires only one analogue input pin of the FPGA.

Fig8: RF communication block diagram Results:

Fig7: Gas Sensor (type MQ-2). The product can detect the presence of smoke and send the output in the form of analogue signals. It can function at temperatures ranging from -20 to 50 degrees Celsius and consumes less than 150 mA at 5 V. The sensitive material of the MQ-2 gas sensor is SnO2 (tin dioxide), which has low conductivity in clean air. When the target combustible gas exists, the sensor's conductivity rises along with the gas concentration. The MQ-2 gas sensor has high sensitivity to LPG, propane and hydrogen, and can also be used for methane.
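A hedged sketch of reading such a sensor from an ADC: MQ-2-style sensors form a resistive divider with a load resistor, and datasheets give log-log curves of Rs/R0 versus ppm. All constants below (supply, R_LOAD, the clean-air resistance R0, and the curve coefficients a and b) are illustrative assumptions that would need calibration against the real device and its datasheet.

```python
import math

# The sensor output rises as the SnO2 layer's resistance Rs falls with
# gas concentration. From the divider Vout = Vcc * RL / (Rs + RL) we
# recover Rs, then apply an assumed power-law fit to estimate ppm.

VCC = 5.0        # supply voltage (V)
R_LOAD = 10_000  # load resistor in the divider (ohms), assumed
R0 = 10_000      # sensor resistance in clean air (ohms), from calibration

def sensor_resistance(v_out):
    """Solve the divider equation for Rs."""
    return R_LOAD * (VCC - v_out) / v_out

def ppm_estimate(v_out, a=-2.1, b=2.9):
    """Power-law fit log10(ppm) = a*log10(Rs/R0) + b (illustrative a, b)."""
    ratio = sensor_resistance(v_out) / R0
    return 10 ** (a * math.log10(ratio) + b)

print(round(ppm_estimate(2.5)))
```

At the midpoint reading (2.5 V) the divider gives Rs = R0, so the estimate reduces to 10^b ppm under these assumed coefficients.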

Fig9: Before Segmentation Number Plate

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

254

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

lighting variations, headlight dazzle, and partial vehicle obscuration. The next stage of development will involve increasing the vehicle coverage of the reference database and further trials at sites including parking facilities and public highways. FUTURE WORK: Future analysis of ANPR ought to focus on multi-style plate recognition, video-based ANPR using temporal information, multiple-plate processing, high-definition plate image processing, ambiguous-character recognition, and so on. Four important factors have been proposed to identify the multi-style license plate problem: license plate rotation angle, character line number, the alphanumeric types used, and character formats.

Fig10: After Segmentation Number Plate

REFERENCES
[1] K. S. Leong, M. L. Ng, A. R. Grasso, and P. H. Cole, "Synchronization of RFID readers for dense RFID reader environments," Int. Symp. on Applications and the Internet Workshops (SAINT-W'06), 2005.
[2] M. Buhptani and S. Moradpour, RFID Field Guide - Developing Radio Frequency Identification Systems, Prentice Hall, 2005.

Fig11: Number Plate Recognition

[3] Raj Bridgelall, Senior Member, IEEE, " Introducing a Micro-wireless Architecture for Business Activity Sensing ", IEEE International Conference RFID, April 16-17,2008 [4] Sewon Oh, Joosang Park, Yongioon Lee, "RFID-based Middleware System for Automatic Identification", IEEE International Conference on Service Operations and Logistics, and Information, 2005 [5] Shi-Cho Cha Kuan-Ju Huang Hsiang-Meng Chang, " An Efficient and Flexible Way to Protect Privacy in RFID Environment with Licenses ", IEEE International Conference RFID, April 16-17,2008

Fig12: After Matching

CONCLUSION: The motive of the proposed system is to detect vehicle plate images accurately. The automated real-time vehicle plate recognition system has been demonstrated to provide correct identification of vehicles from images captured by the camera. The system has been shown to be robust to

[6] Urachada Ketprom, Chaichana Mitrpant, Putchapan Lowjun, “Closing Digital Gap on RFID Usage for Better Farm Management”, PICMET 2007, 5-9 August 07


MINIMIZATION OF VOLTAGE SAGS AND SWELLS USING DVR

N. VISWANATH, PG Scholar
Dr. K. RAMA SUDHA, Professor
Department of Electrical Engineering, Andhra University, Visakhapatnam, Andhra Pradesh

ABSTRACT: A power quality problem is an occurrence of non-standard voltage, current or frequency that results in failure or mal-operation of end-user equipment. Utility distribution networks, sensitive industrial loads and critical commercial operations suffer from various types of outages and service interruptions, which can cost significant financial losses. With the restructuring of power systems and the shifting trend towards distributed and dispersed generation, the issue of power quality is going to take on newer dimensions. The present work identifies the prominent concerns in this area and recommends measures that can enhance the quality of power. It describes techniques for correcting supply voltage sag, swell and interruption in a distribution system. At present, a wide range of very flexible controllers, which capitalize on newly available power electronics components, are emerging for custom power applications. Among these, the distribution static compensator and the dynamic voltage restorer are the most effective devices, both of them based on the VSC principle.

KEY WORDS: Dynamic voltage restorer, voltage sag and swell, PWM generator.

I INTRODUCTION

The quality of the output power delivered by the utilities has become a major concern. The most concerning disturbance affecting power quality is voltage sag. Voltage sag is a sudden drop in the Root Mean Square (RMS) [1] voltage and is usually characterized by the retained voltage. The major source of voltage sag is short circuits on the utility lines. Faults from the disturbed process will generate a momentary voltage sag [2][3] in the electrical environment of the end user. The Dynamic Voltage Restorer (DVR) is an effective custom power device which is used to mitigate the impact of voltage sags on sensitive loads in distribution systems. The DVR is used for balancing the load voltage against harmonics and unbalance at the source end, and for eliminating switching transients. The DVR may have to inject voltages of large magnitude, which is completely undesirable. By varying the load voltage angle, if the required nominal voltage is injected at the system frequency, the control operation will be efficient. To realize this, a method for estimating the frequency from the sampled injected voltage signal has been presented.

The DVR consists of an energy storage device, a pulse width modulation inverter, an LC filter and a series transformer. A Pulse Width Modulated (PWM) control technique is applied for inverter switching to produce three-phase 50 Hz sinusoidal voltages at the load terminals. The switching-frequency harmonics generated by the PWM scheme used to synthesize the injected voltage must be prevented from entering the utility and customer system. A low-pass filter is introduced to accomplish this


function. The literature shows that a number of techniques are available for improving power quality problems and for frequency estimation to measure signals which are available in distorted form. Least mean squares, Kalman filtering, the Discrete Fourier Transform, the Smart Discrete Fourier Transform and the Newton method are some of the techniques shown in the literature. Faults in the distribution system may cause voltage sag or swell in large parts of the system. Voltage sags and swells can cause sensitive equipment to fail and create a large current unbalance that trips the circuit breakers. These effects can be very expensive for the customer. To avoid equipment damage, there are many different methods to mitigate voltage sags and swells, but the use of a DVR is considered to be the most cost-efficient method. A DVR with a PI controller has a simple structure and offers satisfactory performance over a wide range of operating conditions. The main problem of conventional controllers [3][4] is the correct tuning of the controller gains: when there are variations in the system parameters and operating conditions, the controller may not provide the required control performance with fixed gains.

The power quality problem is the main concern in the electricity industry. Power quality covers a wide range of disturbances such as voltage sags/swells, flicker, harmonic distortion, impulse transients and interruptions, and the majority of power quality problems are due to different fault conditions. These conditions cause voltage sags. A voltage sag can occur at any instant of time, with an amplitude ranging from 10-90% and a duration lasting from half a cycle to one minute. It is
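The sag definition above (retained voltage between 10% and 90% of nominal, duration from half a cycle to one minute) can be expressed as a small classifier. The 50 Hz system frequency and the 110% swell threshold are assumptions added for illustration.

```python
# Classify an RMS disturbance per the magnitude/duration ranges quoted
# in the text. F_NOM and the swell threshold are illustrative choices.

F_NOM = 50.0  # system frequency in Hz (assumed)

def classify(retained_pu, duration_s):
    """retained_pu: retained voltage in per-unit; duration_s: seconds."""
    if duration_s < 0.5 / F_NOM or duration_s > 60:
        return "outside sag duration range"
    if 0.10 <= retained_pu <= 0.90:
        return "voltage sag"
    if retained_pu > 1.10:
        return "voltage swell"
    return "normal / minor deviation"

print(classify(0.55, 0.2))   # a 45 % sag for 10 cycles
print(classify(1.25, 0.2))   # a swell of the same duration
```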


II COMPENSATION OF VOLTAGE SAG USING DVR

The DVR is connected between a terminal bus and a load bus. The control technique to be adopted depends on the type of load, as some loads are sensitive only to magnitude changes whereas others are sensitive to both magnitude and phase angle shift. Control techniques that utilize real and reactive power compensation are generally classified as pre-sag compensation, in-phase compensation and the energy optimization technique.

Fig 1: Structure of Dynamic Voltage Restorer

The single line diagram of the test system is shown in Fig. 2. The voltage source is connected to a feeder with an impedance of

Rs + jXs    (1)

The load is balanced and the impedance of the load is given by

RL + jXL    (2)

Fig. 3 shows the test system with a three-phase fault and the single line diagram of the DVR connected to the distribution system, where:

Vs is the source voltage in volts;
Vt is the voltage at the point of common coupling in volts;
VL is the load voltage in volts;
Rs + jXs is the impedance of the feeder in ohms;
RL + jXL is the load impedance in ohms;
Is is the source current and IL is the load current.

When the source voltage drops or increases, the dynamic voltage restorer injects a series voltage through the injection transformer so that the desired load [11] voltage magnitude can be maintained. The series injected voltage of the DVR, Vk, can be written as:

Vk = Vt + Vl    (3)

where Vk is the series voltage injected into the distribution system such that it mitigates the voltage sag and regulates the load bus voltage Vl to a pre-specified reference value Vl*. The reference voltage of the DVR can be written as

Vk* = Vt + Vl*    (4)

Fig.2. Single line diagram of test system
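As a numeric illustration of series compensation, the sketch below uses the common textbook convention Vinj = Vref - Vt (the injection makes up the deficit between the reference load voltage and the sagged terminal voltage); the 230 V level and the 30% sag depth are illustrative, not the paper's test-system values.

```python
import cmath

# Phasor sketch of in-phase series compensation. The DVR injects the
# difference between the reference load voltage and the sagged terminal
# voltage; sign convention V_inj = V_ref - V_t is assumed here.

V_REF = cmath.rect(230.0, 0.0)  # desired load voltage: 230 V at 0 rad

def dvr_injection(sag_depth, phase_jump_rad=0.0):
    """Required injection for a sag of given depth (and optional jump)."""
    v_t = cmath.rect(230.0 * (1 - sag_depth), phase_jump_rad)
    return V_REF - v_t

v_inj = dvr_injection(sag_depth=0.3)  # 30 % sag, no phase jump
print(round(abs(v_inj), 1))           # ~69.0 V restores the load bus
```

For loads sensitive to phase jump as well, the same expression automatically yields a complex injection with both magnitude and angle.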

III Function of DVR:

The main function of a DVR is the protection of sensitive loads from voltage sags/swells coming from the network. Therefore, as shown in the figure, the DVR is located on the approach to sensitive loads. If a fault occurs on other lines, the DVR inserts a series voltage [6] VDVR and compensates the load voltage to its pre-fault value. The momentary amplitudes of the three injected phase voltages are controlled so as to eliminate any detrimental effects of a bus fault on the load voltage VL. This means that any differential voltages caused by transient disturbances in the ac feeder will be compensated by an equivalent voltage generated by the converter and injected on the medium-voltage level through the booster transformer.

The DVR works independently of the type of fault or of any event that happens in the system, provided that the whole system remains connected to the supply grid, i.e. the line breaker does not trip. For most practical cases, a more economical design can be achieved by compensating only the positive- and negative-sequence [7] components of the voltage disturbance seen at the input of the DVR. This option is reasonable because, for a typical distribution bus configuration, the zero-sequence part of a disturbance will not pass through the step-down transformer, which presents an infinite impedance to this component.

The DVR has two modes of operation: standby mode and boost mode. In standby mode (VDVR = 0), the booster transformer's low-voltage winding is shorted through the converter. No switching of semiconductors occurs in this mode of operation, because the individual converter legs [8] are triggered so as to establish a short-circuit path for the transformer connection. Therefore, only the comparatively low conduction losses of the semiconductors in this current loop contribute to the losses, and the DVR will be in this mode most of the time. In boost mode (VDVR > 0), the DVR injects a compensation voltage through the booster transformer upon detection of a supply voltage disturbance.

Fig.4. Equivalent Circuit of DVR

Figure 4 shows the equivalent circuit of the DVR. When the source voltage drops or increases, the DVR injects a series voltage Vinj through the injection transformer [9][10] so that the desired load voltage magnitude VL can be maintained. The series injected voltage of the DVR can be written as

Vinj = VL + VS    (5)

where VL is the desired load voltage magnitude and VS is the source voltage during the sag/swell condition. The load current ILoad is given by

ILoad = [(PL + jQL) / VL]*    (6)

where * denotes the complex conjugate.
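The standby/boost decision described above can be sketched as an RMS check on the supply voltage: stay in standby while the voltage is inside a tolerance band, enter boost mode when it leaves it. The 10% band is an illustrative threshold, not a value from the paper.

```python
import math

# Per-window RMS of the supply voltage drives the mode decision.

V_NOM = 230.0  # nominal RMS voltage, illustrative

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def dvr_mode(samples, band=0.10):
    """'standby' inside the tolerance band, 'boost' outside it."""
    v = rms(samples)
    return "boost" if abs(v - V_NOM) > band * V_NOM else "standby"

n = 200  # samples per 50 Hz cycle
healthy = [V_NOM * math.sqrt(2) * math.sin(2 * math.pi * k / n) for k in range(n)]
sagged = [0.6 * s for s in healthy]  # 40 % sag

print(dvr_mode(healthy), dvr_mode(sagged))
```

A practical implementation would compute the RMS over a sliding half-cycle window so the transition into boost mode happens within milliseconds of sag onset.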


Pulse-width modulation (PWM):

Pulse-width modulation (PWM), or pulse-duration modulation (PDM), is a modulation technique that controls the width of the pulse, formally the pulse duration, based on modulator signal information. Although this modulation technique can be used to encode information for transmission, its main use is to allow the control of the power supplied to electrical devices, especially to inertial loads such as motors. In addition, PWM is one of the two principal algorithms used in photovoltaic solar battery chargers, the other being MPPT. The average value of voltage (and current) fed to the load is controlled by turning the switch between supply and load on and off at a fast pace. The longer the switch is on compared to the off periods, the higher the power supplied to the load. The PWM switching frequency has to be much higher than what would affect the load (the device that uses the power), which is to say that the resultant waveform perceived by the load must be as smooth as possible. Typically, switching is done several times a minute in an electric stove, at 120 Hz in a lamp dimmer, from a few kilohertz (kHz) to tens of kHz for a motor drive, and well into the tens or hundreds of kHz in audio amplifiers and computer power supplies.

The term duty cycle describes the proportion of 'on' time to the regular interval or 'period' of time; a low duty cycle corresponds to low power, because the power is off for most of the time. Duty cycle is expressed in percent, 100% being fully on. The main advantage of PWM is that power loss in the switching devices is very low. When a switch is off there is practically no current, and when it is on and power is being transferred to the load, there is almost no voltage drop across the switch. Power loss, being the product of voltage and current, is thus in both cases close to zero.

PWM also works well with digital controls, which, because of their on/off nature, can easily set the needed duty cycle. PWM has also been used in certain communication systems, where its duty cycle has been used to convey information over a communications channel.

Fig 5: An Example of PWM in an AC Motor Driver
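The duty-cycle relation described above (the load sees the average of the switched waveform, Vavg = D * Vsupply) can be checked numerically; the 12 V supply and 25% duty cycle are arbitrary illustrative values.

```python
# Generate an ideal PWM waveform and confirm that its average equals
# duty cycle times supply voltage.

V_SUPPLY = 12.0  # illustrative supply voltage

def pwm_wave(duty, period_samples=100, cycles=3):
    """Ideal PWM waveform as a list of voltage samples."""
    on = int(duty * period_samples)
    one_cycle = [V_SUPPLY] * on + [0.0] * (period_samples - on)
    return one_cycle * cycles

wave = pwm_wave(duty=0.25)
v_avg = sum(wave) / len(wave)
print(v_avg)  # 0.25 * 12.0 = 3.0
```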


IV SIMULATION FIGURES

Fig.6. Main Block Diagram of DVR

Fig.7. Control System of the DVR


V SIMULATION RESULTS

The faults in the three-phase source can be eliminated by calculating the phase angle θ, but the calculation of θ sometimes becomes complex. By using the PWM generator, the phase angle can be found easily from the magnitude part alone. Figure 8 shows the three-phase waveform with a fault occurring on phase A. By using the DVR with the PWM generator the fault is eliminated, and the output waveform is shown in Figure 9.


Fig.8.The simulation of the input fault

Fig.9. Simulation result of the output


VI CONCLUSION

In the simulation study, MATLAB Simulink is used to simulate the model of the dual dynamic voltage restorer, an effective custom power device for voltage sag and swell mitigation. The dual DVR controls under different faults without any difficulty and injects the appropriate voltage component to rapidly correct any abnormality in the supply voltage, keeping the load voltage balanced and constant at the nominal value.

VII REFERENCES

[1] F. Badrkhani Ajaei, S. Farhangi, and R. Iravani, "Fault current interruption by the dynamic voltage restorer."
[2] M. H. J. Bollen, Understanding Power Quality Problems: Voltage Sags and Interruptions, ser. IEEE Press Series on Power Engineering. Piscataway, NJ, 2000.
[3] S. S. Choi, B. H. Li, and D. M. Vilathgamuwa, "A comparative study of inverter- and line-side filtering schemes in the dynamic voltage restorer," in Proc. IEEE Power Engineering Society Winter Meeting, 2000, vol. 4, pp. 2967-2972.
[4] N. G. Hingorani, "Introducing custom power," IEEE Spectr., vol. 32, no. 6, pp. 41-48, Jun. 1995.
[5] J. G. Nielsen, F. Blaabjerg, and N. Mohan, "Control strategies for dynamic voltage restorer compensating voltage sags with phase jump," in Proc. IEEE APEC, 2001, pp. 1267-1273.
[6] A. Ghosh and G. Ledwich, "Structures and control of a dynamic voltage regulator (DVR)," in Proc. IEEE Power Eng. Soc. Winter Meeting, Columbus, OH, 2001.
[7] A. Ghosh and G. Ledwich, Power Quality Enhancement Using Custom Power Devices. Norwell, MA: Kluwer, 2002.
[8] G. J. Li, X. P. Zhang, S. S. Choi, T. T. Lie, and Y. Z. Sun, "Control strategy for dynamic voltage restorers to achieve minimum power injection without introducing sudden phase shift," Inst. Eng. Technol. Gen. Transm. Distrib., vol. 1, no. 5, pp. 847-853, 2007.
[9] S. S. Choi, B. H. Li, and D. M. Vilathgamuwa, "Design and analysis of the inverter-side filter used in the dynamic voltage restorer," IEEE Trans. Power Del., vol. 17, no. 3, pp. 857-864, Jul. 2002.
[10] B. H. Li, S. S. Choi, and D. M. Vilathgamuwa, "Design considerations on the line-side filter used in the dynamic voltage restorer," Proc. Inst. Elect. Eng., Gen. Transm. Distrib., vol. 148, no. 1, pp. 1-7, Jan. 2001.
[11] S. S. Choi, B. H. Li, and D. M. Vilathgamuwa, "Dynamic voltage restoration with minimum energy injection," IEEE Trans. Power Syst., vol. 15, no. 1, pp. 51-57, Feb. 2000.
[12] Y. W. Li, D. M. Vilathgamuwa, P. C. Loh, and F. Blaabjerg, "A dual-functional medium voltage level DVR to limit downstream fault currents," IEEE Trans. Power Electron., vol. 22, no. 4, pp. 1330-1340, Jul. 2007.
[13] Y. W. Li, D. M. Vilathgamuwa, F. Blaabjerg, and P. C. Loh, "A robust control scheme for medium-voltage-level DVR implementation," IEEE Trans. Ind. Electron., vol. 54, no. 4, pp. 2249-2261, Aug. 2007.
[14] S. S. Choi, T. X. Wang, and D. M. Vilathgamuwa, "A series compensator with fault current limiting function," IEEE Trans. Power Del., vol. 20, no. 3, pp. 2248-2256, Jul. 2005.
[15] B. Delfino, F. Fornari, and R. Procopio, "An effective SSC control scheme for voltage sag compensation," IEEE Trans. Power Del., vol. 20, no. 3, pp. 2100-2107, Jul. 2005.
[16] C. Zhan, V. K. Ramachandaramurthy, A. Arulampalam, C. Fitzer, S. Kromlidis, M. Barnes, and N. Jenkins, "Dynamic voltage restorer based on voltage-space-vector PWM control," IEEE Trans. Ind. Appl., vol. 37, no. 6, pp. 1855-1863, Nov./Dec. 2001.
[17] D. M. Vilathgamuwa, P. C. Loh, and Y. Li, "Protection of microgrids during utility voltage sags," IEEE Trans. Ind. Electron., vol. 53, no. 5, pp. 1427-1436, Oct. 2006.
[18] F. Badrkhani Ajaei, S. Afsharnia, A. Kahrobaeian, and S. Farhangi, "A fast and effective control scheme for the dynamic voltage restorer," IEEE Trans. Power Del., vol. 26, no. 4, pp. 2398-2406, Oct. 2011.
[19] M. S. Sachdev and M. A. Barlbeau, "A new algorithm for digital impedance relays," IEEE Trans. Power App. Syst., vol. PAS-98, no. 6, pp. 2232-2240, Nov./Dec. 1979.


POWER-QUALITY IMPROVEMENT OF GRID INTERCONNECTED WIND ENERGY SOURCE AT THE DISTRIBUTION LEVEL

BHARTH KUMAR.D, PG Scholar
Dr. R. VIJAYA SANTHI, Asst. Professor
DEPARTMENT OF ELECTRICAL ENGINEERING, Andhra University, Visakhapatnam, Andhra Pradesh.

ABSTRACT: Power electronic converters are increasingly used to connect renewable energy resources to distribution systems. But due to their intermittent nature, these sources may pose a threat to the network in terms of stability, voltage regulation and power quality. The present work explains the concept of grid interconnection of a wind farm using a grid-interfacing inverter, along with the facility of power quality improvement. This paper presents a control strategy for achieving maximum benefits from the grid-interfacing inverter when it is installed in a 3-phase 4-wire distribution system. The inverter is controlled to perform as a multi-function device with active power filter functionality, using a hysteresis current control technique. With such a control, the combination of the grid-interfacing inverter and the 3-phase 4-wire linear/non-linear unbalanced load at the point of common coupling appears as a balanced linear load to the grid. This new control concept is demonstrated with an extensive MATLAB/Simulink simulation study.

energy solution. Over the past decade, there has been enormous interest in many countries in renewable energy for power generation. Market liberalization and government incentives have further accelerated the growth of the renewable energy sector. A renewable resource is a natural resource which can be replenished with the passage of time, either through biological reproduction or other naturally recurring processes. Renewable resources are a part of Earth's natural environment and the largest components of its ecosphere. Renewable resources may be the source of power for renewable energy. Renewable energy is generally defined as energy that comes from resources which are naturally replenished on a human timescale, such as sunlight, wind, rain, tides, waves and heat. Renewable energy replaces conventional fuels such as coal, nuclear and natural gas in three distinct areas. The extensive use of power-electronics-based equipment and non-linear loads at the PCC generates harmonic currents, which may deteriorate the quality of power [1], [2]. In [3], an inverter operates as an active inductor at a certain frequency to absorb the harmonic current. A similar approach, in which a shunt active filter acts as an active conductance to damp out the harmonics in the distribution network, is proposed in [4]. A control strategy [5]

Index Terms—Active power filter (APF), distributed generation (DG), distribution system, grid interconnection, power quality (PQ), renewable energy.

I INTRODUCTION Electric utilities and end users of electric power are becoming increasingly concerned about meeting the growing energy demand. Seventy five percent of total global energy demand is supplied by the burning of fossil fuels. But increasing air pollution, global warming concerns, diminishing fossil fuels and their increasing cost have made it necessary to look towards renewable sources as a future

for renewable interfacing inverter based on p-q theory is proposed.


(small hydro, modern biomass, wind, solar, geothermal, and bio-fuels) account for another 3% and are growing rapidly. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond. Wind power, for example, is growing at the rate of 30% annually, with a worldwide installed capacity of 282,482 megawatts (MW) at the end of 2012.

Power generation: Renewable energy provides 19% of electricity generation worldwide. Renewable power generators are spread across many countries, and wind power alone already provides a significant share of electricity in some areas: for example, 14% in the U.S. state of Iowa, 40% in the northern German state of Schleswig-Holstein, and 49% in Denmark. Some countries get most of their power from renewables, including Iceland (100%), Norway (98%), Brazil (86%), Austria (62%) and Sweden (54%).

Renewable energy resources mainly can be used for power generation, heating and transportation purposes. In case of power generation, the generation can take place either as a separate unit to feed a particular area or the renewable energy resources can be interconnected to the grid at the transmission level, subtransmission level and distribution level to enhance load supplying capability of the grid. Large wind farms, concentrated solar power photovoltaic system, bio-power, hydro power, geothermal power are interconnected at the transmission and subtransmission levels. Photovoltaic system, small wind farm, hydro power and fuel cells are interconnected at the distribution level. The resources are connected to grid using grid interfacing inverter by suitable controlling of the inverter switches. But their highly intermittent nature may result in instability and power quality problems; hence an appropriate control circuitry is required.

Heating: Solar hot water makes an important contribution to renewable heat in many countries, most notably in China, which now has 70% of the global total (180 GWth). Most of these systems are installed on multi-family apartment buildings and meet a portion of the hot water needs of an estimated 50–60 million households in China. Worldwide, total installed solar water heating systems meet a portion of the water heating needs of over 70 million households. The use of biomass for heating continues to grow as well. In Sweden, national use of biomass energy has surpassed that of oil. Transport fuels: Renewable biofuels have contributed to a significant decline in oil consumption in the United States since 2006. The 93 billion liters of bio-fuels produced worldwide in 2009 displaced the equivalent of an estimated 68 billion liters of gasoline, equal to about 5% of world gasoline production.

II SYSTEM DESCRIPTION: The proposed system consists of an RES connected to the dc-link of a grid-interfacing inverter as shown in Fig. 1. The voltage source inverter is a key element of a DG system, as it interfaces the renewable energy source to the grid and delivers the

About 16% of global final energy consumption presently comes from renewable resources, with 10% of all energy from traditional biomass, mainly used for heating, and 3.4% from hydroelectricity. New renewable

generated power. The RES may be a DC source or an AC source with a rectifier coupled to the dc-link. Usually, fuel cell and photovoltaic energy sources generate power at variable low dc voltage, while variable-speed wind turbines generate power at variable ac voltage. Thus, the power generated from these renewable sources needs power conditioning (i.e., dc/dc or ac/dc) before being connected to the dc-link [6]-[8]. The dc-capacitor decouples the RES from the grid and also allows independent control of the converters on either side of the dc-link.


Fig.1. Schematic of proposed renewable based distributed generation system

A. DC-Link Voltage and Power Control Operation:

Due to the intermittent nature of RES, the generated power is of variable nature. The dc-link plays an important role in transferring this variable power from the renewable energy source to the grid. RES are represented as current sources connected to the dc-link of a grid-interfacing inverter. Fig. 2 shows the systematic representation of power transfer from the renewable energy resources to the grid via the dc-link.

Fig. 2. DC-Link equivalent diagram.

The current injected by the renewable source into the dc-link at voltage level Vdc can be given as

Idc1 = PRES / Vdc    (1)

where PRES is the power generated from the RES.

The current flow on the other side of the dc-link can be represented as

Idc2 = Pinv / Vdc = (PG + PLoss) / Vdc    (2)

where Pinv, PG and PLoss are the total power available at the grid-interfacing inverter side, the active power supplied to the grid and the inverter losses, respectively. If the inverter losses are negligible, then PRES = PG.

B. Control of Grid Interfacing Inverter:

Fig.3. Block diagram representation of grid-interfacing inverter control.
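A numeric sketch of the dc-link balance in (1)-(2): with inverter losses neglected, the renewable-side and grid-side dc currents coincide. The voltage and power values below are illustrative, not the paper's simulation parameters.

```python
# Dc-link power balance: Idc1 = P_RES / V_dc on the renewable side,
# Idc2 = (P_G + P_loss) / V_dc on the inverter side.

V_DC = 700.0      # dc-link voltage (V), illustrative
P_RES = 21_000.0  # power from the renewable source (W), illustrative
P_LOSS = 0.0      # inverter losses neglected, per the text

i_dc1 = P_RES / V_DC          # current injected by the RES
p_g = P_RES - P_LOSS          # active power delivered to the grid
i_dc2 = p_g / V_DC            # current drawn by the inverter side

print(i_dc1, i_dc2)  # 30.0 30.0
```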

The control diagram of the grid-interfacing inverter for a 3-phase 4-wire system is shown in Fig. 3. The fourth leg of the inverter is used to compensate the neutral current of the load. The main aim of the proposed approach is to regulate the power at the PCC during: 1) PRES = 0; 2) PRES < total load power (PL); and 3) PRES > PL. While performing the power management operation, the inverter is actively controlled in such a way that it always draws/supplies fundamental active power from/to the grid. If the load connected to the PCC is non-linear or unbalanced or a combination of both, the given control approach also compensates the harmonics, unbalance, and neutral current. The duty ratio of the inverter switches is varied in a power cycle such that the combination of load and inverter injected power appears as a balanced resistive load to the grid. The regulation of the dc-link voltage carries the information regarding the exchange of active power between the renewable source and the grid. Thus, the output of the dc-link voltage regulator results in an active current Im. The multiplication of the active current component (Im) with the unity grid voltage vector templates (Ua, Ub and Uc) generates the reference grid currents (Ia*, Ib* and Ic*). The reference grid neutral current (In*) is set to zero, being the instantaneous sum of balanced grid currents. The grid synchronizing angle (θ) obtained from a phase locked loop (PLL) is used to generate the unity vector templates.
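The control law just described (a discrete PI regulator on the dc-link voltage error produces the active current magnitude Im, which then scales unit sine templates into the reference grid currents, with In* held at zero) can be sketched as follows. The gains and operating point are illustrative, not the paper's values.

```python
import math

# One step of the dc-link PI regulator plus reference-current generation.

KP, KI = 0.5, 0.05  # illustrative proportional/integral gains

def pi_step(im_prev, err, err_prev):
    """Im(n) = Im(n-1) + Kp*(err(n) - err(n-1)) + Ki*err(n)."""
    return im_prev + KP * (err - err_prev) + KI * err

def reference_currents(i_m, theta):
    """Ia*, Ib*, Ic* from unit sine templates; In* is fixed at zero."""
    u = [math.sin(theta),
         math.sin(theta - 2 * math.pi / 3),
         math.sin(theta + 2 * math.pi / 3)]
    return [i_m * x for x in u] + [0.0]

i_m = pi_step(im_prev=10.0, err=2.0, err_prev=1.0)  # 10 + 0.5*1 + 0.05*2
refs = reference_currents(i_m, theta=math.pi / 2)
print(round(i_m, 2), [round(r, 2) for r in refs])
```

In the full scheme, theta would come from the PLL and the error from the filtered dc-link voltage, sampled once per control period.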

Ua = sin(θ) ………………. (3)

Ub = sin(θ − 2π/3) ………………. (4)

Uc = sin(θ + 2π/3) ………………. (5)

The actual dc-link voltage (Vdc) is sensed and passed through a second-order low pass filter (LPF) to eliminate the presence of switching ripples on the dc-link voltage and in the generated reference current signals. The difference of this filtered dc-link voltage and the reference dc-link voltage (Vdc*) is given to a discrete-PI regulator to maintain a constant dc-link voltage under varying generation and load conditions. The dc-link voltage error (Vdcerr(n)) at the nth sampling instant is given as:

Vdcerr(n) = Vdc*(n) − Vdc(n) ………………. (6)

The output of the discrete-PI regulator at the nth sampling instant is expressed as

Im(n) = Im(n−1) + KPVdc (Vdcerr(n) − Vdcerr(n−1)) + KIVdc Vdcerr(n) ………………. (7)

where KPVdc and KIVdc are the proportional and integral gains of the dc-voltage regulator. The instantaneous values of the reference three-phase grid currents are computed as

Ia* = Im · Ua ………………. (8)

Ib* = Im · Ub ………………. (9)

Ic* = Im · Uc ………………. (10)

The neutral current, present if any due to the loads connected to the neutral conductor, should be compensated by the fourth leg of the grid-interfacing inverter and thus should not be drawn from the grid. In other words, the reference current for the grid neutral current is considered as zero and can be expressed as

In* = 0 ………………. (11)

The reference grid currents (Ia*, Ib*, Ic* and In*) are compared with the actual grid currents (Ia, Ib, Ic and In) to compute the current errors as
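The template generation, discrete-PI regulation, and reference current computation described above (Eqs. (3)-(11)) can be sketched numerically as follows. This is a minimal illustration, not the authors' implementation; `kp` and `ki` stand for the regulator gains KPVdc and KIVdc.

```python
import math

def unit_templates(theta):
    """Unity grid-voltage vector templates, Eqs. (3)-(5)."""
    ua = math.sin(theta)
    ub = math.sin(theta - 2 * math.pi / 3)
    uc = math.sin(theta + 2 * math.pi / 3)
    return ua, ub, uc

def pi_regulator(im_prev, err, err_prev, kp, ki):
    """Discrete-PI output at the nth sampling instant, Eq. (7)."""
    return im_prev + kp * (err - err_prev) + ki * err

def reference_currents(im, theta):
    """Reference grid currents, Eqs. (8)-(11); the neutral reference is zero."""
    ua, ub, uc = unit_templates(theta)
    return im * ua, im * ub, im * uc, 0.0
```

For a balanced set of templates the three phase references sum to zero, which is why the grid neutral reference In* can be set to zero.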




Iaerr = Ia* − Ia ………………. (12)

Iberr = Ib* − Ib ………………. (13)

Icerr = Ic* − Ic ………………. (14)

Inerr = In* − In ………………. (15)

These current errors are given to a hysteresis current controller. The hysteresis controller then generates the switching pulses (P1 to P8) for the gate drives of the grid-interfacing inverter. The average model of the 4-leg inverter can be obtained by the following state space equations:

dIinva/dt = (Vinva − Va)/Lsh ………………. (16)

dIinvb/dt = (Vinvb − Vb)/Lsh ………………. (17)

dIinvc/dt = (Vinvc − Vc)/Lsh ………………. (18)

dIinvn/dt = (Vinvn − Vn)/Lsh ………………. (19)

where Vinva, Vinvb, Vinvc and Vinvn are the three-phase ac switching voltages generated on the output terminal of the inverter. These inverter output voltages can be modeled in terms of the instantaneous dc bus voltage and the switching pulses of the inverter as

Vinva = (P1 − P4) Vdc/2 ………………. (20)

Vinvb = (P3 − P6) Vdc/2 ………………. (21)

Vinvc = (P5 − P2) Vdc/2 ………………. (22)

Vinvn = (P7 − P8) Vdc/2 ………………. (23)

Similarly, the charging currents Idca, Idcb, Idcc and Idcn on the dc bus due to each leg of the inverter can be expressed as

Idca = Iinva (P1 − P4) ………………. (24)

Idcb = Iinvb (P3 − P6) ………………. (25)

Idcc = Iinvc (P5 − P2) ………………. (26)

Idcn = Iinvn (P7 − P8) ………………. (27)

The net current drawn from the dc link is therefore

Idc = Idca + Idcb + Idcc + Idcn ………………. (28)

The switching pattern of each IGBT inside the inverter can be formulated on the basis of the error between the actual and reference current of the inverter, which can be explained as follows: if Iinva < (Iinva* − hb), then the upper switch S1 will be OFF (P1 = 0) and the lower switch S4 will be ON (P4 = 1) in the phase “a” leg of the inverter; if Iinva > (Iinva* + hb), then the upper switch S1 will be ON (P1 = 1) and the lower switch S4 will be OFF (P4 = 0) in the phase “a” leg of the inverter, where hb is the width of the hysteresis band. On the same principle, the switching pulses for the other remaining three legs can be derived.
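The per-leg hysteresis rule described in the text can be sketched as follows. This is a simplified sketch with illustrative names; the sign convention follows the text above and depends on the assumed direction of the inverter current.

```python
def hysteresis_gate(i_inv, i_ref, hb, prev):
    """Hysteresis-band decision for one inverter leg.

    Returns the pair (P_upper, P_lower) of gate signals; `prev` is the
    previous pair, held while the current stays inside the band.
    """
    if i_inv < i_ref - hb:      # below the lower band: upper OFF, lower ON
        return (0, 1)
    if i_inv > i_ref + hb:      # above the upper band: upper ON, lower OFF
        return (1, 0)
    return prev                 # inside the band: keep the previous state
```

The same decision, applied to each of the four legs with its own reference current, yields the pulse pairs (P1, P4), (P3, P6), (P5, P2) and (P7, P8).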




III SIMULATION RESULTS:

Fig. 4 Simulation result of Grid Voltages

Fig. 5 Simulation result of Grid Current

Fig. 6 Simulation result of unbalanced load currents

Fig. 7 Simulation result of PQ-Grid

Fig. 8 Simulation result of PQ-Load

Fig. 9 Simulation result of Grid Voltages

Fig. 10 Simulation result of dc-link Voltages




IV CONCLUSION

An extensive literature survey, with fruitful discussions, has been carried out on the control of the grid-interfacing inverter to improve power quality. From the literature survey it is found that the hysteresis current control technique is well suited to meet the requirements. The proposed scheme is validated by connecting the controller to the 3-phase 4-wire system. This approach eliminates the need for additional power conditioning equipment to improve the quality of power at the PCC.

Simulations have been carried out in the MATLAB/Simulink environment to validate the performance of the proposed system. The THD in the grid current with the grid-interfacing inverter comes out to be 1.64% for an unbalanced non-linear load and 1.67% for an unbalanced linear load, which is well within the 5 percent limit laid down in the IEEE standard.

V REFERENCES

[1] M. Singh, V. Khadkikar, A. Chandra, and R. K. Varma, “Grid interconnection of renewable energy sources at the distribution level with power-quality improvement features,” IEEE Trans. Power Del., vol. 26, no. 1, pp. 307–315, Jan. 2011.

[2] J. M. Guerrero, L. G. de Vicuna, J. Matas, M. Castilla, and J. Miret, “A wireless controller to enhance dynamic performance of parallel inverters in distributed generation systems,” IEEE Trans. Power Electron., vol. 19, no. 5, pp. 1205–1213, Sep. 2004.

[3] J. H. R. Enslin and P. J. M. Heskes, “Harmonic interaction between a large number of distributed power inverters and the distribution network,” IEEE Trans. Power Electron., vol. 19, no. 6, pp. 1586–1593, Nov. 2004.

[4] U. Borup, F. Blaabjerg, and P. N. Enjeti, “Sharing of nonlinear load in parallel-connected three-phase converters,” IEEE Trans. Ind. Appl., vol. 37, no. 6, pp. 1817–1823, Nov./Dec. 2001.

[5] P. Jintakosonwit, H. Fujita, H. Akagi, and S. Ogasawara, “Implementation and performance of cooperative control of shunt active filters for harmonic damping throughout a power distribution system,” IEEE Trans. Ind. Appl., vol. 39, no. 2, pp. 556–564, Mar./Apr. 2003.

[6] J. P. Pinto, R. Pregitzer, L. F. C. Monteiro, and J. L. Afonso, “3-phase 4-wire shunt active power filter with renewable energy interface,” presented at the Conf. IEEE Renewable Energy & Power Quality, Seville, Spain, 2007.

[7] F. Blaabjerg, R. Teodorescu, M. Liserre, and A. V. Timbus, “Overview of control and grid synchronization for distributed power generation systems,” IEEE Trans. Ind. Electron., vol. 53, no. 5, pp. 1398–1409, Oct. 2006.

[8] J. M. Carrasco, L. G. Franquelo, J. T. Bialasiewicz, E. Galván, R. C. P. Guisado, M. Á. M. Prats, J. I. León, and N. M. Alfonso, “Power electronic systems for the grid integration of renewable energy sources: A survey,” IEEE Trans. Ind. Electron., vol. 53, no. 4, pp. 1002–1016, Aug. 2006.

[9] B. Renders, K. De Gusseme, W. R. Ryckaert, K. Stockman, L. Vandevelde, and M. H. J. Bollen, “Distributed generation for mitigating voltage dips in low-voltage distribution grids,” IEEE Trans. Power Del., vol. 23, no. 3, pp. 1581–1588, Jul. 2008.




An Enhancement for Content Sharing Over Smartphone-Based Delay Tolerant Networks

P. Guru Tejaswini, Student, Audisankara Institute of Technology, Gudur, Nellore, Andhra Pradesh, India. tejaswini.guru582@gmail.com

K. Phalguna Rao, HOD, Audisankara Institute of Technology, Gudur, Nellore, Andhra Pradesh, India.

Abstract: Over the last few years the number of Smartphone users has increased swiftly, so peer-to-peer ad hoc content sharing is liable to occur frequently. Because the usual data delivery schemes are not efficient for content sharing given the intermittent connectivity among Smartphones, new content sharing mechanisms should be developed. To achieve data delivery in such challenging environments, researchers have proposed the use of encounter-based routing, or store-carry-forward protocols, in which a node stores a message and carries it until a forwarding opportunity arises through an encounter with another node. Earlier studies in this field focused on whether two nodes would encounter each other, and on the place and time of the encounter. This paper proposes discover-predict-deliver as an efficient content sharing scheme. Here we make use of a hidden Markov model to predict the future mobility of individuals. The existing system results in approximately a 2 percent CPU overhead and diminishes the Smartphone battery lifetime by 15 percent, so to minimize energy consumption we propose the use of sensor scheduling schemes in an opportunistic context.

Key words: encounter-based routing, content sharing, sensor scheduling schemes, hidden Markov model

1. Introduction

The number of Smartphone users has been rapidly increasing day by day [1]. A Smartphone offers more advanced computing capability and connectivity than a basic phone. As Smartphone interfaces are handy and accessible, users can share any type of multimedia content, such as images and videos. But content sharing is burdensome: it involves numerous user activities. To minimize the user's burden we can rely upon an ad hoc technique of peer-to-peer content sharing. A mobile ad hoc network is characterized by multi-hop wireless communication between mobile devices. Smartphones have several network interfaces, such as Bluetooth and Wi-Fi, so ad hoc networks can easily be constructed with them. Connectivity among Smartphones is likely to be intermittent because of the movement patterns of the carriers and signal transmission phenomena. A wide variety of store-carry-forward protocols have therefore been proposed by researchers.

Delay Tolerant Network (DTN) routing protocols attain better performance in this setting than usual ad hoc routing protocols. Among the proposed DTN routing protocols, Epidemic routing is a vital solution. In Epidemic routing by Vahdat et al. [2], messages are forwarded to each encountered node that does not have a replica of the same message. This solution exhibits the best performance in terms of delivery rate and latency, but it consumes abundant resources, such as storage, bandwidth, and energy. This paper focuses mainly on the efficiency of content discovery and delivery to the targeted destination. Here we suggest recommendation-based discover-predict-deliver (DPD) as an efficient and effective content sharing scheme for Smartphone-based DTNs. DPD assumes that Smartphones can connect when they are in close proximity, that is, where the Smartphone users reside for a longer period. Earlier studies have shown that Smartphone users stay indoors for long periods, where GPS cannot be accessed.
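The epidemic replication rule described above can be sketched as a set operation on message buffers. This is a minimal sketch of the forwarding step from [2], with buffers modeled simply as sets of message ids (the names are illustrative).

```python
def epidemic_forward(carrier_buffer, peer_buffer):
    """On an encounter, replicate every message the peer lacks.

    Both buffers are sets of message ids; the peer's buffer is updated
    in place and the set of newly copied ids is returned.
    """
    copied = carrier_buffer - peer_buffer   # messages the peer does not hold
    peer_buffer |= copied                   # replicate them onto the peer
    return copied
```

Because every encounter replicates everything the peer lacks, delivery rate and latency are excellent, but storage and bandwidth grow with every copy, which is exactly the resource cost noted above.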

The objective of our work is to find solutions to the problems in content sharing and to minimize energy consumption using sensor scheduling schemes. Routing in delay-tolerant networking concerns itself with the ability to route data from a source to a destination, a vital ability that all communication networks must have. In these challenging environments, the commonly used ad hoc routing protocols fail to establish routes, because these protocols first try to establish a complete route and then, once the route has been established, forward the actual data. When immediate end-to-end paths are difficult or infeasible to establish, routing protocols should instead take a "store and then forward" approach, where data or a message is moved and stored incrementally across the network, hop by hop, so that it finally arrives at its destination. A general technique used to maximize the likelihood of a message being effectively transferred is to replicate many copies of the message, in the hope that one will succeed in reaching its destination.

2. Related work

A delay tolerant network (DTN) is a mobile network where a contemporaneous source-destination path may not exist between a pair of nodes and messages are forwarded under a store-carry-forward routing hypothesis [6]. The objective of our work is to address the content sharing problem in Smartphone-based DTNs while minimizing energy consumption using sensor scheduling schemes. Content sharing in DTNs involves the following problems:



2.1. Content sharing

In this section we examine the problem of content sharing in delay tolerant networks and describe candidate solutions. As specified in the introduction, we focus on mobile opportunistic networking scenarios where the nodes communicate using the DTN bundle protocol. A few devices in the network store content which they are ready to share with others. All nodes are willing to assist and provide a restricted amount of their local system resources (bandwidth, storage, and processing power) to aid other nodes. Our objective is to permit users to issue queries for content that is stored on other nodes anywhere in the network, and to consider the possibility of such a node acquiring the required information. To ease searching, we suppose that nodes are capable of searching their local storage and finding the appropriate results for a given query.

The content sharing process is divided into two stages: the content discovery phase and the content delivery phase. In the content discovery phase, the user enters a request for content in a content sharing application. The application initially searches for the content in its own database and, if it is not found, creates a query that is forwarded based on the user's request. The content delivery phase commences only when the content is found, and the content is then forwarded to the query originator.

Figure: Processing of incoming query.

2.1.1 Content discovery

In content discovery, most systems focus on how to formulate queries, which depends on assumptions about the format of the content to be discovered. A common protocol should sustain various forms of queries and content, but we abstract away from the actual matching process in order to focus on discovering content in the network. The easiest strategy to discover and deliver content is Epidemic routing. However, due to resource limits, Epidemic routing is often extravagant, so we have to consider methods that limit the system resources used up on both content discovery and delivery. Preferably, a query should only be forwarded to neighbours that hold matching contents or that are on the path to other nodes holding matching content. Different nodes should return non-overlapping responses to the requester. As total knowledge or active coordination is not an option in our setting, a node can only make autonomous forwarding decisions. These autonomous forwarding decisions should attain a fine trade-off between discovery efficiency and required resources. Analogous limitations pertain to content delivery. A few methods proposed by Pitkanen et al. may be used for restraining the distribution of queries. Additionally, we study two substitutes for restraining the distribution of queries: a query distance limit and a query lifetime limit. We employ the controlled replication-based [9] routing scheme alongside a single-copy scheme. The single-copy scheme turns both query lifetime and distance limits into a random walk, and it is not powerful when content-carrier nodes (i.e., destinations) are not known. By contrast, the controlled replication-based scheme dispenses a set of message replicas and avoids the excessive spread of messages.

2.1.2 Content delivery

When query-matching content is discovered, the content-carrying node should transmit only a subset of results. This constraint is needed to limit the amount of resources utilized both locally and globally for sending and storing the responses, and to eliminate potential copies. The query originator sets a limit on both the number of replications and the amount of content that should be produced. When nodes need to forward a query message, the limits incorporated in the query message are used to make the forwarding decision. If the amount of content goes beyond the response limit, the node must select which results to forward.

2.2. Mobility Prediction

Numerous studies have addressed another aspect of content sharing: mobility learning and prediction. BeaconPrint discovers meaningful places by continually looking for stable scans over a time period. PlaceSense senses the arrival at and departure from a place by utilizing pervasive RF-beacons; the system uses a radio beacon's response rates to attain robust beacon detection. EnTracked is a position tracking system for GPS-enabled devices; the system is configurable to realize different trade-offs between energy consumption and robustness. Mobility prediction has been extensively studied in and out of the delay-tolerant networking area. Markov-based schemes cast the problem as a hidden Markov or semi-Markov model and make probabilistic predictions of human mobility. In contrast, neural-network-based schemes try to match the observed user behaviour with earlier observed behaviour and estimate the future based on the observed patterns.

Markov-based schemes are suitable for resource-restricted devices, like Smartphones, owing to their low computation overhead and modest storage requirements. In our work, we develop a mobility learning and prediction method built to offer coarse-grained mobility information with a low computation overhead. Where the complexity of a heavier mobility learning and prediction scheme can be tolerated, the schemes described above can be used to offer fine-grained mobility information.
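A minimal first-order Markov place predictor in the spirit described above can be sketched as follows. The class and method names are illustrative, not from the paper, and a full hidden Markov scheme would additionally model observation noise; this sketch only counts place-to-place transitions and predicts the most frequent successor.

```python
from collections import defaultdict

class MarkovMobility:
    """First-order Markov place predictor with transition counts."""

    def __init__(self):
        # counts[a][b] = number of observed transitions a -> b
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, trajectory):
        """Update transition counts from an ordered list of visited places."""
        for a, b in zip(trajectory, trajectory[1:]):
            self.counts[a][b] += 1

    def predict(self, place):
        """Return the most frequent successor of `place`, or None if unseen."""
        successors = self.counts.get(place)
        if not successors:
            return None
        return max(successors, key=successors.get)
```

The storage is one counter per observed transition pair and prediction is a single max over successors, which is what makes this family of schemes cheap enough for a Smartphone.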



This influences the evaluation of the encounter opportunity between two nodes. For example, if two distinct places are recognized as identical, we may improperly estimate that two nodes will encounter each other when they visit two distinct places. Also, the accurate computation of these quantities relies on the geographical location information of the nodes.

3. Problem Definition

In the existing system the energy consumption is high, so the battery lifetime is reduced. By using sensor scheduling mechanisms, energy consumption can be reduced and the lifespan of the batteries increased.

4. Proposed System

Figure: Mean energy consumption in a day

Figure: learning accuracy

The figure shows the daily energy utilization profile, calculated using the stationary and movement time collected from seven different users over four weeks. The analysis does not include the energy utilization of content exchange, as that mainly depends on the volume and communication pace of the nodes. The typical energy consumptions of GPS, Wi-Fi, and the accelerometer vary: the accelerometer has the maximum energy consumption, as it is used continuously over 24 hours; Wi-Fi energy utilization is due to the scanning of neighbour APs for place recognition; and GPS has a large variance in energy consumption, as it may not be available in all places. In order to minimize energy utilization we use sensor scheduling schemes [10]. Sensor systems have a wide-ranging diversity of prospective and important applications, but there are questions that must be settled for productive use of sensor systems in real applications. Energy saving is one fundamental issue for sensor networks, as most sensors are furnished with non-rechargeable batteries that have limited lifetime. To enhance the lifetime of a sensor network, one vital methodology is to carefully schedule the sensors' work/sleep cycles (or duty cycles). In addition, in cluster-based systems, cluster heads are usually selected in a way that minimizes the aggregate energy utilization, and the role may rotate among the sensors to balance energy utilization. As a rule, these energy-efficient scheduling mechanisms (also called topology-configuration mechanisms) need to satisfy certain application requirements while saving energy, in sensor networks whose design requirements differ from those of conventional wireless systems. Different mechanisms may make characteristic assumptions about their sensors, including the detection model, sensing area, transmission range, failure model, time management, and the capability to obtain location and related data.
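The work/sleep (duty) cycling idea above can be sketched with a fixed-period schedule. This is a hypothetical minimal sketch, not the scheduler of [10]: one sensor is awake for `awake_s` seconds out of every `period_s` seconds.

```python
def duty_cycle_schedule(period_s, awake_s, horizon_s):
    """Awake windows (start, end) for one sensor over `horizon_s` seconds."""
    windows = []
    t = 0
    while t < horizon_s:
        windows.append((t, min(t + awake_s, horizon_s)))
        t += period_s
    return windows

def duty_ratio(period_s, awake_s):
    """Fraction of time the sensor spends awake; energy use scales with this."""
    return awake_s / period_s
```

For example, a 2 s awake window in a 10 s period gives a 20% duty ratio, so sensing energy drops roughly five-fold compared with always-on operation, at the cost of missed events while asleep.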

5.Results

5.2. Discovery Efficiency

The ratio of discovered contents to generated queries within a specified period is the discovery ratio, or discovery efficiency. DPD's discovery performance is compared against two forwarding baselines. In Epidemic routing, queries are forwarded to every node. In hops-10 and hops-5, a query message is forwarded until its hop count reaches 10 and 5, respectively. When query-matching content is available on only a small number of nodes, the discovery methods show a low discovery speed. With an increasing query lifespan, both DPD and Epidemic demonstrate a high discovery ratio, since with a longer time each query is forwarded to more nodes.
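The hop-count and lifetime limits used by the hops-N baselines, and the discovery ratio itself, can be expressed compactly. This is an illustrative sketch with hypothetical field names; a query carries a mutable `"hops"` counter.

```python
def should_forward(query, hop_limit, elapsed_s, lifetime_s):
    """Decide whether to forward a query under hop and lifetime limits."""
    if elapsed_s > lifetime_s:        # query has expired
        return False
    if query["hops"] >= hop_limit:    # replication budget exhausted
        return False
    query["hops"] += 1                # consume one hop and forward
    return True

def discovery_ratio(discovered, generated):
    """Ratio of discovered contents to generated queries in a period."""
    return discovered / generated if generated else 0.0
```

Raising the hop limit or the lifetime pushes each query to more nodes, which is why both DPD and Epidemic show a higher discovery ratio with a longer query lifespan.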

Figure: Discovery Efficiency

5.1. Learning accuracy

Learning accuracy demonstrates how capably and exactly the places were identified.

5.3. Prediction accuracy

Mobility prediction is a main aspect in the estimation of the utility function. Here, we evaluate our prediction process according to trajectory deviation and prediction accuracy, as shown in the figure below. Trajectory deviation specifies the abnormality of a user's mobility. For this assessment, we mutate the existing mobility information with noise data. Thus, 10, 20, and 30% of the



meaningful places are chosen at random locations for trajectory deviations of 0.1, 0.2, and 0.3, correspondingly. As the trajectory deviation rises, the prediction accuracy drops off. Prediction accuracy is calculated as the ratio of accurately predicted locations to the overall predicted locations.

Figure: Prediction accuracy

6. Conclusion

In this paper we have proposed a proficient content sharing scheme for Smartphone-based DTNs. We have presented discover-predict-deliver as an effectual content sharing method which is capable of discovering content and delivering it to the appropriate destination. The scheme also exploits the mobility information of individuals. We made an attempt to make use of the availability and communication technology of current Smartphones, and we compared our proposed scheme with traditional schemes. We have also proposed sensor scheduling schemes to enhance the lifespan of the battery; through effective sensing schedules, the energy consumption of Smartphones can be reduced. Finally, our system still has room for improvement by considering privacy issues.

7. References

[1] T3I Group LLC, http://www.telecomweb.com, 2010.

[2] A. Vahdat and D. Becker, “Epidemic Routing for Partially Connected Ad Hoc Networks,” technical report, Dept. of Computer Science, Duke Univ., Sept. 2000.

[3] A. Balasubramanian, B.N. Levine, and A. Venkataramani, “DTN Routing as a Resource Allocation Problem,” Proc. ACM SIGCOMM, pp. 373-384, 2007.

[4] R.C. Shah, S. Roy, S. Jain, and W. Brunette, “Data MULEs: Modeling a Three-Tier Architecture for Sparse Sensor Networks,” Elsevier Ad Hoc Networks J., vol. 1, pp. 215-233, Sept. 2003.

[5] A. Lindgren, A. Doria, and O. Schelen, “Probabilistic Routing in Intermittently Connected Networks,” SIGMOBILE Mobile Computing Comm. Rev., vol. 7, no. 3, pp. 19-20, 2003.

[6] C. Liu and J. Wu, “An Optimal Probabilistic Forwarding Protocol in Delay Tolerant Networks,” Proc. ACM MobiHoc, pp. 1-14, 2009.

[7] J. Wu, M. Lu, and F. Li, “Utility-Based Opportunistic Routing in Multi-Hop Wireless Networks,” Proc. 28th Int’l Conf. Distributed Computing Systems (ICDCS ’08), pp. 470-477, 2008.

[8] T. Spyropoulos, K. Psounis, and C.S. Raghavendra, “Spray and Wait: An Efficient Routing Scheme for Intermittently Connected Mobile Networks,” Proc. ACM SIGCOMM Workshop on Delay-Tolerant Networking (WDTN ’05), pp. 252-259, 2005.

[9] T. Spyropoulos, K. Psounis, and C. Raghavendra, “Efficient Routing in Intermittently Connected Mobile Networks: The Single-Copy Case,” IEEE/ACM Trans. Networking, vol. 16, no. 1, pp. 63-76, Feb. 2008.

[10] L. Shi, M. Epstein, B. Sinopoli, and R.M. Murray, “Effective Sensor Scheduling Schemes Employing Feedback in the Communication Loop.”




EFFICIENT RETRIEVAL OF FACE IMAGE FROM LARGE SCALE DATABASE USING SPARSE CODING AND RERANKING

P. Greeshma, Student, Audisankara Institute of Technology, Gudur, Nellore, Andhra Pradesh, India. greeshma.svgp@gmail.com

K. Palguna Rao, HOD, Dept of CSE, Audisankara Institute of Technology, Gudur, Nellore, Andhra Pradesh, India.

Abstract: Due to the rapid growth of photo sharing in social network services, there is a strong need for large scale content-based face image retrieval, an enabling technology for many emerging applications. It is very challenging to find a human face image in a large scale database which might hold a huge number of face images of people. Existing methods normalize the location and illumination differences between faces and remove background contents while retrieving face images, which discards important context information. This paper intends to utilize automatically detected human attributes, which contain semantic cues about the face photos, to improve large scale content-based face image retrieval. Two methods are used to improve image retrieval in the offline and online stages respectively: attribute-enhanced sparse coding (ASC) and attribute-embedded inverted indexing (AEI). Reranking is further used with these two methods to attain significant retrieval results. The proposed system exploits automatically detected human attributes, which balance the information loss and attain good retrieval performance compared with the existing system.

Keywords: Content-based image retrieval, face image, human attributes, sparse coding, reranking.

1. Introduction

The explosive expansion of image data leads to the need for research and development of image retrieval. Image retrieval is the field of study concerned with searching for and retrieving digital images from a database. Image retrieval research has moved from keywords, to low level features, and then to semantic features. The drive towards semantic features is due to the problem that keywords/text can be much distorted and time consuming, while low level features cannot always describe the high level notions in the users' minds.

Figure 1: Two different people face images might be similar in low level feature space due to lack of semantic description.

Large scale image search has recently attracted significant attention due to the easy accessibility of huge amounts of data. Since databases contain even billions of samples, such large-scale search demands extremely efficient and precise retrieval methods. CBIR has many applications in different areas; for example, in forensics it can help with crime investigation. The objective of face image retrieval is to produce the ranking from most to least related face images in a face image database for a specified query. For large scale datasets, it is essential for an image search application to rank the images such that the most relevant images are placed at the top.

However, present CBIR systems suffer from deficient generalization performance and accuracy, as they are not capable of producing a flexible relation between image features and high-level concepts. Earlier schemes employ low level features (e.g., texture, color, shape) to retrieve images, but low level features do not afford semantic descriptions of the face, whereas human face images are generally characterized by high level features (e.g., expression, posing). Therefore, retrieval results are unsatisfactory, as shown in figure 1.

To deal with this problem, two methods are proposed, named attribute-enhanced sparse coding (ASC) and attribute-embedded inverted indexing (AEI). In this paper, low level features are integrated with high level attributes which provide semantic descriptions. Reranking is further used with these two methods to discard forged images and retrieve specific image results.

2. Related Work

This thesis is interrelated with different research fields, including CBIR, automatic detection of human attributes, and sparse coding. Content-based image retrieval (CBIR) has attracted significant attention over the past decade. Instead of taking query words as input, CBIR techniques directly take an image as the query and seek to return similar images from a large scale database. Before CBIR, conventional image retrieval was typically based on text or keywords. Keyword-based image retrieval has some limitations: language and culture variations always cause problems; the same image is usually described in many different ways; and mistakes such as spelling errors or spelling differences lead to totally different results.



In order to conquer these restrictions, CBIR was first introduced by Kato [1]. The term CBIR is widely used for retrieving desired images from a large collection, based on extracting features from the images themselves. In general, the purpose of CBIR is to represent an image conceptually, with a set of low-level optical features such as color, texture, and shape [2].


One major difficulty when creating a CBIR system is to make the system general-purpose. CBIR for general-purpose image databases is a highly challenging problem because of the enormous volume of the databases, the difficulty of understanding images both by people and by computers, and the difficulty of evaluating results properly. All these methods suffer from low recall due to the semantic gap. Semantic image representations were introduced to bridge this gap. In this paper, automatically detected human attributes are used to construct sparse codewords for the face image retrieval operation, instead of using identity information, which necessitates manual annotation.


3. Problem Definition

Several schemes have been introduced for face image retrieval, but all these techniques suffer from some limitations. Previous mechanisms for face image retrieval normalize the location and lighting differences between faces and exclude background contents; such approaches give up important context information. Using automatically detected human attributes it is possible to balance this information loss. Existing methods use low level features for face retrieval that lack semantic descriptions of the face, which leads to poor results. Also, all current retrieval systems struggle with low recall, which diminishes the performance of the system. Considering all these factors, this paper proposes two methods (ASC, AEI) which detect human attributes automatically to resolve the recall problems. A reranking method is united with these two methods to boost system performance.

Figure 2: Proposed system framework (query preprocessing, face detection, facial landmark detection, face alignment, attribute detection; patch-level LBP features quantized by attribute-enhanced sparse coding into patch-level sparse codewords; attribute-embedded inverted indexing over the large-scale database; reranking of the image results to remove forged face images)

4. Proposed System

This paper intends to employ automatically detected human attributes, which contain semantic cues of the face image, for efficient large-scale face image retrieval. To improve CBIR, two methods named ASC and AEI are used, which combine low-level features with high-level concepts to afford semantic descriptions of the face. A further technique, reranking, is used with these two methods to improve the performance of the retrieval system.

4.1 Attribute Enhanced Sparse Coding (ASC)

The attribute-enhanced sparse coding is used in the offline stage; it covers the automatic detection of human attributes from the image and also generates the codewords for each image in the database by combining the low-level features with high-level attributes to give a semantic description of the face. By incorporating low-level features and high-level attributes, it is possible to get promising results when retrieving similar faces from a large-scale database.

For each image in the database, we initially apply the Viola-Jones face detector [3] to locate the faces. The active shape model [4] is applied to trace facial landmarks on the image. Using these facial landmarks, we next align every face in the image with the mean face shape [5] using a barycentric-coordinate-based mapping process. For each identified facial part, we take out grids, where each grid is represented as a square patch [6]. We extract an image patch from each grid and compute an LBP feature descriptor as a local feature. After obtaining the local feature descriptors, we quantize all descriptors into codewords by attribute-enhanced sparse coding.

An attribute-embedded inverted index is then constructed for efficient retrieval. When a query image appears, it undergoes the same procedure to obtain sparse codewords and human attributes, and these codewords are used with a binary attribute signature to retrieve images from the large-scale database; if the retrieved images contain forged images, the reranking method is applied to remove them. Figure 2 demonstrates the framework of our system.

4.2 Attribute Embedded Inverted Indexing (AEI)

In online image retrieval, the user can submit a query image to the retrieval system to search for desired images. Retrieval is performed by applying an indexing scheme to afford an efficient way of searching the image database. In the end, the system ranks the search results and returns the results that are related to the query image. Attribute-embedded inverted indexing collects the sparse codewords from the attribute-enhanced sparse coding, checks the codewords against the online feature database, and retrieves the related images similar to the query image.
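The patch-level LBP step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it computes the basic 8-bit LBP code over a toy grayscale patch (given as a list of lists) and accumulates a 256-bin histogram; a real system would use a library routine and uniform patterns.

```python
# Hypothetical sketch of patch-level LBP features (illustrative names).

def lbp_code(patch, r, c):
    """8-bit LBP code of the pixel at (r, c): each neighbour that is
    >= the centre contributes one bit, clockwise from the top-left."""
    centre = patch[r][c]
    neighbours = [
        patch[r - 1][c - 1], patch[r - 1][c], patch[r - 1][c + 1],
        patch[r][c + 1], patch[r + 1][c + 1], patch[r + 1][c],
        patch[r + 1][c - 1], patch[r][c - 1],
    ]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(patch):
    """256-bin histogram of LBP codes over the interior pixels of a patch."""
    hist = [0] * 256
    for r in range(1, len(patch) - 1):
        for c in range(1, len(patch[0]) - 1):
            hist[lbp_code(patch, r, c)] += 1
    return hist
```

Each patch's histogram would then be the local descriptor that is quantized into sparse codewords.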

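The attribute-embedded inverted indexing of Section 4.2 can be sketched as below. This is an assumed, simplified design (the class name and the Hamming-distance budget are illustrative): postings are keyed by sparse codeword, each posting carries a binary attribute signature packed into an integer, and candidates whose signature differs from the query's by more than a threshold are skipped.

```python
# Illustrative sketch of an attribute-embedded inverted index.
from collections import defaultdict

class AttributeEmbeddedIndex:
    def __init__(self, max_hamming=1):
        self.postings = defaultdict(list)   # codeword -> [(image_id, signature)]
        self.max_hamming = max_hamming

    def add(self, image_id, codewords, signature):
        for w in set(codewords):
            self.postings[w].append((image_id, signature))

    def query(self, codewords, signature):
        scores = defaultdict(int)           # image_id -> number of matched codewords
        for w in set(codewords):
            for image_id, sig in self.postings[w]:
                # keep a posting only if its attribute signature is close
                if bin(sig ^ signature).count("1") <= self.max_hamming:
                    scores[image_id] += 1
        return sorted(scores, key=scores.get, reverse=True)
```

For example, an image whose gender/glasses bits disagree with the query's signature is filtered out even when its codewords collide with the query's.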


4.3 Reranking

The fundamental purpose of reranking is to remove forged images present in the database. A face image yields only weak global signatures, so reranking lets us improve accuracy without losing scalability. Human face images contain discrepancies caused by changes in pose, expression, and illumination; taking all these intra-class variations into account, the reranking technique is used to eliminate such discrepancies. Reranking is applied to the set of retrieved human face images to remove forged images, and it is robust to erroneous images. Figure 3 shows how forged images are rejected after applying the reranking method.
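The paper does not spell out the exact reranking rule, so the sketch below shows one common heuristic under that caveat: re-score each retrieved image by its distance to the centroid of the top-ranked results, so that outliers (e.g. forged faces) sink in the ranking. Names and parameters are illustrative.

```python
# Hedged sketch of a centroid-based reranking heuristic.

def rerank(results, features, top=3):
    """results: image ids ordered by first-pass score;
    features: id -> feature vector (list of floats)."""
    head = results[:top]
    dim = len(features[head[0]])
    centroid = [sum(features[i][d] for i in head) / len(head) for d in range(dim)]

    def dist(i):
        # squared Euclidean distance to the centroid of the top results
        return sum((features[i][d] - centroid[d]) ** 2 for d in range(dim))

    return sorted(results, key=dist)
```

An image far from the consensus of the top results is pushed toward the tail of the list.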

5. Experimental Results

We evaluate and visualize results using real examples. Figure 3 illustrates the results of reranking images; red boxes in the figure indicate forged images. After using the reranking method, the majority of the images are correct. In Figure 4, the graph demonstrates how efficient image retrieval in the proposed method is. In the proposed system, individual attribute detection can be done within a few milliseconds, because we use automatic attribute detection.

Figure 3: Reranking Image Results

Figure 4: Query processing time per image

6. Conclusion

All current image retrieval systems normalize the position and illumination variations between the faces and remove background contents while retrieving face images, which leads to discarding vital context information. In this paper, two schemes are proposed and combined to exploit automatically detected human attributes to significantly improve content-based face image retrieval. This is the first idea uniting low-level features, high-level attributes and automatic detection of human attributes for content-based face image retrieval. Attribute-enhanced sparse coding is used in the offline stage and provides semantic descriptions of the face. Attribute-embedded inverted indexing is used in the online stage and ensures efficient image retrieval. Finally, the reranking method is used to discard forged images and obtain accurate image results.

References

[1] M. Lew, N. Sebe, C. Djeraba, and R. Jain, "Content-based Multimedia Information Retrieval: State of the Art and Challenges," ACM Transactions on Multimedia Computing, Communications, and Applications, 2006.
[2] N. Krishnan and C. Christiana, "Content-based Image Retrieval using Dominant Color Identification Based on Foreground Objects," Fourth International Conference on Natural Computation, 2008.
[3] P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," IEEE Conference on Computer Vision and Pattern Recognition, 2001.
[4] S. Milborrow and F. Nicolls, "Locating facial features with an extended active shape model," European Conference on Computer Vision, 2008.
[5] U. Park and A. K. Jain, "Face image retrieval and matching with biometrics," IEEE Transactions on Information Forensics and Security, 2010.
[6] Z. Wu, Q. Ke, and H.-Y. Shum, "Scalable face image retrieval using identity-based quantization and multi-reference relevance ranking," IEEE Conference on Computer Vision and Pattern Recognition, 2010.




ELIMINATING HIDDEN DATA FROM AN IMAGE USING MULTI-CARRIER ITERATIVE GENERALISED LEAST SQUARES

Ch. Anusha, M.Tech (Author) 1; V. Sireesha, M.Tech (PhD) (Guide) 2

1 Student, Computer Science and Engineering, Audisankara Institute of Technology, Nellore, Andhra Pradesh, India. Anusha3.ch@gmail.com
2 Associate Professor, Computer Science and Engineering, Audisankara Institute of Technology, Nellore, Andhra Pradesh, India. Sireesha80.dhoorjati@gmail.com

ABSTRACT: Data hiding and extraction schemes are growing in today's communication world owing to the rapid growth of data tracking and tampering attacks. Data hiding, a form of steganography, embeds information into digital media for the purpose of identification, annotation, and copyright. In this paper, techniques are used for addressing the data-hiding process and these techniques are evaluated in light of three applications: copyright protection, tamper proofing, and augmentation data embedding. Thus we require an efficient and robust data hiding scheme to defend against these attacks. In this project, blind extraction is considered: the original host and the embedding carriers are not required to be known. Here, the hidden data is embedded into the host signal via multicarrier SS embedding. The hidden data is extracted from digital media such as audio, video, or images. The extraction algorithm used to extract the hidden data from the digital media is Multicarrier Iterative Generalized Least Squares (M-IGLS).

KEY WORDS: Data hiding, Blind extraction, Data tracking, Tampering attacks, Steganography.

1. INTRODUCTION

Data hiding, while comparable to compression, is distinct from encryption. Its objective is not to restrict or regulate access to the host signal, but rather to guarantee that the embedded data remain inviolate and recoverable. Two significant uses of data hiding in digital media are to provide evidence of copyright and assurance of content integrity. Data tracking and tampering are rising rapidly everywhere, for example in online tracking and mobile tracking; hence we require a secure communication scheme for transmitting the data. For that, we have many data hiding and extraction schemes. Data hiding schemes are primarily used in military communication systems, for example in an encrypted message, to conceal the sender and receiver or its very existence. Originally, data hiding schemes were used for copyright purposes. [1] Fragile watermarks are used for authentication, i.e., to find whether the data has been distorted or not. Equally, the data extraction schemes also offer a good recovery of the hidden data. This is the purpose of protected communication.

2. RELATED WORK

The techniques used for data hiding vary depending on the amount of data being hidden and the required invariance of those data to manipulation. Since no single method is capable of achieving all of these goals, a group of processes is needed to span the range of likely applications. The technical challenges of data hiding are formidable, and numerous data hiding and data extraction schemes have come into existence. The key data hiding procedure is steganography. It differs from cryptography in the way the data is hidden: the goal of steganography is to conceal the existence of the data from a third party, whereas the purpose of cryptography is to make the data incomprehensible to a third party.



In [2] a steganalysis process is used. The aim of steganalysis [3] is to decide whether an image or another carrier contains an embedded message. To enhance the security and the payload rate, the embedder adopts a multicarrier embedding model. In [4] spread spectrum communication is explained: a narrow-band signal is transmitted over a much larger bandwidth such that the signal energy present at any particular frequency is imperceptible. Correspondingly, in the SS embedding scheme of [5], the secret data is spread over many samples of the host signal by adding a low-energy Gaussian noise sequence. In [6] the Generalized Gaussian Distribution (GGD) has been used to model the statistical behaviour of the DCT coefficients. In [7] several extraction procedures for recovering the hidden data are studied, but they have drawbacks: Iterative Least Squares Estimation (ILSE) is unaffordably complex even for moderate problem sizes, while the Pseudo-ILS (ILSP) algorithm is not guaranteed to converge in general and can give demonstrably poor results. These two algorithms were therefore combined into the Decoupled Weighted ILSP (DW-ILSP), but this too has a drawback: it cannot be applied for large N.

3. PROPOSED SYSTEM

The proposed method employs blind recovery of the data and utilizes the DCT transform as the carrier domain for embedding the data in digital media. Embedding is achieved using the multicarrier SS embedding procedure, and the M-IGLS algorithm is used for extraction of the concealed data. M-IGLS is a low-complexity algorithm that offers strong recovery performance: it achieves a probability of error recovery equal to that with known host and embedding carriers, and it can also be used as a performance analysis tool for the data hiding scheme. The proposed system includes four techniques: 1. Steganography; 2. Multicarrier spread spectrum embedding; 3. Image encryption and watermarking; 4. Image decryption and extraction.

3.1 Steganography

Steganography can be used to hide a message intended for later retrieval by a specific person or group; in this case the intent is to prevent the message being detected by any other party. Steganography includes the concealment of information within computer files. The other major area of steganography is copyright marking, where the message to be embedded is used to assert copyright over an article; this can be further divided into watermarking and fingerprinting. In digital steganography, electronic communications may include steganographic coding inside a transport layer, such as a document file, image file, program or protocol. Digital steganography can conceal secret data (i.e., secret files) very securely by embedding them into some media data known as "vessel data." The vessel data is also referred to as "carrier, cover, or dummy data"; in this work, images are used as the vessel data. The embedding operation in practice is to substitute the "complex areas" on the bit planes of the vessel image with the secret data. The most significant feature of this steganography is that the embedding capacity is very large: for a "normal" image, approximately 50% of the data may be replaced with secret data before the image damage becomes perceptible.

Fig: Steganographic model

3.2 Multi-Carrier Spread Spectrum Embedding

The use of spread spectrum can partially fulfil the above requirements. The embedding technique is intended to satisfy the perceptual constraint and to improve the detection capability as well as the embedding rate. Instead of the pixel values, the histogram can be modified to embed the data. If we observe typical histograms of DCT coefficients, we find some samples with high amplitudes that the generalized Gaussian model cannot fit effectively; we therefore consider the DCT coefficients whose amplitude is below a certain threshold value. In this embedding proposal, the hidden data is spread over many samples of the host signal or image, using the DCT coefficients as the carrier. The advantages of spread spectrum procedures are widely known: immunity against multipath distortion, no need for frequency planning, high flexibility and variable data rate transmission. The ability to diminish multiple-access interference in a direct-sequence code-division multiple-access system is determined by the cross-correlation properties of the spreading codes. In the case of multipath transmission, the ability to distinguish one component from the others in the composite received signal is given by the auto-correlation properties of the spreading codes. The following figures show the entered data, the DCT-transformed data, and the embedded image, respectively.

Fig: Data entered
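The multicarrier SS embedding idea above can be illustrated with a toy example (names, amplitudes, carriers and host coefficients are all placeholders, not the paper's values): K message bits b_k in {-1, +1} are spread by K carriers s_k and added to the host's transform coefficients, y = x + sum_k A * b_k * s_k. With orthogonal carriers and a weak host, a simple sign-of-correlation detector recovers each bit.

```python
# Toy illustration of multicarrier spread-spectrum embedding.

def ss_embed(host_coeffs, bits, carriers, amplitude=1.0):
    """Add each bit's spread carrier to the host coefficients."""
    y = list(host_coeffs)
    for b, s in zip(bits, carriers):
        for i in range(len(y)):
            y[i] += amplitude * b * s[i]
    return y

def ss_correlate(y, carrier):
    """Sign of the correlation recovers a bit when the carriers are
    orthogonal and the host interference is weak or removed."""
    corr = sum(yi * si for yi, si in zip(y, carrier))
    return 1 if corr >= 0 else -1
```

Blind extraction, treated next, is exactly the case where neither the host nor the carriers s_k are available to the receiver.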



The host image is an 8-bit or higher grey-level image which ideally has to be the same size as the plaintext image, or else is resized accordingly to the same size.

Pre-conditioning the cipher and the convolution process are undertaken using a Discrete Cosine Transform (DCT). The output will contain negative floating-point numbers upon taking the real component of a complex array. The array must be corrected by adding the largest negative value in the output array to the corresponding array prior to normalization. For colour host images, the binary coded text can be embedded into one or all of the RGB components. The binary plaintext image should include homogeneous margins to minimize the effects of ringing due to "edge effects" when processing the data using the cosine transform. The following figures show the DCT transformation and the embedding and detection of the watermark.

Fig: DCT transformation
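For reference, the 1-D DCT-II applied to each row or column can be written directly. This is a didactic O(N^2) form; production code would use a fast transform such as scipy.fftpack.dct.

```python
# Direct (slow) orthonormal DCT-II, for illustration only.
import math

def dct_ii(x):
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            for k in range(N)]
```

A constant signal concentrates all its energy in the DC coefficient, which is why embedding targets the mid/low-amplitude AC coefficients.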

Fig: Embedded Image

3.3 Image encryption and watermarking

Encryption is the method of converting information for its protection. Many image content encryption algorithms have been proposed. To make the data safe from a variety of attacks and to preserve the integrity of the data, we should encrypt the data before it is transmitted or stored. Government, military, financial institutions, hospitals and private businesses deal with confidential images about their patients (in hospitals), geographical areas (in research), enemy positions (in defence), products, and financial status.

Invisible digital watermarks are a novel technology which could solve the problem of enforcing the copyright of content transmitted across shared networks. They allow a copyright holder to insert a concealed message (an invisible watermark) within images, moving pictures, sound files, and even raw text. Moreover, the author can monitor traffic on the shared network for the occurrence of his or her watermark. Because this method obscures both the content of the message (cryptography) and the presence of the message (steganography), an invisible watermark is very hard to eradicate.

Fig: Watermarked Image

3.4 Image decryption and extraction

Decryption is exactly the reverse procedure of encryption. When the receiver obtains the encrypted image, extraction of the data from the random values and flag values has to be done. This extraction of data from the image is considered the highlighting factor. The steps to perform the M-IGLS algorithm for extracting data from an image are as follows: initialize B^ arbitrarily, and alternate step-wise between (1) and (2) to obtain at each step the conditionally optimal least-squares estimate of one matrix given the other. The equations used for the computation are

V^_GLS = arg min_{V in R^(L x K)} || R_z^(-1/2) (Y - VB) ||_F^2 = Y B^T (B B^T)^(-1)    (1)

B^_GLS = arg min_{B in {+/-1}^(K x M)} || R_z^(-1/2) (Y - VB) ||_F^2 ~= sgn{ (V^T R_y^(-1) V)^(-1) V^T R_y^(-1) Y }    (2)

End when convergence is reached. Observe that (2) requires knowledge of the autocorrelation matrix R_y, which can be estimated by sample averaging over the observed data, R^_y = (1/M) sum_{m=1..M} y(m) y(m)^T. The M-IGLS extraction algorithm is summarized in Table 1; superscripts denote the iteration index. The computational complexity of each iteration of the M-IGLS algorithm is O(2K^3 + 2LMK + K^2(3L + M) + L^2 K) and, experimentally, the number of steps needed is in general between 20 and 50.

Table 1: Multi-carrier iterative generalized least squares algorithm
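The alternating updates (1)-(2) can be sketched compactly in NumPy. This is a didactic reconstruction, not the authors' code: names are ours, the whitening matrix R_z is taken as the identity (it drops out of the closed form in (1) anyway), and R_y is estimated from the observations by sample averaging.

```python
# Didactic sketch of the M-IGLS alternating iteration.
import numpy as np

def m_igls(Y, K, max_iter=50, seed=0):
    """Y: L x M observation matrix; K: number of hidden carriers.
    Returns estimated carriers V (L x K) and message bits B (K x M)."""
    rng = np.random.default_rng(seed)
    L, M = Y.shape
    Ry_inv = np.linalg.pinv(Y @ Y.T / M)        # sample autocorrelation, inverted
    B = rng.choice([-1.0, 1.0], size=(K, M))    # arbitrary binary initialization
    for _ in range(max_iter):
        V = Y @ B.T @ np.linalg.pinv(B @ B.T)                      # eq. (1)
        B_new = np.sign(
            np.linalg.pinv(V.T @ Ry_inv @ V) @ V.T @ Ry_inv @ Y)   # eq. (2)
        B_new[B_new == 0] = 1.0
        if np.array_equal(B_new, B):            # stop when B no longer changes
            break
        B = B_new
    return V, B
```

As with any blind method, the recovered bit matrix is only determined up to a sign ambiguity per carrier.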



1) d := 0; initialize B^(0) in {+/-1}^(K x M) arbitrarily.
2) d := d + 1;
   V^(d) := Y (B^(d-1))^T [ B^(d-1) (B^(d-1))^T ]^(-1);
   B^(d) := sgn{ ((V^(d))^T R_y^(-1) V^(d))^(-1) (V^(d))^T R_y^(-1) Y }
3) Repeat Step 2 until B^(d) = B^(d-1).

RESULTS

The proposed technique extracts the concealed data from the digital media. Here blind recovery of the data is considered; that is, the original host and the embedding carriers are not required to be known. This technique uses multicarrier embedding and the DCT transform for embedding the data into the host image, and the M-IGLS algorithm for extraction. The following figures show the extracted data and a graph comparing the existing and proposed methods.

Fig: Extracted data

Fig: Graph for extracted data

CONCLUSION AND FUTURE WORK

Data tracking and tampering are growing speedily in communication, so we have to secure the data from trackers; hence we require a robust and protected data hiding and extraction scheme. The most important contribution of the proposed system is to afford a good-quality extraction technique for blind recovery of the data. This technique uses the M-IGLS algorithm for the extraction; the data is embedded via the DCT transform by multicarrier SS embedding. This extraction procedure affords a high signal-to-noise ratio and achieves a probability of error recovery equal to that with known host and embedding carriers. This method can be further improved by using the harmony search algorithm, which offers low time consumption and high attack resistance.

REFERENCES

[1] F. A. P. Petitcolas, R. J. Anderson, and M. G. Kuhn, "Information hiding: a survey," Proc. IEEE, Special Issue on Identification and Protection of Multimedia Information, vol. 87, no. 7, pp. 1062-1078, Jul. 1999.
[2] S. Lyu and H. Farid, "Steganalysis using higher-order image statistics," IEEE Trans. Inf. Forensics Security, vol. 1, no. 1, pp. 111-119, Mar. 2006.
[3] G. Gul and F. Kurugollu, "SVD-based universal spatial domain image steganalysis," IEEE Trans. Inf. Forensics Security, vol. 5, no. 2, pp. 349-353, Jun. 2010.
[4] M. Gkizeli, D. A. Pados, and M. J. Medley, "Optimal signature design for spread-spectrum steganography," IEEE Trans. Image Process., vol. 16, no. 2, pp. 391-405, Feb. 2007.
[5] C. Fei, D. Kundur, and R. H. Kwong, "Analysis and design of watermarking algorithms for improved resistance to compression," IEEE Trans. Image Process., vol. 13, no. 2, pp. 126-144, Feb. 2004.
[6] C. Qiang and T. S. Huang, "An additive approach to transform-domain information hiding and optimum detection structure," IEEE Trans. Multimedia, vol. 3, no. 3, pp. 273-284, Sep. 2001.
[7] T. Li and N. D. Sidiropoulos, "Blind digital signal separation using successive interference cancellation iterative least squares," IEEE Trans. Signal Process., vol. 48, no. 11, pp. 3146-3152, Nov. 2000.




Implementation of Context Features using Context-Aware Information Filters in OSN

A Venkateswarlu 1, B Sudhakar 2

1 M.Tech, Dept. of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India.
2 Assoc. Professor, Dept. of CSE, Audisankara College of Engineering & Technology, Gudur, A.P

Abstract – Information filtering has become a key technology for current information systems. The goal of an information filter is to filter the unwanted messages from user walls in online social networks. In order to deal with a huge amount of messages, information filtering is very crucial. In the current system we study filtered walls, a rule-based information filtering system in which we implement the filtering rules and discuss the blacklist rules in an online social network such as Facebook. But the current system does not consider the context features of a message. To address this problem, in this paper we propose Context-Aware Information Filtering (CAIF) using the AGILE algorithm.

Index terms – Information Filtering, Context-aware information filtering, AGILE Algorithm, indexing.

I. INTRODUCTION

Information filter systems manage continuous streams of messages that must be routed according to rules or so-called profiles. In order to implement information filters, several methods have been proposed in the past. The focus of all that work was on the development of scalable rule-based filtering systems, which specify the filtering rules used to filter the unwanted messages. But these methods or filtering rules are inefficient in identifying context updates of the message and in message routing. To address this problem, this paper presents Context-aware Information Filters (CAIF). In contrast to current information filters, a CAIF has two input streams: (i) a stream of messages that need to be routed and (ii) a stream of context information updates. This way, a CIF provides a unified solution to information delivery for the routing of messages and the handling of context information.

The problem in implementing Context-aware Information Filters is that the two goals, routing messages and recording context updates efficiently, are both crucial. The current approaches to identifying matching profiles are very efficient in routing messages, but are inefficient when it comes to processing context updates. To close this gap, this paper presents AGILE. AGILE is a generic way to extend existing index structures in order to make them resilient to context updates and achieve a high message throughput at the same time.



A) Usecases for CAIF

To give a better understanding of what a context is and what the benefits of a context-aware information filter are, a few usecases follow:

Message Broker with State
A message broker routes messages to a specific application and location. Each message can change the state of the receivers and affect future routing decisions dynamically.

Generalized Location-Based Services
With an increased availability of mobile, yet network-connected devices, the possibilities for personalized information delivery have multiplied. So far, those services mostly use very little context information, such as the location of the device. A natural solution is to extend those systems to a more elaborate context.

Stock Brokering
Financial information systems require sending only the relevant market updates to specific applications or brokers.

To sum up, some contexts have a high update rate, others have a low update rate, but many have varying, "bursty" update rates. All examples involve a high message rate and a large number of profiles. Skipping updates in order to reduce update rates has to be avoided because it leads to costly errors in information filtering.

II. PROBLEM STATEMENT

The main issue for context-based information filters can be summarized as follows: "Given a large set of profiles, high message rates and varying rates of context updates, provide the best possible throughput of messages." No message must be dropped or sent to the wrong user because a change in context has not yet been considered by the filter. This constraint rules out methods that update the context only periodically. In the following, we define the terms context, profile and message.

i) Context
A context is a set of attributes associated with an entity; the values of those attributes can change at varying rates. Gathering context information is outside the scope of this paper and has been addressed, e.g., in the work on sensor networks, data cooking, context/sensor fusion, or the Berkeley HiFi project.

ii) Messages
A message is a set of attributes associated to values.

iii) Profiles
A profile is a continuous query specifying the information interests of a subscriber. Expressions in profiles can refer to a static condition or a dynamic context. Static conditions change relatively seldom, since they specify abstract interests. In contrast, context information can change frequently. We define profiles as proposed in pub/sub systems, using the disjunctive normal form (DNF) of atomic comparisons. This definition allows the use of
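A profile in disjunctive normal form, as just defined, can be evaluated with a few lines of code. This is a hedged sketch with illustrative names; only comparisons against constants are shown here.

```python
# Evaluate a DNF profile (OR of ANDs of atomic comparisons) against a message.

def matches_dnf(profile, message):
    """profile: [[(attr, op, value), ...], ...]; message: {attr: value}."""
    ops = {"<": lambda a, b: a < b,
           "=": lambda a, b: a == b,
           ">": lambda a, b: a > b}
    return any(all(ops[op](message[attr], value) for attr, op, value in conj)
               for conj in profile)
```

A conjunction whose right-hand sides reference context attributes would first resolve them against the current context state, which is the extension discussed next.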




context information in profiles in multiple ways: message attributes can be compared with constants and with context attributes. The latter is the novel aspect of this work and a significant extension to the way profiles are defined in traditional pub/sub systems and information filters. This extension enables easier modeling, richer profiles and, as we will see, several opportunities for optimization.

III. CONTEXT-AWARE INFORMATION FILTERS

This section introduces a processing model and reference architecture for Context-Aware Information Filters (CIF).

Fig. 1 Architecture of a CIF

a) CIF Processing Model

Figure 1 shows the processing of a CIF. The CIF keeps profiles of subscribers and context information. The CIF receives two input streams: a message stream and a context update stream. These two streams are serialized so that at each point in time either one message or one update is processed.

In order to deal with the two input streams, a CIF must support the following methods:

1. handle_message(Message m): Find all profiles that match the message m, considering the current state. Return this set of profiles.

2. update_context(Context c, Attribute a, Value v): Set the attribute a of context c to the new value v, i.e., c.a := v. All profiles referencing this context must consider this new value.

b) CIF Architecture

As also shown in the figure, a CIF has four main components: (a) context management, (b) indexes, (c) merge, (d) postfilter. A similar architecture without context management has also been devised for information filters.

i) Context management
The first component manages context information. It stores the values of static attributes and the values of context attributes which are used in predicates of profiles. Any context change is recorded by this component. This component interacts heavily with indexes and postfiltering, which both consume this information. Both indexes and postfiltering require values of constants and context attributes in order to evaluate predicates of profiles for incoming messages.

ii) Indexes
Given a message, the information filter must find all profiles that match. This filtering can be accelerated by indexing the profiles or the predicates of the profiles. The most important method supported by an index is probe, which is invoked by the CIF's handle_message method. Probe takes a message as input and returns a set of

284

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

profiles that potentially match that message. Furthermore, an index provides insert and delete methods in order to register new profiles or delete existing profiles.

iii) Merge

Profiles are conjunctions and disjunctions of predicates. Since it is not efficient to use a high-dimensional index to cover all conjunctions and disjunctions, an individual index typically only covers one type of predicate. Therefore, several potential indexes are probed in order to process a message. Several optimization techniques for this merge operation exist. One idea is to optimize the order in which intersection and union operations are applied. Another class of optimizations involves the algorithms and data structures used to implement the intersect and union operations.

iv) Postfilter

The last step of processing a message eliminates false positives. This step is necessary if inaccurate indexes are used or if the merge operation does not involve all kinds of predicates. The Postfilter operation takes a set of profiles as input and checks which profiles match the message by re-evaluating the predicates of the profiles based on the current state of the context.

IV. ADAPTIVE INDEXING: AGILE

This section presents the AGILE (Adaptive Generic Indexing with Local Escalations) algorithm.

i) General Idea

The key idea of AGILE is to dynamically reduce the accuracy and scope of an index if context updates are frequent, and to increase the accuracy and scope of an index if context updates are seldom and handle message calls are frequent. This way, AGILE tries to act in the same way as the NOINDEX approach for update context operations and like the EAGER approach for handle message operations, thereby combining the advantages of both approaches. In order to do so, AGILE generalizes techniques from PARTIAL and GBU.

The operation that reduces the accuracy is called escalation; it is triggered by context updates in order to make future context updates cheaper. The operation that increases the accuracy of an index is called de-escalation; it is triggered by handle message events in order to make future message processing more efficient. Both operations are carried out at the granularity of individual index entries. This way, the index remains accurate for profiles that are associated with contexts that are rarely updated, and the index moves profiles that are associated with contexts that are frequently updated out of scope. As a result, AGILE only escalates and de-escalates as much as necessary and can achieve the best level of accuracy for all profiles. AGILE is generic and can be applied to any index structure; in particular, it can be used for the index structures devised specifically for information filtering. It works particularly well for hierarchical index structures.
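The escalation and de-escalation operations can be made concrete on a toy hierarchical index. The Python sketch below illustrates the mechanism only and is not the paper's implementation: the class names, the binary partitioning of the key space, and the fixed tree depth are our own assumptions. An entry stored at an inner node covers that node's whole key interval, so escalating an entry makes all future updates within that interval free, at the price of false positives that the Postfilter step must remove.

```python
class Node:
    """A node covering the half-open key interval [lo, hi)."""
    def __init__(self, lo, hi, depth=0, max_depth=3):
        self.lo, self.hi = lo, hi
        self.entries = set()        # profile identifiers stored at this node
        self.children = []
        if depth < max_depth:
            mid = (lo + hi) // 2
            self.children = [Node(lo, mid, depth + 1, max_depth),
                             Node(mid, hi, depth + 1, max_depth)]


class ToyAgileIndex:
    """Toy hierarchical index; the accuracy of an entry is the depth
    of the node that holds it (leaf = fully accurate)."""
    def __init__(self, lo=0, hi=16):
        self.root = Node(lo, hi)
        self.where = {}             # profile id -> node currently holding it

    def insert(self, pid, key):
        node = self.root            # EAGER-style placement at the leaf
        while node.children:
            node = node.children[0] if key < node.children[0].hi else node.children[1]
        node.entries.add(pid)
        self.where[pid] = node

    def probe(self, key):
        """Collect the entries of every node on the root-to-leaf path;
        entries held at coarse nodes may be false positives."""
        out, node = set(), self.root
        while True:
            out |= node.entries
            if not node.children:
                return out
            node = node.children[0] if key < node.children[0].hi else node.children[1]

    def escalate(self, pid):
        """Move the entry one level up: the coarser interval absorbs
        future context updates, making them cheap."""
        node = self.where[pid]
        parent = self._parent(self.root, node)
        if parent is not None:
            node.entries.discard(pid)
            parent.entries.add(pid)
            self.where[pid] = parent

    def deescalate(self, pid, key):
        """Move the entry one level down toward the key: probes become
        more accurate, future updates more expensive."""
        node = self.where[pid]
        if node.children:
            child = node.children[0] if key < node.children[0].hi else node.children[1]
            node.entries.discard(pid)
            child.entries.add(pid)
            self.where[pid] = child

    def _parent(self, cur, target):
        for c in cur.children:
            if c is target:
                return cur
            found = self._parent(c, target)
            if found is not None:
                return found
        return None
```

After an escalation, a probe for any key in the parent's interval returns the entry (a potential false positive that the Postfilter removes); after the matching de-escalation, probes outside the child's interval no longer see it.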

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

285

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

Escalation

This operation is triggered by increasing the stock of Warehouse by one, i.e., a context update from two to three. Rather than carrying out an insert and delete on the binary tree, the escalation moves the identifier up to the left set of the parent node.

De-escalation

This operation is triggered if the handle message operation is called several times for orders of, say, three or four items and Warehouse A was returned by the index as a potential candidate and had to be filtered out by the Postfilter step. If this happens, AGILE decides to de-escalate the index entry for Warehouse in order to improve the performance of future calls to the handle message operation.

ii) Properties of AGILE Indexes

The escalate and deescalate operations can be implemented on any index structure, and thus AGILE can be used to extend any index structure for context-aware information filtering. The insert and delete operations of an index are not modified and are the same as in the basic (non-AGILE) version of the index. However, AGILE allows efficient implementations of the update operation that assigns a new value to an identifier.

What are escalations and de-escalations in this framework? Both operations re-assign a new set of keys to an identifier. Escalations are carried out as a result of update context operations in order to make future calls to update context cheaper. De-escalations are carried out as a result of a handle message operation in order to reduce the number of false positives for future calls to handle message.

The advantages of AGILE are that it effectively combines the advantages of the NOINDEX and EAGER approaches in a fine-grained way. It can deal with workloads in which certain contexts are frequently updated by escalating the entries for those contexts: in the extreme case to the root of the data structure or even outside the scope of the index. Likewise, AGILE is suitable for workloads in which context updates are seldom and many messages need to be processed.

The basic idea of AGILE is not new. In some sense, R-Trees and other indexes for spatial data apply a similar approach. In R-Trees, identifiers are associated with bounding boxes, which can also be interpreted as sets of keys. The difference is that AGILE uses escalations and de-escalations in order to control the accuracy of the index, whereas an R-Tree does not adjust its accuracy depending on the update/probe workload.

iii) AGILE Algorithm

Based on the escalate and deescalate operations, we are now ready to present the algorithms for the handle message and update context operations. The algorithm for the handle message operation is almost the same as the algorithm for the EAGER approach. The only difference is that a special implementation of the Postfilter operation is used (Lines 2 to 9). If a profile is detected to be a false positive (i.e., the test in Line 3 fails for one of the predicates of the profile), then a DEpolicy function (Line 5) determines whether to deescalate the index for that profile. The function that tests the profile (test) returns 0 if the profile matches the message. If not, test returns a reference to the index for the predicate that failed; this index returned the profile as a false positive and therefore is a candidate for de-escalation (Line 6). Since de-escalation is an expensive operation, it is not necessarily a good idea to carry it out whenever a false positive occurs.

The algorithm for update context is straightforward. In the first step (Lines 2 to 5), it checks whether an escalation is necessary. In the second step (Line 6), the context is updated, just as in every other CIF approach.

Function AGILE.handle message
Input: Message m
Output: Set of matching profiles RES
(1)  RES := merge(AGILEindex[1].probe(m), ..., AGILEindex[N].probe(m))
(2)  forEach p in RES
(3)    f := test(p, m)
(4)    if (f > 0)
(5)      RES := RES \ {p}
(6)      if (DEpolicy(p))
(7)        AGILEindex[f].deescalate(p)
(8)      endIf
(9)    endIf
(10) endFor
(11) return RES

Procedure AGILE.update context
Input: Context c, Attribute a, Value v
(1) for (i := 1 to Att)
(2)   if (AGILEindex[i] indexes a ∧ (c.a ∉ AGILEindex[i].probe(v)))
(3)     AGILEindex[i].escalate(c, v)
(4)   endIf
(5) endFor
(6) DataStore[c].a := v



CONCLUSION

Information filtering has matured to a key information processing technology. Various optimization techniques and index structures have been proposed for various kinds of applications, data formats and workloads. This paper picked up this challenge by providing simple extensions to existing index structures for information filtering systems. We called this approach AGILE, for Adaptive Generic Indexing with Local Escalations. The proposed extensions are universal and can, in theory, be applied to any index structure. The key idea is to adapt the accuracy and scope of an index to the workload of a context-aware information filter or, more generally, to information filtering and message routing with state.

REFERENCES
[1] S. Abiteboul. Querying Semi-Structured Data. In ICDT, 1997.
[2] N. Beckmann, H. Kriegel, R. Schneider, and B. Seeger. The R*-Tree: An Efficient and Robust Access Method for Points and Rectangles. In SIGMOD, 1990.
[3] R. R. Brooks and S. Iyengar. Multi-Sensor Fusion: Fundamentals and Applications in Software. Prentice Hall, 1997.
[4] H.-J. Cho, J.-K. Min, and C.-W. Chung. An Adaptive Indexing Technique Using Spatio-Temporal Query Workloads. Information and Software Technology, 46(4):229–241, 2004.
[5] O. Cooper, A. Edakkunni, M. J. Franklin, W. Hong, S. R. Jeffery, S. Krishnamurthy, F. Reiss, S. Rizvi, and E. Wu. HiFi: A Unified Architecture for High Fan-in Systems. In VLDB, 2004.
[6] A. K. Dey. Understanding and Using Context. Personal and Ubiquitous Computing Journal, 5(1):4–7, 2001.
[7] Y. Diao, M. Altinel, M. J. Franklin, H. Zhang, and P. Fischer. Path Sharing and Predicate Evaluation for High-Performance XML Filtering. TODS, 28(4):467–516, 2003.
[8] P. T. Eugster, P. A. Felber, R. Guerraoui, and A.-M. Kermarrec. The Many Faces of Publish/Subscribe. ACM Comput. Surv., 35(2):114–131, 2003.
[9] F. Fabret, H. A. Jacobsen, F. Llirbat, J. Pereira, K. A. Ross, and D. Shasha. Filtering Algorithms and Implementation for Very Fast Publish/Subscribe Systems. In SIGMOD, 2001.




ENHANCEMENT OF FACE RETRIEVAL DESIGNED FOR MANAGING HUMAN ASPECTS
Devarapalli Lakshmi Sowmya1, V. Sreenatha Sarma2
1 M.Tech Student, Dept of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India
2 Associate Professor, Dept of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India

ABSTRACT: Traditional methods of content-based image retrieval make use of image content such as color, texture and gradient to represent images. By combining low-level characteristics with high-level human features, we are able to discover enhanced feature representations and attain improved retrieval results. Recent work shows that automatic attribute recognition achieves sufficient quality on numerous different human attributes. Content-based face image retrieval is closely related to the problem of face recognition, but it focuses on finding suitable feature representations for scalable indexing systems. To leverage promising human attributes automatically identified by attribute detectors for improving content-based face image retrieval, we put forward two orthogonal systems named attribute-enhanced sparse coding and attribute-embedded inverted indexing. Attribute-embedded inverted indexing encodes the human attributes of the chosen query image in a binary signature and provides efficient retrieval in the online stage. Attribute-enhanced sparse coding makes use of global structure and employs several human attributes to build semantic-aware codewords in the offline stage. The proposed indexing system can be effortlessly integrated into an inverted index, consequently maintaining a scalable structure.

Keywords: Content-based image retrieval, Attribute recognition, Feature representations, Binary signature, Semantic-aware codewords.

1. INTRODUCTION: In recent times, automatically detected human attributes have been shown to be promising in various applications. To improve the value of attributes, relative attributes were applied. A multi-attribute space was introduced to standardize confidence scores from various attributes [4]. By means of automatically detected human attributes, excellent performance was achieved on keyword-based face image retrieval, face verification and similar-attribute search. Due to the growth of photo-sharing and social network services, there are strong needs for large-scale content-based retrieval of face images. Content-based face image retrieval is closely related to the problem of face recognition, but it focuses on finding suitable feature representations for scalable indexing systems [8]. As face recognition generally requires substantial computation for dealing with high-dimensional features and generates explicit classification models, it is non-trivial to apply it directly to face retrieval tasks. Even though images obviously have an extremely high-dimensional representation, those within the same class generally lie on a low-dimensional subspace [1]. Sparse coding can make use of the semantics of information and attain promising results in many different applications, for instance image classification and face recognition. Even though these works accomplish significant performance on keyword-based face image retrieval as well as face recognition, we put forward effective ways to merge low-level features and automatically detected facial attributes for scalable face image retrieval [11]. Human attributes are high-level semantic descriptions of an individual. Recent work shows that automatic attribute recognition achieves sufficient quality on numerous different human attributes. Using human attributes, numerous researchers have attained promising results in various applications, for instance face verification, face identification, keyword-based face image retrieval and similar-attribute search [3]. Even though human attributes have proved practical in applications related to face images, it is not trivial to apply them in the content-based face image retrieval task, for several reasons. Human attributes have only limited dimensions. When there are too many people in the dataset, discriminability is lost, as certain people may have comparable attributes [14]. Human attributes are represented as vectors of floating points. This does not work well with existing large-scale indexing methods, and consequently suffers from slow response and scalability concerns when the data size is enormous.

2. METHODOLOGY: Traditional methods of content-based image retrieval make use of image content such as color, texture and gradient to represent images [13]. Traditional methods for face image retrieval typically employ low-level features to represent faces, but low-level characteristics lack semantic meaning, and face images typically show high intra-class variance; thus the retrieval results are unsatisfactory [9]. Given a face image query, content-based face image retrieval tries to discover comparable face images in a huge image database. It is an enabling technology for numerous applications, including automatic face annotation and crime investigation. By combining low-level characteristics with high-level human features, we are able to discover enhanced feature representations and attain improved retrieval results [7]. To deal with large-scale data, mainly two types of indexing systems are employed. Numerous studies have leveraged inverted indexing or hash-based indexing combined with bag-of-words representations and local features to attain well-organized similarity search. Even though these methods can attain high precision on rigid object retrieval, they suffer from a low-recall problem due to the semantic gap [2]. The significance as well as the sheer amount of human face photos makes the manipulation of large collections of face images a significant research problem that facilitates numerous real-world applications. In recent times, some researchers have focused on bridging the semantic gap by discovering semantic image representations to improve the performance of content-based image retrieval [16]. To leverage promising human attributes automatically identified by attribute detectors for improving content-based face image retrieval, we put forward two orthogonal systems named attribute-enhanced sparse coding and attribute-embedded inverted indexing. Attribute-embedded inverted indexing encodes the human attributes of the chosen query image in a binary signature and provides efficient retrieval in the online stage [12]. Attribute-enhanced sparse coding makes use of global structure and employs several human attributes to build semantic-aware codewords in the offline stage. By incorporating these methods, we build a large-scale content-based face image retrieval scheme by taking advantage of low-level features as well as high-level semantics [5]. When a query image arrives, it undergoes the same process to obtain sparse codewords as well as human attributes, and these codewords together with the binary attribute signature are used to retrieve images from the index system. In sparse coding, a signal is a linear combination of the column vectors of a dictionary [15]. As learning a dictionary with a huge vocabulary is lengthy, we can simply use randomly sampled image patches as the dictionary and skip the prolonged dictionary learning step. To consider human attributes in the sparse representation, we first put forward the use of dictionary selection to compel images with different attribute values to use different codewords [10]. For a single human attribute, we separate the dictionary centroids into two subsets; images with positive attribute scores will use one subset, and images with negative attribute scores will employ the other [6]. For cases of numerous attributes, we separate the sparse representation into numerous segments based on the number of attributes, and every segment of the sparse representation is produced depending on a distinct attribute.
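The dictionary-splitting idea described above can be sketched in a few lines. The code below is a deliberate simplification, assuming hard assignment to the best-correlated centroid instead of solving a sparse-coding objective, and a fixed half/half split of the dictionary; the function names and the toy dictionary are our own illustration, not the paper's exact formulation.

```python
def dot(u, v):
    """Inner product of two equal-length vectors (plain lists)."""
    return sum(a * b for a, b in zip(u, v))


def attribute_codeword(feature, dictionary, attr_score):
    """Hard-assignment stand-in for attribute-enhanced coding: the
    dictionary is split in half, and an image may only be assigned a
    codeword from the half matching the sign of its attribute score."""
    half = len(dictionary) // 2
    allowed = range(0, half) if attr_score >= 0 else range(half, len(dictionary))
    # pick the allowed centroid with the largest correlation to the feature
    return max(allowed, key=lambda j: dot(feature, dictionary[j]))


def multi_attribute_codewords(feature, dictionaries, attr_scores):
    """One codeword per attribute segment, so images with different
    attribute values can never share a codeword within a segment."""
    return [attribute_codeword(feature, d, s)
            for d, s in zip(dictionaries, attr_scores)]
```

The same feature thus maps to different codewords for images with opposite attribute values, which is the semantic separation the split is meant to enforce.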

Fig. 1: An overview of the structure of the proposed system

3. RESULTS: Attribute-enhanced sparse coding makes use of global structure and employs several human attributes to build semantic-aware codewords in the offline stage. Attribute-embedded inverted indexing additionally considers local attribute signatures alongside attribute-enhanced sparse coding, as shown in Fig. 1. Attribute-embedded inverted indexing encodes the human attributes of the query image and still ensures efficient retrieval in the online stage. The experimental results illustrate that by means of codewords generated by the proposed coding system, we can decrease the quantization error as well as attain salient gains in face retrieval on public datasets. The proposed indexing system can be effortlessly integrated into an inverted index, consequently maintaining a scalable structure. Certain informative attributes were discovered for face retrieval across various datasets, and these attributes are promising for other applications. Attribute-enhanced sparse codewords would additionally improve the accuracy of content-based face image retrieval.

4. CONCLUSION: Even though human attributes have proved practical in applications related to face images, it is not trivial to apply them in the content-based face image retrieval task, for several reasons. Traditional methods for face image retrieval typically employ low-level features to represent faces, but low-level characteristics lack semantic meaning, and face images typically show high intra-class variance; thus the retrieval results are unsatisfactory. Using human attributes, numerous researchers have attained promising results in various applications, for instance face verification, face identification, keyword-based face image retrieval and similar-attribute search. Numerous studies have leveraged inverted indexing or hash-based indexing combined with bag-of-words representations and local features to attain well-organized similarity search. To consider human attributes in the sparse representation, we first put forward the use of dictionary selection to compel images with different attribute values to use different codewords. The significance as well as the sheer amount of human face photos makes the manipulation of large collections of face images a significant research problem that facilitates numerous real-world applications. In recent times, some researchers have focused on bridging the semantic gap by discovering semantic image representations to improve the performance of content-based image retrieval. The experimental results illustrate that by means of codewords generated by the proposed coding system, we can decrease the quantization error as well as attain salient gains in face retrieval on public datasets.
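A minimal sketch may make the attribute-embedded inverted index concrete. The code below is illustrative only: the class name, the one-bit-per-attribute signature, and the Hamming-radius filter are our own simplifications of the scheme described above, not the paper's exact design.

```python
from collections import defaultdict


def signature(attr_scores):
    """Binary attribute signature: one bit per attribute, set iff the
    detected attribute score is positive."""
    sig = 0
    for i, s in enumerate(attr_scores):
        if s > 0:
            sig |= 1 << i
    return sig


class AttributeInvertedIndex:
    """Inverted index from sparse codeword -> [(image id, signature)].
    At query time, candidates are ranked by shared codewords, but only
    if their signature lies within a Hamming radius of the query's."""
    def __init__(self, hamming_radius=1):
        self.postings = defaultdict(list)
        self.radius = hamming_radius

    def add(self, image_id, codewords, attr_scores):
        sig = signature(attr_scores)
        for w in codewords:
            self.postings[w].append((image_id, sig))

    def query(self, codewords, attr_scores):
        qsig = signature(attr_scores)
        votes = defaultdict(int)
        for w in codewords:
            for image_id, sig in self.postings[w]:
                # embedded attribute check: cheap bitwise Hamming filter
                if bin(sig ^ qsig).count("1") <= self.radius:
                    votes[image_id] += 1
        return sorted(votes, key=votes.get, reverse=True)
```

Because the signature is checked while scanning postings, images with incompatible attributes are discarded without any extra index structure, which is what keeps the scheme scalable.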




REFERENCES:
[1] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, “Robust face recognition via sparse representation,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2009.
[2] B.-C. Chen, Y.-Y. Chen, Y.-H. Kuo, and W. H. Hsu, “Scalable face image retrieval using attribute-enhanced sparse codewords,” 2013.
[3] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, “Labeled faces in the wild: A database for studying face recognition in unconstrained environments,” University of Massachusetts, Amherst, Tech. Rep. 07-49, October 2007.
[4] Y.-H. Kuo, H.-T. Lin, W.-H. Cheng, Y.-H. Yang, and W. H. Hsu, “Unsupervised auxiliary visual words discovery for large-scale image object retrieval,” IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[5] J. Yang, K. Yu, Y. Gong, and T. Huang, “Linear spatial pyramid matching using sparse coding for image classification,” IEEE Conference on Computer Vision and Pattern Recognition, 2009.
[6] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar, “Describable visual attributes for face verification and image search,” IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Special Issue on Real-World Face Recognition, Oct 2011.
[7] O. Chum, J. Philbin, J. Sivic, M. Isard, and A. Zisserman, “Total Recall: Automatic query expansion with a generative feature model for object retrieval,” IEEE International Conference on Computer Vision, 2007.
[8] A. Torralba, K. P. Murphy, W. T. Freeman, and M. A. Rubin, “Context-based vision system for place and object recognition,” International Conference on Computer Vision, 2003.
[9] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong, “Locality-constrained linear coding for image classification,” IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[10] Z. Wu, Q. Ke, J. Sun, and H.-Y. Shum, “Scalable face image retrieval with identity-based quantization and multi-reference reranking,” IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[11] W. Scheirer, N. Kumar, P. Belhumeur, and T. Boult, “Multi-attribute spaces: Calibration for attribute fusion and similarity search,” IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[12] J. Wang, S. Kumar, and S.-F. Chang, “Semi-supervised hashing for scalable image retrieval,” IEEE Conference on Computer Vision and Pattern Recognition, 2010.
[13] M. Douze, A. Ramisa, and C. Schmid, “Combining attributes and Fisher vectors for efficient image retrieval,” IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[14] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” IEEE Conference on Computer Vision and Pattern Recognition, 2006.
[15] H. Jegou, M. Douze, and C. Schmid, “Hamming embedding and weak geometric consistency for large scale image search,” European Conference on Computer Vision, 2008.
[16] W. Scheirer, N. Kumar, K. Ricanek, T. E. Boult, and P. N. Belhumeur, “Fusing with context: a Bayesian approach to combining descriptive attributes,” International Joint Conference on Biometrics, 2011.




Design and Implementation of Secure Cloud Systems Using Meta Cloud
Perumalla Gireesh
M.Tech 2nd year, Dept. of CSE, ASCET, Gudur, India
Email: perum.giri7@gmail.com

Abstract – Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently in its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The Meta cloud is based on a combination of existing tools and concepts and provides a convenient way to organize private clouds. It considers only the vendor lock-in problem across different cloud vendors; for that, the Meta cloud provides an abstraction over existing schemes to solve this problem effectively. But it does not consider the users' data privacy when transforming to the Meta cloud. To address this problem, we introduce Business Continuity Management (BCM), defined as a holistic management process that identifies potential threats to an organization and reduces the impacts of data leakage issues.

Index terms – Meta cloud, Cloud Privacy, private clouds, security.

I. INTRODUCTION

With the rapid development of processing and storage technologies and the success of the Internet, computing resources have become cheaper, more powerful and more ubiquitously available than ever before. This technological trend has enabled the realization of a new computing model called cloud computing, in which resources (e.g., CPU and storage) are provided as general utilities that can be leased and released by users through the Internet in an on-demand fashion.




The cloud computing paradigm has achieved widespread adoption in recent years. Its success is due largely to customers’ ability to use services on demand with a pay-as-you-go pricing model, which has proved convenient in many respects. Low costs and high flexibility make migrating to the cloud compelling. Despite its obvious advantages, however, many companies hesitate to “move to the cloud,” mainly because of concerns related to service availability, data lock-in, data security and legal uncertainties.

A previous study considers the data lock-in problem and provides a convenient way to solve it using the Meta cloud. The problem is that once an application has been developed based on one particular provider’s cloud services and using its specific API, that application is bound to that provider; deploying it on another cloud would usually require completely redesigning and rewriting it. Such vendor lock-in leads to strong dependence on the cloud service operator. The Meta cloud framework contains the following components: the Meta cloud API, the Meta cloud proxy, resource monitoring and so on. But sometimes, when transforming a cloud into a Meta cloud, data security issues are raised which were not considered in the previous study.

a) The Key Challenges

Being virtual in concept, the cloud environment generates several questions in the minds of users with respect to confidentiality, integrity and availability. The key challenges for the adoption of the cloud are as given below.

Assurance of privacy and security

Cloud users are wary of the security and privacy of their data. The multi-tenant environment of the cloud is causing concerns amongst enterprises. As the same underlying hardware may be used by other companies and competitors, it may lead to a breach of privacy. Moreover, any data leakage or virus attack would have a cascading effect on multiple organizations.

Reliability and availability

Instances of outages at the facilities of the cloud service providers have raised concerns over the reliability of cloud solutions. Enterprises are recognizing that they would have to deal with some level of failures while using commodity-based solutions. Also, the cloud providers cannot give an assurance on the uptime of their external internet connection, which could shut off all access to the cloud.

Data security is a key concern

There are a number of concerns surrounding the adoption of the cloud, especially because it is a relatively new concept. Assuring customers of data security would be one of the biggest challenges for cloud vendors to overcome.




Figure 1 shows the chart of the key barriers to cloud adoption.

Figure 1 Chart of the key barriers

To address this problem, this paper introduces Business Continuity Management (BCM), defined as a holistic management process that identifies potential threats to an organization and reduces the impacts of data leakage issues. This contains the following stages: project initiation, understanding the organization, BC strategies, developing the business continuity plan, and applying the BCP. Business continuity planning is shown in Figure 2.

Figure 2 Business Continuity Management Overview

II. PROPOSED WORK

In this section we introduce a novel solution, Business Continuity Management (BCM), and provide an overview of it.

a) Business Continuity Management (BCM)

The BCMS will use the Plan-Do-Check-Act (PDCA) approach. The PDCA approach can be applied to every element of the BCM lifecycle.

Business Continuity leads (BC leads)

Leads for business continuity management will be appointed in each directorate, regional and area team and hosted bodies within the strategy. BC leads will perform the following:

 Promote business continuity management
 Receive BC training
 Facilitate the completion of BIAs
 Develop BCPs
 Ensure that BCPs are available during incident response
 Ensure that incident responders receive training appropriate to their role
 Ensure that plans are tested, reviewed and updated
 Participate in the review and development of the BCMS.




Business Continuity Working Groups

Working groups may be established to:

 Take control of resource allocation
 Set priorities
 Set objectives in line with the responsibilities
 Establish the measures that will be used to assure the BCMS remains current and relevant
 Report to top management on the performance of the BCMS.

Emergency preparedness, resilience and response (EPRR)

The business continuity programme will have close links to EPRR because both disciplines aim to ensure the organization is resilient and able to respond to threats and hazards. The BCMS described in this strategy will ensure that the organization is able to manage risks and incidents that directly impact on its ability to deliver business as usual.

Assurance

The National Support Centre will maintain an overview of the BCMS. BC leads will be required to report on progress within their areas.

BCM Documentation

The National Support Centre will be given access to related documentation by areas within the scope, such as BCPs, training records, incident records and exercises, to facilitate the sharing of good practice throughout the organization. Business continuity management has the following stages:

Stage 1 – Understanding the Organization

Understanding the organization’s business is essential in developing an appropriate BCM programme. A detailed understanding of which processes are essential to ensure continuity of prioritized activities to at least the minimum business continuity objective level will be achieved by undertaking a BIA. The BIA will incorporate continuity requirements analysis, which may include the staff skills, competencies and qualifications required for prioritized activities. BIAs will describe the following:

 The prioritized activities of departments/teams;
 The impact that incidents will have on prioritized activities;
 How long we could continue using the emergency measures before we would have to restart our normal activities;
 A description of the emergency measures we have in place to deal with an incident;
 The threats to the continued delivery of priority activities.

Stage 2 – Determining BCM strategy

BIAs will create a picture of the organization’s dependencies, vulnerabilities and business continuity risks. This information will be used to:

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

296

www.iaetsd.in



 To assist in deciding the scope of the BCM programme.
 To provide the information from which continuity options can be identified and evaluated.
 To assist the preparation of detailed plans.

Decisions that determine business continuity strategies will be made at an appropriate level and will consider:
 Recovery
 People
 Premises
 Technology and information
 Suppliers and partners

Stage 3 – Developing and implementing a BCM response
This stage considers the incident reporting structure, business continuity plans, and prioritized activity recovery plans.

Incident Reporting Structure
There are various sources of information pertaining to business continuity threats, such as severe weather, flooding and so on. The impact of incidents will vary, so it is important that the response to an incident is appropriate to the level of impact and remains flexible as the situation develops. Business continuity plans will be based on different levels of response and escalation.

Business Continuity Plans
Various plans will continue to be developed to identify the actions that are necessary and the resources that are needed to enable business continuity. Plans will be based upon the risks identified, but will allow for flexibility.

Prioritized activity recovery plans (PARPs)
Priority activities are those to which priority must be given following an incident in order to mitigate the impact. Activities of the highest priority are those that, if disrupted, impact the organization to the greatest extent and in the shortest possible time.

Stage 4 – Exercise, Audit, Maintaining and reviewing

Exercises
It is essential that regular BC exercises are carried out to ensure that plans are tested and continue to be effective and fit for purpose, as operational processes and technology configurations are constantly changing. Exercises will raise awareness of BCM procedures.

Audit
 To validate compliance with the organization’s BCM policies and standards;
 To review the organization’s BCM solutions;
 To validate the organization’s BCM plans;
 To verify that appropriate exercising and maintenance activities are taking place.
 To


highlight deficiencies and issues and ensure their resolution.

Management Review
An annual review of this strategy will be undertaken. However, events may prompt more frequent re-examination, such as:
 A substantive BIA revision which identifies changes in processes and priorities;
 A significant change in the risk assessment and/or threat appetite of the organization;
 New regulatory or legislative requirements;
 Business continuity exercises.

Embedding BCM in the Organization’s culture
BCM must be an accepted management process, fully endorsed and actively promoted by directors. The communication of high-level endorsement to all is essential. There are various ways in which this can be achieved:
 Business continuity will be part of the organization’s induction for new starters;
 Participation in BIA and writing BCPs;
 Communication of risks, alerts and incidents;
 Business continuity information will be available on the staff intranet;
 Business continuity training.

CONCLUSION
In this paper we introduce a novel solution that provides a convenient way to identify various security threats. The paper presents a survey of Business Continuity Management (BCM) processes intended to avoid security risks.

REFERENCES
[1] ISO 22301 Societal Security - Business Continuity Management Systems - Requirements.
[2] NHS England Core Standards for Emergency Preparedness, Resilience and Response (EPRR).
[3] J. Skene, D.D. Lamanna, and W. Emmerich, “Precise Service Level Agreements,” Proc. 26th Int’l Conf. Software Eng. (ICSE 04), IEEE CS Press, 2004, pp. 179-188.
[4] Q. Zhang, L. Cheng, and R. Boutaba, “Cloud Computing: State-of-the-Art and Research Challenges,” J. Internet Services and Applications, vol. 1, no. 1, 2010, pp. 7-18.
[5] The Route Map to Business Continuity Management: Meeting the Requirements of ISO 22301.


AUTHORS

Mr. P. Gireesh received the B.Tech degree in computer science & engineering from Vaishnavi Institute of Technology, Tirupathi, Jawaharlal Nehru Technological University Anantapur, in 2011, and the M.Tech degree in computer science engineering from Audisankara College of Engineering and Technology, Nellore, Jawaharlal Nehru Technological University Anantapur, in 2014. He has participated in national-level paper symposiums in different colleges. His interests include Computer Networks, Mobile Computing, Network Programming, and System Hardware. He is a member of the IEEE.


ASYNCHRONOUS DATA TRANSACTIONS ON SoC USING FIFO BETWEEN ADVANCED EXTENSIBLE INTERFACE 4.0 AND ADVANCED PERIPHERAL BUS 4.0

A VIJAY KUMAR*, T VINAYSIMHA REDDY**, M SANTHOSHI***
* ECE DEPARTMENT, MRCET, INDIA. ** ECE DEPARTMENT, MRCET, INDIA. *** ECE DEPARTMENT, CVRCC, INDIA.

ABSTRACT: Recently, VLSI technology has improved significantly and more transistors can be integrated into a chip. This makes the ideal of system-on-a-chip (SoC) more of an achievable goal than an abstract dream. The on-chip bus (OCB), which connects silicon intellectual property (SIP) blocks in an SoC, plays a key role in the system performance. The Advanced Microcontroller Bus Architecture (AMBA) bus protocol has been proposed by the ARM community to meet the uneven demands of integration. Recently, a new generation of packet-based OCB protocol called Advanced eXtensible Interface 4.0 (AXI4.0) has been proposed. The AMBA AXI4.0 protocol supports interfacing of 16 masters and 16 slaves and supports removal of locked transactions. This paper presents a project aimed at data transactions on an SoC from the high-speed AXI4.0 to the low-speed APB4.0 using an asynchronous FIFO. An asynchronous FIFO has been chosen to avoid a complex handshaking mechanism. By setting the write pointer, read pointer, empty flag and full flag for the read and write operations, data is transmitted between the two buses. The design is modelled in the Verilog hardware description language (HDL), and simulation results for read and write operations on data and address are shown in the ISE simulator.

Keywords - AMBA Bus Protocol, APB4.0, AXI4.0, Asynchronous FIFO, FPGA Vertex 3, SoC, Verilog HDL, and ISE Simulator.

1. INTRODUCTION
Recently, due to the miniaturization of semiconductor process technology, and because constant customization is required for survival in the current market, VLSI technology has improved significantly and more transistors can be integrated into a chip. This makes the ideal of a system-on-a-chip (SoC), which may consist of all intellectual property blocks on a single chip substrate, achievable. IP reuse is an inevitable choice under size constraints.
A SoC platform usually consists of various design components dedicated to specified application domains. SoC buses are used to interconnect one Intellectual Property (IP) core to another; they may reside in a Field Programmable Gate Array (FPGA). The AMBA (Advanced Microcontroller Bus Architecture) on-chip interconnect system is an established open specification that details a strategy for the interconnection and management of the functional blocks that make up an SoC. AMBA was introduced in 1996 by ARM Limited. AMBA has four generations of buses; the generations and their interfaces are given below:

 AMBA specification: Advanced System Bus (ASB), Advanced Peripheral Bus (APB)
 AMBA 2.0: Advanced System Bus (ASB), Advanced Peripheral Bus (APB2 or APB)
 AMBA 3.0: Advanced eXtensible Interface (AXI v1.0), Advanced High-performance Bus Lite (AHB-Lite v1.0), Advanced Peripheral Bus (APB3 v1.0), Advanced Trace Bus (ATB v1.0)
 AMBA 4.0: AXI Coherency Extensions (ACE), AXI Coherency Extensions Lite (ACE-Lite), Advanced eXtensible Interface 4 (AXI4), Advanced eXtensible Interface 4 Lite (AXI4-Lite), Advanced eXtensible Interface 4 Stream (AXI4-Stream v1.0), Advanced Trace Bus (ATB v1.1), Advanced Peripheral Bus (APB4 v2.0)

AMBA defines both a bus specification and a technology-independent methodology for designing high-integration embedded controllers. The Advanced eXtensible Interface (AXI) was introduced in AMBA 3.0 as the successor on-chip bus protocol to the AHB in AMBA 2.0. The AXI4.0 protocol is a high-performance, high-bandwidth bus that includes a number of features making it suitable for high-speed sub-micron interconnect.

1.1 AMBA AXI4.0 Architecture
AMBA AXI4 supports burst and unaligned data transfers. An AMBA AXI4.0 system can interface 16 masters to 16 slaves, and each master and slave has its own 4-bit identification tag. An AMBA AXI4 system consists of master, slave and bus interconnect components. AXI4.0 defines five channels: write address, write data, read data, read address, and write response. The AXI4.0 protocol supports the following mechanisms:
 Burst and unaligned data transfers, and updated write response acknowledgment.
 Burst data widths of 8, 16, 32, 64, 128, 256, 512 or 1024 bits.
 Updated AWCACHE and ARCACHE signalling details.



The AMBA AXI4.0 specification satisfies four key requirements:
 To enable high-speed embedded microcontroller products with one or more CPUs or signal processors.
 To allow system macro-cells and highly reusable peripherals to be migrated across a diverse range of IC processes, and to be appropriate for full-custom, standard-cell and gate-array technologies.
 To improve processor independence, providing a development road-map for advanced cached CPU cores and the development of peripheral libraries.
 To encourage modular system design and efficient on-chip and off-chip communication.

The AMBA 4.0 specification defines five buses/interfaces:
 Advanced eXtensible Interface (AXI)
 Advanced High-performance Bus (AHB)
 Advanced System Bus (ASB)
 Advanced Peripheral Bus (APB)
 Advanced Trace Bus (ATB)


Fig.1 Block diagram of AMBA AXI4.0 bus interconnect.

Fig.2 Block diagram of AXI4.0 master

To perform a write address and data operation, the transaction is initiated with the concatenated input [awaddr, awid, awcache, awlock, awprot, awburst]. Along the same lines, for read address and data operations the concatenated input is [araddr, arid, arcache, arlock, arprot, arburst]. The addresses of read and write operations are validated by VALID signals and sent to the interface unit. The acknowledgement signals of the slave become incoming signals to the master.

Fig.3 Write address and data burst

The write operation starts when the master sends an address and control information on the write address channel, as shown in Fig.3. The master then sends each item of write data over the write data channel. The master holds the VALID signal low until the write data is available on the write data channel. The WLAST signal goes HIGH with the master’s last data item. When the slave has accepted all the data items, it drives a write response signal BRESP[1:0] back to the master to indicate that the write transaction is complete. OKAY, EXOKAY, SLVERR, and DECERR are the allowable responses from the slave to the master.

Fig.4 Block diagram of AXI4.0 slave

When the read address appears on the address bus, the data transfer occurs on the read data channel, as shown in Fig.5. The slave holds the VALID signal LOW until the read data is present on the read data channel. For the last data transfer of the burst, the slave asserts the RLAST signal to show that the last data item is being moved. The status of the read transfer is indicated by the RRESP[1:0] signal; OKAY, EXOKAY, SLVERR, and DECERR are the allowable responses.


Fig.5 Read address and data burst

1.2 AMBA AXI4.0 Interconnect
The interconnect block consists of a decoder and an arbiter. When more than one master initiates a transaction simultaneously, the arbiter assigns the priority to access the bus. The address sent by any master is decoded by the decoder, and control passes to one slave out of 16, to which the address is also routed. The AMBA AXI4.0 interface decoder is a centralized block.

Fig.6 Signals used to design AMBA AXI4.0 modules

1.3 Finite state machine of AXI4.0
Write and read operations between the channels can be explained using the finite state machine below.

Fig.7 Finite state machine of AXI4.0 read and write operations.

2. PROPOSED WORK
The AXI4.0 to APB4.0 bridge provides an interface between the high-performance AXI domain and the low-power APB domain. In this proposed work, data transactions are carried out between the high-speed AXI4.0 bus and the low-power APB4.0 bus using an asynchronous FIFO, which acts as the interface between them. Read and write transfers on the AXI4.0 bus are converted into corresponding transfers on the APB4.0.

2.1 Asynchronous FIFO
Asynchronous FIFOs are widely used in the computer networking industry to receive data at one clock and transmit it at another. An asynchronous FIFO has two different clocks, one for read and one for write. Issues arise when passing data across asynchronous clock domains: data could be over-written, and hence lost, when the write clock is faster than the read clock. To overcome these problems, control signals such as the write pointer, read pointer, and empty and full flags are required.

Fig.8 Asynchronous FIFO state machine

2.1.1 Functional description of the Asynchronous FIFO
Functionally, the FIFO works as follows. At reset, the write and read pointers are both at the null value. This is the empty condition of the FIFO: empty is pulled high (we use the active-high convention) and full is low. When empty, read operations are stopped, so only write operations into the FIFO are possible. A write loads location 0 of the RAM (the array) and increments the write pointer. This causes the empty flag to go LOW. Suppose that subsequent cycles only write to the FIFO, with no reads; then there will be a certain time when


the write pointer will equal array_size - 1, meaning the last location in the array is the next location to be written. A write in this condition wraps the write pointer back to 0 and sets full. Note that in this condition the write and read pointers are equal, yet the FIFO is full, not empty. This implies that the full/empty decision depends on which operation (read or write) made the pointers equal, not on the pointer values alone: when the pointers become equal because of a reset or a read, the FIFO is empty; when the cause is a write, the FIFO is full. Now suppose we begin multiple reads. The read pointer is incremented by each read operation until it equals array_size - 1. At this point, the data from this location of the RAM is available on the FIFO output bus. Succeeding logic reads this data and provides a read signal (active for one clock), causing the read pointer to become equal to the write pointer again (after both pointers have completed one cycle through the array). This time, empty is set, since the pointers became equal due to a read operation.
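The full/empty bookkeeping described above can be sketched in software. The following Python model is illustrative only (the class and method names are ours, not part of any Verilog source), but it captures the rule that pointer equality means empty after a reset or read and full after a write.

```python
class AsyncFifoModel:
    """Behavioural sketch (in Python, not Verilog) of the FIFO
    bookkeeping described above. Key rule: when the pointers are
    equal, the FIFO is empty if the equality was caused by a reset
    or a read, and full if it was caused by a write."""

    def __init__(self, size):
        self.size = size
        self.mem = [None] * size
        self.wptr = 0          # write pointer (write-clock domain)
        self.rptr = 0          # read pointer (read-clock domain)
        self.empty = True      # active-high convention, as in the text
        self.full = False

    def write(self, data):
        if self.full:
            return False       # write blocked: data would be lost
        self.mem[self.wptr] = data
        self.wptr = (self.wptr + 1) % self.size
        self.empty = False
        self.full = (self.wptr == self.rptr)   # equality caused by a write
        return True

    def read(self):
        if self.empty:
            return None        # read blocked at empty
        data = self.mem[self.rptr]
        self.rptr = (self.rptr + 1) % self.size
        self.full = False
        self.empty = (self.rptr == self.wptr)  # equality caused by a read
        return data
```

For example, filling a 4-deep model with four writes leaves the pointers equal and full set; draining it with four reads makes them equal again with empty set.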


Fig.9 Circuit for write and read pointers of FIFO

2.2 AXI4.0 Slave Interface
The AXI4.0 slave interface module provides a bi-directional slave interface to the AXI. The AXI address and data bus widths are fixed at 32 bits and 1024 bits respectively. When both write and read transfers are requested simultaneously on AXI4.0, the read request is given higher priority than the write request. This block also contains the data-phase time-out logic for generating an OKAY response on the AXI interface when APB4.0 does not respond.

2.3 APB4.0 Master Interface
The APB4.0 module provides the master interface on the APB. This interface can be APB2, APB3 or APB4, chosen by setting the generic C_M_APB_PROTOCOL. When C_M_APB_PROTOCOL = apb4, the M_APB_PSTRB and M_APB_PPROT signals are driven at the APB interface. The APB address and data bus widths are fixed at 32 bits.

2.4 Mechanisms of AXI4.0 & APB4.0
2.4.1 Flow chart representation
A flow chart is a graphical representation of a program in relation to its sequence of functions, as distinct from the data it processes. The flow of data through the FIFO between the two protocols is as follows:
 If the FIFO is empty, the empty flag signals the write pointer to write data into the FIFO, and the write pointer is then incremented.
 If the FIFO is full, the full flag signals the read pointer to read data out of the FIFO to the APB master.

Fig.10 Flowchart of general FIFO

2.4.2 Finite state machine representation of the Asynchronous FIFO
An FSM is a mathematical tool used to design system programs and digital circuits. It is a behavioural model composed of a finite number of states and the transitions among them, similar to a flow graph in which one can inspect the way logic runs when certain conditions are met. It is considered an abstract machine that is in only one state at a time; the current state is the state present at a particular time. Triggered by an event or condition, the machine can change from one state to another; this change is called a transition. According to the AXI4.0 specification, the read address channel, write address channel, read data channel and write data channel are completely independent. A read and a write request may be issued simultaneously from AXI4.0; the asynchronous AXI4.0-to-APB bridge gives higher priority to the read request and lower priority to the write request. That is, when both write and read requests are valid, the write request is initiated on APB4.0 only after the read has been serviced on APB4.0.
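The read-over-write priority rule can be sketched as a tiny state machine. This is a behavioural illustration with state names of our own choosing, not the actual RTL state encoding of the bridge.

```python
# Illustrative states for a simplified bridge arbiter FSM; these
# names are assumptions for the sketch, not the real RTL encoding.
IDLE, READ, WRITE = "IDLE", "READ", "WRITE"

def next_state(state, read_valid, write_valid):
    """One transition of the arbiter: from IDLE, a valid read always
    wins over a simultaneously valid write; once a transfer completes
    the FSM returns to IDLE, where a deferred write can be taken."""
    if state == IDLE:
        if read_valid:
            return READ        # read request gets priority
        if write_valid:
            return WRITE
        return IDLE
    return IDLE                # transfer complete: back to IDLE
```

With both requests valid, the machine serves the read first and only picks up the write on the following pass through IDLE, which is exactly the ordering described above.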


Fig.11 FSM of Asynchronous FIFO

2.5 Advanced Peripheral Bus (APB4.0) Protocol
The Advanced Peripheral Bus (APB) is part of the AMBA architecture protocol family. It is optimized for minimal power consumption and reduced interface complexity, and defines a low-cost interface. To enable easy integration of APB4.0 peripherals into any design flow, all signal transitions relate only to the rising edge of the clock. At least two cycles are required for every APB4.0 data transfer. The APB can interface with the AMBA Advanced High-performance Bus Lite (AHB-Lite) and the AMBA AXI4.0 Advanced eXtensible bus.

2.6 Handshake Mechanism of AXI4.0 & APB4.0
In the AXI4.0 specification, VALID and READY signals are present on every channel for the handshake mechanism. The source asserts VALID when the control information or data is available, and the destination asserts READY when it can accept it. A transfer takes place only when both VALID and READY are asserted, at the positive edge of the clock. Note that when the source asserts VALID, the corresponding control information or data must also be available at the same time. The source and destination therefore need registered inputs to sample the READY and VALID signals, together with combinational output circuits; in short, the AXI4.0 protocol suits registered inputs and combinational outputs. The APB4.0 bridge buffers address, control and data from AXI4.0, drives the APB4.0 peripherals, and returns data and response signals to AXI4.0. Using an internal address map, it decodes the address to select the peripheral. The bridge is designed to operate when APB4.0 and AXI4.0 have independent clock frequency and phase, by using the asynchronous FIFO. For every AXI4.0 channel, invalid commands are not forwarded and an error response is generated: the APB bridge generates a DECERR response through the response channel (read or write) if the accessed peripheral does not exist; if the target peripheral exists but asserts PSLVERR, the bridge gives a SLVERR response.
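The VALID/READY rule above can be sketched as follows: a beat transfers only on cycles where both signals are high. This is a behavioural illustration of the handshake, not RTL, and the function name is our own.

```python
def transfer_cycles(valid, ready):
    """Given per-cycle VALID and READY sequences (1/0), return the
    indices of the clock cycles on which a transfer actually occurs:
    exactly those cycles where both signals are asserted."""
    return [i for i, (v, r) in enumerate(zip(valid, ready)) if v and r]
```

For example, if the source holds VALID for cycles 0-2 but the destination only raises READY from cycle 1, beats move on cycles 1 and 2 only; asserting READY alone, with VALID low, moves nothing.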

Fig.12 Block diagram of signal connections between AXI4.0 and APB4.0

3. SIMULATION AND SYNTHESIS RESULT


Fig.13 Channels in AXI 4.0




Fig.14 Burst data transactions between AXI4.0 master and slave

Fig.18 AXI4.0 to APB4.0 RTL schematic view

Fig.15 AXI4.0 RTL schematic view

Fig.19 Top module RTL schematic view of AXI4.0 to APB4.0

Fig.16 AXI 4.0 Technology schematic view

Fig.20 AXI 4.0 to APB4.0 tech schematic view

Fig.17 Data transaction between AXI4.0 and APB4.0

3. CONCLUSION
The implementation of asynchronous data transactions on an SoC using a FIFO between the Advanced eXtensible Interface and the Advanced Peripheral Bus has been designed. Verilog HDL has been used to implement the bridge between them.


The bridge defines a low-cost interface optimized for minimal power consumption and reduced interface complexity.

4. FEATURES
 32-bit AXI4.0 slave and APB4.0 master interfaces.
 PCLK clock domain completely independent of the AXI4.0 clock domain.
 Support for up to 16 APB4.0 peripherals.
 Burst length of 32 bits.
 The PREADY signal is supported, which translates to wait states on AXI.
 An error on any transfer results in SLVERR as the AXI4.0 read/write response.

5. FUTURE SCOPE
 The design can be extended by developing a complete system around it. This work provides an ideal platform for enhancement or further development of the bus bridge design between the AXI4.0 and APB4.0 protocols.
 When read and write requests occur simultaneously, the bridge gives higher priority to the read request; this can create a race condition in the APB4.0 bridge. For example, suppose the FIFO has only one data item to read. If read and write requests occur simultaneously, the bridge first executes the read request and reads the single data item from the FIFO, leaving the read-data FIFO empty. If read and write requests then occur simultaneously again, the bridge again executes the read request, but there is no data item in the read-data FIFO, so the transaction fails. This situation is called a race condition.
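The race scenario above can be played through with a small sketch. The arbitration function below is an illustration of the described failure mode, not the bridge's actual logic; the names are ours.

```python
def bridge_step(read_fifo, read_req, write_req):
    """One arbitration step of the race sketched above: a valid read
    always wins over a simultaneously valid write. Returns a pair
    (operation, success); a read attempted on an empty read-data
    FIFO fails, which is the described race condition."""
    if read_req:
        if read_fifo:
            read_fifo.pop(0)           # consume the single data item
            return "read", True
        return "read", False           # no data left: transaction fails
    if write_req:
        return "write", True
    return None, True
```

With one item in the read-data FIFO, the first simultaneous read+write succeeds as a read and drains the FIFO; a second simultaneous pair again selects the read, which now fails for lack of data.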


6. ACKNOWLEDGMENT
I, A Vijay Kumar, would like to thank Assistant Prof. T. Vinaysimha Reddy, who guided this work throughout to its effective and successful completion, and Prof. P. Sanjeeva Reddy, HOD, ECE Department, along with the other professors, for extending their help and support, giving technical ideas about the paper, and motivating the completion of the work.

7. REFERENCES
[1] ARM, AMBA Specifications (Rev 2.0). [Online]. Available at http://www.arm.com, 1999.
[2] ARM, AMBA AXI Protocol Specification (Rev 2.0). Available at http://www.arm.com, March 2010.
[3] Shaila S Math, Manjula R B, "Survey of system on chip buses based on industry standards", Conference on Evolutionary Trends in Information Technology (CETIT), Belgaum, Karnataka, India, pp. 52, May 2011.
[4] "Design and Implementation of APB Bridge based on AMBA 4.0" (IEEE 2011), ARM Limited.
[5] http://www.arm.com/products/system-ip/amba/ambaopenspecifications.php
[6] ARM, "AMBA Protocol Specification 4.0", www.arm.com, 2010.
[7] LogiCORE IP AXI to APB Bridge (v1.00a), DS788, June 22, 2011, Product Specification.
[8] Clifford E. Cummings, "Simulation and Synthesis Techniques for Asynchronous FIFO Design", Sunburst Design, Inc., SNUG San Jose 2002, Rev 1.2; "FIFO Architecture, Functions, and Applications", SCAA042A, November 1999.
[9] Lahiri, K., Raghunathan, A., Lakshminarayana, G., "LOTTERYBUS: a new high-performance communication architecture for system-on-chip designs," in Proceedings of the Design Automation Conference, 2001.
[10] Ying-Ze Liao, "System Design and Implementation of AXI Bus", National Chiao Tung University, October 2007.

AUTHOR BIOGRAPHY

A VIJAY KUMAR received the B.Tech degree in Electronics and Communication Engineering from a JNTU-affiliated college in 2009 and is pursuing the M.Tech in VLSI & Embedded Systems from a JNTU-affiliated college.

T VINAYSIMHA REDDY received the M.Tech in VLSI System Design from a JNTU-affiliated college in 2010. His areas of interest are embedded systems and VLSI design. He is working as an Asst. Prof. in a JNTU-affiliated college, Hyderabad, India.

M. Santhoshi received the M.Tech in VLSI System Design from a JNTU-affiliated college in 2012. Her areas of interest are image and video processing. She is working as an Asst. Prof. in a JNTU-affiliated college, Hyderabad, India.


Appliances of Harmonizing Model in Cloud Computing Environment

Abstract
We introduce a system that uses virtualization technology to assign data center resources dynamically based on application demands, and that supports green computing by optimizing the number of servers in use. We introduce the concept of "skewness" to measure the unevenness in the multidimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. We develop a set of heuristics that prevent overload in the system effectively while saving energy. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance. This article also introduces a better load balance model for the public cloud, based on the cloud partition concept, with a switch mechanism to choose different strategies for different situations. The algorithm applies game theory to the load balancing strategy to improve efficiency in the public cloud environment.
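The skewness metric named in the abstract, and the related hot-spot "temperature" used later in the paper, can be rendered directly in code. The following Python sketch is illustrative (the function names are ours) and follows the definitions given in Section 3: skewness is the square root of the summed squared deviations of each resource's utilization from the server's mean utilization, and temperature sums the squared excess over the hot threshold across overloaded resources only.

```python
from math import sqrt

def skewness(utils):
    """skewness(p) = sqrt( sum_i (r_i / r_avg - 1)^2 ), computed over
    the per-resource utilizations of one server, where r_avg is the
    mean utilization of those resources."""
    r_avg = sum(utils) / len(utils)
    return sqrt(sum((r / r_avg - 1.0) ** 2 for r in utils))

def temperature(utils, hot_thresholds):
    """Square sum of utilization beyond the per-resource hot
    threshold, counting overloaded resources only; a server that is
    not a hot spot therefore has temperature zero."""
    return sum((r - t) ** 2 for r, t in zip(utils, hot_thresholds) if r > t)
```

A balanced server (for example, CPU and memory both at 50 percent) has zero skewness; the more uneven the utilizations, the larger the value, which is why placing a CPU-heavy VM on a memory-heavy server lowers skewness.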

1. Introduction
We present the design and implementation of an automated resource management system that achieves a good balance between the two goals. We make the following contributions: We develop a resource allocation system that can avoid overload in the system effectively while minimizing the number of servers used. We introduce the concept of "skewness" to measure the uneven utilization of a server; by minimizing skewness, we can improve the overall utilization of servers in the face of multidimensional resource constraints. We design a load prediction algorithm that can capture the future resource usages of applications accurately without looking inside the VMs. The algorithm can capture the rising trend of resource usage patterns and helps reduce the placement churn significantly.

Since the job arrival pattern is not predictable and the capacities of the nodes in the cloud differ, workload control is crucial for the load balancing problem, to improve system performance and maintain stability. Load balancing schemes can be either static or dynamic, depending on whether the system dynamics are important. Static schemes do not use the system information and are less complex, while dynamic schemes bring additional costs to the system but can adapt as the system status changes. A dynamic scheme is used here for its flexibility. The model has a main controller and balancers to gather and analyze the information; thus, the dynamic control has little influence on the other working nodes. The system status then provides a basis for choosing the right load balancing strategy.

The load balancing model given in this article is aimed at the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations. This model therefore divides the public cloud into several cloud partitions. When the environment is very large and complex, these divisions simplify the load balancing. The cloud has a main manager that chooses the suitable partition for arriving jobs, while the balancer for each cloud partition chooses the best load balancing strategy.

2. Related Work
There have been many studies of load balancing for the cloud environment. Load balancing in cloud computing was described in a white paper written by Adler, who


introduced the tools and techniques commonly used for load balancing in the cloud. However, load balancing in the cloud is still a new problem that needs new architectures to adapt to many changes. Chaczko et al. described the role that load balancing plays in improving performance and maintaining stability. There are many load balancing algorithms, such as Round Robin, the Equally Spread Current Execution (ESCE) algorithm, and the Ant Colony algorithm. Nishant et al. used the ant colony optimization method for load balancing across nodes. Randles et al. gave a comparative analysis of some algorithms in cloud computing by examining performance time and cost. They concluded that the ESCE algorithm and the throttled algorithm are better than the Round Robin algorithm. Some of the classical load balancing methods are similar to the allocation methods in the operating system, for example, the Round Robin algorithm and the First Come First Served (FCFS) rules. The Round Robin algorithm is used here because it is fairly simple.
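The Round Robin dispatch just mentioned can be sketched in a few lines; this is a generic illustration of the algorithm, with names of our own, rather than any particular cloud scheduler's implementation.

```python
class RoundRobin:
    """Minimal sketch of Round Robin dispatch: each arriving job goes
    to the next node in a fixed circular order, regardless of the
    nodes' current load, which is exactly why the method is simple
    but load-oblivious."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self.i = 0              # index of the next node to serve

    def next_node(self):
        node = self.nodes[self.i % len(self.nodes)]
        self.i += 1             # advance the circular cursor
        return node
```

Four successive jobs over three nodes are dispatched n1, n2, n3, n1, wrapping around without consulting any load information.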


3. The Skewness Algorithm

We introduce the concept of skewness to quantify the unevenness in the utilization of multiple resources on a server. Let n be the number of resources we consider and r_i be the utilization of the i-th resource. We define the resource skewness of a server p as

    skewness(p) = sqrt( sum_{i=1..n} (r_i / r_avg - 1)^2 )

where r_avg is the average utilization of all resources for server p. In practice, not all types of resources are performance critical and hence we only need to consider bottleneck resources in the above calculation. By minimizing the skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. In the following, we describe the details of our algorithm. Analysis of the algorithm is available in the supplementary file, which can be found on the Computer Society Digital Library: http://doi.ieeecomputersociety.org/10.1109/TPDS.2012.283.

[Figure: Comparison of SPAR and FUSD.]

3.1 Hot and Cold Spots

Our algorithm executes periodically to evaluate the resource allocation status based on the predicted future resource demands of VMs. We define a server as a hot spot if the utilization of any of its resources is above a hot threshold. This indicates that the server is overloaded and hence some VMs running on it should be migrated away. We define the temperature of a hot spot p as the square sum of its resource utilization beyond the hot threshold:

    temperature(p) = sum_{r in R} (r - r_t)^2

where R is the set of overloaded resources in server p and r_t is the hot threshold for resource r. (Note that only overloaded resources are considered in the calculation.) The temperature of a hot spot reflects its degree of overload. If a server is not a hot spot, its temperature is zero. We define a server as a cold spot if the utilizations of all its resources are below a cold threshold. This indicates that the server is mostly idle and a potential candidate to turn off to save energy.
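The skewness, temperature, and hot/cold spot tests just defined can be sketched as follows. This is a minimal illustration; the resource list and the threshold values (90 percent for CPU, 80 percent for memory) echo the example given later in the text, and all names are otherwise hypothetical:

```python
import math

def skewness(utilizations):
    """skewness(p) = sqrt(sum_i (r_i / r_avg - 1)^2) over the resources of p."""
    avg = sum(utilizations) / len(utilizations)
    return math.sqrt(sum((u / avg - 1) ** 2 for u in utilizations))

def temperature(utilizations, hot_thresholds):
    """Square sum of utilization beyond the hot threshold, counting
    only the resources that are actually overloaded."""
    return sum((u - t) ** 2
               for u, t in zip(utilizations, hot_thresholds) if u > t)

def is_hot_spot(utilizations, hot_thresholds):
    """Hot spot: any single resource exceeds its hot threshold."""
    return any(u > t for u, t in zip(utilizations, hot_thresholds))

def is_cold_spot(utilizations, cold_threshold):
    """Cold spot: every resource is below the cold threshold."""
    return all(u < cold_threshold for u in utilizations)

# Example server: CPU at 95%, memory at 60%; hot thresholds 90% and 80%.
print(skewness([0.95, 0.60]))                    # uneven usage -> positive
print(temperature([0.95, 0.60], [0.90, 0.80]))   # only CPU is overloaded
print(is_hot_spot([0.95, 0.60], [0.90, 0.80]))   # True
```

A perfectly balanced server (all utilizations equal) has skewness zero, which is what the algorithm drives toward when placing VMs.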


However, we turn off a cold spot only when the average resource utilization of all actively used servers (i.e., APMs) in the system is below a green computing threshold. A server is actively used if it has at least one VM running; otherwise, it is inactive. Finally, we define the warm threshold to be a level of resource utilization that is sufficiently high to justify having the server running, but not so high as to risk becoming a hot spot in the face of temporary fluctuations in application resource demands. Different types of resources can have different thresholds. For example, we can define the hot thresholds for CPU and memory resources to be 90 and 80 percent, respectively. Thus a server is a hot spot if either its CPU usage is above 90 percent or its memory usage is above 80 percent.

3.2 Hot Spot Mitigation


We sort the list of hot spots in the system in descending temperature (i.e., we handle the hottest one first). Our goal is to eliminate all hot spots if possible; otherwise, we keep their temperature as low as possible. For each server p, we first decide which of its VMs should be migrated away. We sort its list of VMs based on the resulting temperature of the server if that VM is migrated away. We aim to migrate away the VM that can reduce the server's temperature the most. In case of ties, we select the VM whose removal can reduce the skewness of the server the most. For each VM in the list, we check whether we can find a destination server to accommodate it. The server must not become a hot spot after accepting this VM. Among all such servers, we select the one whose skewness can be reduced the most by accepting this VM. Note that this reduction can be negative, which means we select the server whose skewness increases the least. If a destination server is found, we record the migration of the VM to that server and update the predicted load of the related servers. Otherwise, we move on to the next VM in the list and try to find a destination server for it. As long as we can find a destination server for any of its VMs, we consider this run of the algorithm a success and then move on to the next hot spot. Note that each run of the algorithm migrates away at most one VM from the overloaded server. This does not necessarily eliminate the hot spot, but it at least reduces its temperature. If the server remains a hot spot in the next decision run, the algorithm will repeat this process. It is possible to design the algorithm so that it can migrate away multiple VMs during each run, but this can add more load on the related servers during a period when they are already overloaded. We decided to use the more conservative approach and leave the system some time to react before initiating additional migrations.

3.3 Green Computing

When the resource utilization of active servers is too low, some of them can be turned off to save energy. This is handled by our green computing algorithm. The challenge here is to reduce the number of active servers during low load without sacrificing performance either now or in the future; we need to avoid oscillation in the system. Our green computing algorithm is invoked when the average utilizations of all resources on active servers are below the green computing threshold. We sort the list of cold spots in the system in ascending order of their memory size. Since we need to migrate away all its VMs before we can shut down an underutilized server, we define the memory size of a cold spot as the aggregate memory size of all VMs running on it. Recall that our model assumes all VMs connect to shared back-end storage. Hence, the cost of a VM live migration is determined mostly by its memory footprint. A section in the supplementary file explains in depth why memory is a good measure. We try to eliminate the cold spot with the lowest cost first. For a cold spot p, we check whether we can migrate all its VMs somewhere else. For each VM on p, we try to find a destination server to accommodate it. The resource utilizations of the server after accepting the VM must be below the warm threshold. While we can save energy by consolidating underutilized servers, overdoing it may create hot spots in the future; the warm threshold is designed to prevent that. If multiple servers satisfy the above criterion, we prefer one that is not currently a cold spot, because increasing load on a cold spot reduces the likelihood that it can be eliminated. However, we will accept a cold spot as the destination server if necessary. All things being equal, we select the destination server whose skewness can be reduced the most by accepting this VM. If we can find destination servers for all the VMs on a cold spot, we record the sequence of migrations and update the predicted load of the related servers. Otherwise, we do not migrate any of its VMs. The list of cold spots is also updated, because some of them may no longer be cold due to the proposed VM migrations in the above process.

The above consolidation adds extra load onto the related servers. This is not as serious a problem as in the hot spot mitigation case, because green computing is initiated only when the load in the system is low. Nevertheless, we want to bound the extra load due to server consolidation. We restrict the number of cold spots that can be eliminated in each run of the algorithm to no more than a certain percentage of the active servers in the system. This is called the consolidation limit. Note that we eliminate cold spots in the system only when the average load of all active servers (APMs) is below the green computing threshold. Otherwise, we leave those cold spots there as potential destination machines for future offloading. This is consistent with our philosophy that green computing should be conducted conservatively.

3.4 Consolidated Movements


The movements generated in each step above are not executed until all steps have finished. The list of movements is then consolidated so that each VM is moved at most once, directly to its final destination. For example, hot spot mitigation may dictate that a VM move from PM A to PM B, while green computing dictates that it move from PM B to PM C. In the actual execution, the VM is moved from A to C directly.
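The consolidation step can be sketched with a short illustration (VM and PM names are hypothetical):

```python
def consolidate(moves):
    """Collapse a sequence of (vm, src, dst) migrations so that each VM
    moves at most once, from its original host to its final destination."""
    first_src = {}   # vm -> host it started on
    final_dst = {}   # vm -> last proposed destination
    order = []       # preserve first-seen order of VMs
    for vm, src, dst in moves:
        if vm not in first_src:
            first_src[vm] = src
            order.append(vm)
        final_dst[vm] = dst
    # Drop moves that would return a VM to where it started.
    return [(vm, first_src[vm], final_dst[vm])
            for vm in order if first_src[vm] != final_dst[vm]]

# Hot spot mitigation proposes vm1: A -> B; green computing then
# proposes vm1: B -> C. The executed plan moves vm1 from A to C directly.
plan = [("vm1", "A", "B"), ("vm1", "B", "C")]
print(consolidate(plan))  # [('vm1', 'A', 'C')]
```

A move whose final destination equals the original host cancels out entirely, which is the other benefit of deferring execution until all steps have finished.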

4. System Model

There are several cloud computing categories; this work focuses on a public cloud. A public cloud is based on the standard cloud computing model, with services provided by a service provider. A large public cloud will include many nodes located in different geographic regions. Cloud partitioning is used to manage such a large cloud: a cloud partition is a subarea of the public cloud, with divisions based on the geographic locations. The architecture is shown in Fig. 1.

Fig. 1 Typical cloud partitioning.

The load balancing strategy is based on the cloud partitioning concept. After creating the cloud partitions, the load balancing starts: when a job arrives at the system, the main controller decides which cloud partition should receive the job. The partition load balancer then decides how to assign the job to the nodes. When the load status of a cloud partition is normal, this assignment can be accomplished locally. If the cloud partition load status is not normal, the job should be


transferred to another partition. The whole process is shown in Fig. 2.

4.1 Main controller and balancers

The load balancing solution is implemented by the main controller and the balancers. The main controller first assigns jobs to the suitable cloud partition and then communicates with the balancers in each partition to refresh the status information. Since the main controller deals with information for each partition, smaller data sets will lead to higher processing rates. The balancers in each partition gather the status information from every node and then choose the right strategy to distribute the jobs. The relationship between the balancers and the main controller is shown in Fig. 3.

Fig. 3 Relationships between the main controller, the balancers, and the nodes.

4.2 Assigning jobs to the cloud partition

The cloud partition status can be divided into three types:
(1) Idle: when the percentage of idle nodes exceeds α, change to idle status.
(2) Normal: when the percentage of normal nodes exceeds β, change to normal load status.
(3) Overload: when the percentage of overloaded nodes exceeds γ, change to overloaded status.
The parameters α, β, and γ are set by the cloud partition balancers. The main controller has to communicate with the balancers frequently to refresh the status information. The main controller then dispatches the jobs using the following strategy: when job i arrives at the system, the main controller queries the cloud partition where the job is located. If this partition's status is idle or normal, the job is handled locally. If not, another cloud partition that is not overloaded is found. The algorithm is shown in Algorithm 1.

4.3 Assigning jobs to the nodes in the cloud partition

Fig. 2 Job assignment strategy.
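The job assignment strategy of Fig. 2 (Section 4.2) can be sketched as follows. The threshold names α, β, γ follow the text; their values and the partition representation are illustrative assumptions:

```python
def partition_status(node_statuses, alpha, beta, gamma):
    """Classify a partition from the fractions of idle/normal/overloaded
    nodes, using the thresholds set by the partition balancer."""
    n = len(node_statuses)
    if node_statuses.count("idle") / n > alpha:
        return "idle"
    if node_statuses.count("normal") / n > beta:
        return "normal"
    if node_statuses.count("overloaded") / n > gamma:
        return "overloaded"
    return "normal"  # fallback when no threshold is exceeded

def dispatch(job_location, partitions, alpha=0.5, beta=0.5, gamma=0.5):
    """Handle the job locally if its partition is idle or normal;
    otherwise find another partition that is not overloaded."""
    if partition_status(partitions[job_location], alpha, beta, gamma) != "overloaded":
        return job_location
    for name, nodes in partitions.items():
        if name != job_location and \
           partition_status(nodes, alpha, beta, gamma) != "overloaded":
            return name
    return None  # every partition is overloaded; the job must wait

parts = {"east": ["overloaded", "overloaded", "normal"],
         "west": ["idle", "idle", "normal"]}
print(dispatch("east", parts))  # 'west': east is overloaded, west is idle
```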

The cloud partition balancer gathers load information from every node to evaluate the cloud partition status. This evaluation of each node's load status is very important. The first task is to define the load degree of


each node.

The node load degree is related to various static and dynamic parameters. The static parameters include the number of CPUs, the CPU processing speeds, the memory size, etc. The dynamic parameters are the memory utilization ratio, the CPU utilization ratio, the network bandwidth, etc. The load degree is computed from these parameters as follows:

Step 1. Define a load parameter set F1, F2, ..., Fm, with each parameter being either static or dynamic; m represents the total number of parameters.

Step 2. Compute the load degree as Load_degree(N) = α1F1 + α2F2 + ... + αmFm, where the αi (with Σαi = 1) are weights that may differ for different kinds of jobs and N represents the current node.

Step 3. Define evaluation benchmarks. Calculate the average cloud partition load degree from the node load degree statistics as Load_degree_avg = (1/n) Σ Load_degree(Ni). The benchmark Load_degree_high is then set for different situations based on this average.

Step 4. Three node load status levels are then defined:
- Idle: when Load_degree(N) = 0, no job is being processed by this node, so its status is changed to Idle.
- Normal: when 0 < Load_degree(N) ≤ Load_degree_high, the node is normal and can process other jobs.
- Overloaded: when Load_degree(N) > Load_degree_high, the node is not available and cannot receive jobs until it returns to normal.

The load degree results are input into the Load Status Tables created by the cloud partition balancers. Each balancer has a Load Status Table and refreshes it each fixed period T. The table is then used by the balancers to calculate the partition status. Each partition status has a different load balancing solution. When a job arrives at a cloud partition, the balancer assigns the job to the nodes based on its current load strategy. This strategy is changed by the balancers as the cloud partition status changes.

5 Cloud Partition Balancing Strategy

5.1 Motivation

Good load balancing will improve the performance of the entire cloud. However, there is no common method that can adapt to all possible situations. Various methods have been developed to improve existing solutions and to resolve new problems, and each particular method has advantages in a particular area but not in all situations. Therefore, the current model integrates several methods, switching between load balancing methods based on the system status. A relatively simple method can be used for the partition idle state, with a more complex method for the normal state. The load balancers then switch methods as the status changes. Here, the idle status uses an improved Round Robin algorithm, while the normal status uses a game theory based load balancing strategy.
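The four-step load degree computation of Section 4.3 can be sketched as follows; the parameter values, weights, and the Load_degree_high benchmark are illustrative:

```python
def load_degree(params, weights):
    """Step 2: weighted sum of static/dynamic load parameters F_i."""
    assert abs(sum(weights) - 1.0) < 1e-9  # weights must sum to 1
    return sum(w * f for w, f in zip(weights, params))

def average_load_degree(degrees):
    """Step 3: average load degree over the nodes of the partition."""
    return sum(degrees) / len(degrees)

def node_status(degree, high_benchmark):
    """Step 4: map a node's load degree to a status level."""
    if degree == 0:
        return "idle"
    if degree <= high_benchmark:
        return "normal"
    return "overloaded"

# Two parameters, e.g. CPU utilization and memory utilization, equally weighted.
d = load_degree([0.8, 0.4], [0.5, 0.5])
print(d)                      # 0.6 (up to floating point rounding)
print(node_status(d, 0.7))    # 'normal'
print(node_status(0.9, 0.7))  # 'overloaded'
```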


5.2 Load balance strategy for the idle status

When the cloud partition is idle, many computing resources are available and relatively few jobs are arriving. In this situation, the cloud partition has the ability to process jobs as quickly as possible, so a simple load balancing method can be used. There are many simple load balance algorithms, such as the Random, the Weighted Round Robin, and the Dynamic Round Robin. The Round Robin algorithm is used here for its simplicity. The Round Robin algorithm is one of the simplest load balancing algorithms: it passes each new request to the next server in the queue. The algorithm does not record the status of each connection, so it keeps no status information. In the regular Round Robin algorithm, every node has an equal opportunity to be chosen. However, in a public cloud the configuration and performance of each node will not be the same, so this method may overload some nodes. Therefore, an improved Round Robin algorithm is used, called "Round Robin based on the load degree evaluation". The algorithm is still fairly simple: before the Round Robin step, the nodes in the load balancing table are ordered by load degree from the lowest to the highest. The system builds a circular queue and walks through the queue again and again. Jobs are then assigned to nodes with low load degrees. The node order is changed when the balancer refreshes the Load Status Table.

However, there may be read and write inconsistency at the refresh period T: if a job arrives at the cloud partition at the moment the table is being refreshed, there is an inconsistency problem. The system status will have changed, but the information will still be old. This may lead to an erroneous load strategy choice and an erroneous node order. To resolve this problem, two Load Status Tables are created: Load Status Table 1 and Load Status Table 2. A flag is assigned to each table to indicate Read or Write. When the flag = "Read", the Round Robin based on the load degree evaluation algorithm is using this table. When the flag = "Write", the table is being refreshed and new information is written into it. Thus, at each moment, one table gives the correct node order in the queue for the improved Round Robin algorithm, while the other is being prepared with the updated information. Once the data is refreshed, that table's flag is changed to "Read" and the other table's flag is changed to "Write". The two tables then alternate to resolve the inconsistency. The process is shown in Fig. 4.
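The double-table scheme can be sketched as follows (a minimal illustration; node names and load values are hypothetical):

```python
class LoadStatusTables:
    """Two tables alternate: one holds the Read flag (used by the Round
    Robin based on load degree evaluation), the other holds the Write
    flag (being refreshed with new load information)."""

    def __init__(self):
        self.tables = [[], []]   # each table: node names sorted by load degree
        self.read_idx = 0        # index of the table currently flagged Read

    def read(self):
        """Node order for the improved Round Robin algorithm."""
        return self.tables[self.read_idx]

    def refresh(self, node_loads):
        """Write fresh data into the Write table, then swap the flags."""
        write_idx = 1 - self.read_idx
        # Order nodes by load degree, lowest first.
        self.tables[write_idx] = sorted(node_loads, key=node_loads.get)
        self.read_idx = write_idx  # the refreshed table becomes Read

t = LoadStatusTables()
t.refresh({"n1": 0.7, "n2": 0.2, "n3": 0.5})
print(t.read())  # ['n2', 'n3', 'n1'] -- lowest load degree first
```

Readers always see a complete, consistent ordering: a refresh never mutates the table currently being read, it only swaps which table carries the Read flag.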

5.3 Load balancing strategy for the normal status

When the cloud partition is normal, jobs arrive much faster than in the idle state and the situation is far more complex, so a different strategy is used for the load balancing. Each user wants his jobs completed in the shortest time, so the public cloud needs a method that can complete the jobs of all users with a reasonable response time. Penmatsa and Chronopoulos proposed a static load balancing strategy based on game theory for distributed systems, and this work provides us with a new view of the load balance problem in the cloud environment. As an implementation of a distributed system, load balancing in the cloud computing environment can be


viewed as a game. Game theory has non-cooperative games and cooperative games. In cooperative games, the decision makers eventually come to an agreement, called a binding agreement, and each decision maker decides by comparing notes with the others. In non-cooperative games, each decision maker makes decisions only for his own benefit. The system then reaches the Nash equilibrium, where each decision maker makes the optimized decision. The Nash equilibrium is reached when each player in the game has chosen a strategy and no player can benefit by changing his or her strategy while the other players' strategies remain unchanged.
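The Nash equilibrium condition just stated can be illustrated with a tiny two-player example, checking that no unilateral deviation helps. The payoff values are hypothetical: two jobs each pick one of two nodes, and sharing a node lowers both payoffs:

```python
def is_nash(payoffs, profile):
    """payoffs[k][(a, b)]: payoff to player k when player 0 plays a and
    player 1 plays b. Returns True if no player gains by unilaterally
    changing strategy while the other keeps theirs fixed."""
    a, b = profile
    strategies = sorted({s for key in payoffs[0] for s in key})
    ok0 = all(payoffs[0][(a, b)] >= payoffs[0][(a2, b)] for a2 in strategies)
    ok1 = all(payoffs[1][(a, b)] >= payoffs[1][(a, b2)] for b2 in strategies)
    return ok0 and ok1

# Symmetric payoffs: a job alone on a node gets 2, a shared node gives 1.
shared = {("n1", "n1"): 1, ("n1", "n2"): 2, ("n2", "n1"): 2, ("n2", "n2"): 1}
payoffs = [shared, shared]
print(is_nash(payoffs, ("n1", "n2")))  # True: spreading the jobs is stable
print(is_nash(payoffs, ("n1", "n1")))  # False: a job gains by moving to n2
```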


There have been many studies of using game theory for load balancing. Grosu et al. proposed a load balancing strategy based on game theory for distributed systems, as a non-cooperative game using the distributed structure. They compared this algorithm with other traditional methods to show that their algorithm had lower complexity with better performance. Aote and Kharat gave a dynamic load balancing model based on game theory. This model is based on the dynamic load status of the system, with the users being the decision makers in a non-cooperative game. Since grid computing and cloud computing environments are also distributed systems, these algorithms can also be used in grid computing and cloud computing environments. Previous studies have shown that the load balancing strategy for a cloud partition in the normal load status can be viewed as a non-cooperative game, as described here. The players in the game are the nodes and the jobs. Suppose there are n nodes in the current cloud partition with N jobs arriving; then define the following parameters.

In this model, the most important step is finding the appropriate strategy value for each node. The current model uses the method of Grosu et al. called "the best reply" to calculate the strategy of each node, with a greedy algorithm then used to calculate the strategies for all nodes. This procedure gives the Nash equilibrium that minimizes the response time of each job. The strategy then changes as the nodes' statuses change.

Fig. 4 The solution of the inconsistency problem.

6. Future Work

Since this work is just a conceptual framework, more work is needed to implement the framework and resolve new problems. Some important points are:
(1) Cloud division rules: Cloud division is not a simple problem, so the framework will need a complete cloud division methodology. For example, nodes in a cluster may be far from other nodes, or there may be clusters in the same geographic area that are still far apart. The division rule should not simply be based on the geographic location (province or state).


(2) How to set the refresh period: In the data statistics analysis, the main controller and the cloud partition balancers need to refresh the information at a fixed period. If the period is too short, the high refresh frequency will influence the system performance. If the period is too long, the information will be too old for good decisions. Thus, tests and statistical tools are needed to set a reasonable refresh period.
(3) A better load status evaluation: A good


algorithm is needed to set the evaluation thresholds, and the evaluation mechanism needs to be more comprehensive.
(4) Other load balance strategies: Other load balance strategies may provide better results, so tests are needed to compare different strategies. Many tests are needed to guarantee system availability and efficiency.

References
[1] R. Hunter, The why of cloud, http://www.gartner.com/DisplayDocument?doc cd=226469&ref= g noreg, 2012.
[2] M. D. Dikaiakos, D. Katsaros, P. Mehra, G. Pallis, and A. Vakali, Cloud computing: Distributed internet computing for IT and scientific research, Internet Computing, vol. 13, no. 5, pp. 10-13, Sept.-Oct. 2009.
[3] P. Mell and T. Grance, The NIST definition of cloud computing, http://csrc.nist.gov/publications/nistpubs/800145/SP800-145.pdf, 2012.
[4] Microsoft Academic Research, Cloud computing, http://libra.msra.cn/Keyword/6051/cloudcomputing?query=cloud%20computing, 2012.
[5] Google Trends, Cloud computing, http://www.google.com/trends/explore#q=cloud%20computing, 2012.
[6] N. G. Shivaratri, P. Krueger, and M. Singhal, Load distributing for locally distributed systems, Computer, vol. 25, no. 12, pp. 33-44, Dec. 1992.
[7] B. Adler, Load balancing in the cloud: Tools, tips and techniques, http://www.rightscale.com/info center/whitepapers/Load-Balancing-in-the Cloud.pdf, 2012.
[8] Z. Chaczko, V. Mahadevan, S. Aslanzadeh, and C. Mcdermid, Availability and load balancing in cloud computing, presented at the 2011 International Conference on Computer and Software Modeling, Singapore, 2011.
[9] K. Nishant, P. Sharma, V. Krishna, C. Gupta, K. P. Singh, N. Nitin, and R. Rastogi, Load balancing of nodes in cloud using ant colony optimization, in Proc. 14th International Conference on Computer Modelling and Simulation (UKSim), Cambridgeshire, United Kingdom, Mar. 2012, pp. 28-30.
[10] M. Randles, D. Lamb, and A. Taleb-Bendiab, A comparative study into distributed load balancing algorithms for cloud computing, in Proc. IEEE 24th International Conference on Advanced Information Networking and Applications, Perth, Australia, 2010, pp. 551-556.
[11] A. Rouse, Public cloud, http://searchcloudcomputing.techtarget.com/definition/public-cloud, 2012.
[12] D. MacVittie, Intro to load balancing for developers: The algorithms, https://devcentral.f5.com/blogs/us/introtoload-balancing-for-developers-ndash-thealgorithms, 2012.
[13] S. Penmatsa and A. T. Chronopoulos, Game-theoretic static load balancing for distributed systems, Journal of Parallel and Distributed Computing, vol. 71, no. 4, pp. 537-555, Apr. 2011.
[14] D. Grosu, A. T. Chronopoulos, and M. Y. Leung, Load balancing in distributed systems: An approach using cooperative games, in Proc. 16th IEEE Intl. Parallel and Distributed Processing Symp., Florida, USA, Apr. 2002, pp. 52-61.


[15] S. Aote and M. U. Kharat, A game-theoretic model for dynamic load balancing in distributed systems, in Proc. International Conference on Advances in Computing, Communication and Control (ICAC3 '09), New York, USA, 2009, pp. 235-238.


Similarity Search in Information Networks using Meta-Path Based Relationships between Objects

Abstract – Real world physical and abstract data objects are interconnected, forming enormous, interconnected networks. By structuring these data objects and the interactions between them into multiple types, such networks become semi-structured heterogeneous information networks. Therefore, quality analysis of large heterogeneous information networks poses new challenges. In the current system, a generalized flow based method is introduced for measuring the relationship on Wikipedia by reflecting all three concepts: distance, connectivity, and co-citation. Using this search approach, we measure the relationships between objects rather than relying only on similarity functions. To address these problems we introduce a novel meta-path based similarity searching approach for dealing with heterogeneous information networks. Under this framework, similarity search and other mining tasks can exploit the network structure.

Index terms – similarity search, information network, meta-path, clustering.

I. INTRODUCTION

Similarity search, which aims at locating the most relevant information for a query in large collections of datasets, has been widely studied in many applications. For example, in a spatial database, people are interested in finding the k nearest neighbors of a given spatial object. Object similarity is also one of the most primitive concepts for object clustering and many other data mining functions. In a similar context, it is critical to provide effective similarity search functions in information networks, to find similar entities for a given entity. In a network of tagged images such as Flickr, a user may be interested in searching for the most similar pictures for a given picture. In an e-commerce system, a user would be interested in searching for the most similar products for a given product. Different from attribute-based similarity search, links play an important role for similarity search in information networks, especially when the full information about the attributes of objects is difficult to obtain. There are a few studies leveraging link information in networks for similarity search, but most of these studies are focused on homogeneous or bipartite networks, such as P-PageRank and SimRank. These similarity measures disregard the subtlety of different types among objects and links. Adopting such measures in heterogeneous networks has significant drawbacks: even if we just want to compare objects of the same type, going through link paths of different types leads to rather


different semantic meanings, and it makes little sense to mix them up and measure the similarity without distinguishing their semantics.

To systematically distinguish the semantics among paths connecting two objects, we introduce a meta-path based similarity framework for objects of the same type in a heterogeneous network. A meta-path is a sequence of relations between object types, which defines a new composite relation between its starting type and its ending type. The meta-path framework provides a powerful mechanism for a user to select an appropriate similarity semantics, by choosing a proper meta-path, or to learn it from a set of training examples of similar objects. We define the meta-path based similarity framework and relate it to two well-known existing link-based similarity functions for homogeneous information networks. We define a novel similarity measure, PathSim, that is able to find peer objects that are not only strongly connected with each other but also share similar visibility in the network. Moreover, we propose an efficient algorithm to support online top-k queries for such similarity search.

II. A META-PATH BASED SIMILARITY MEASURE

The similarity between two objects under a link-based similarity function is determined by how the objects are connected in a network, which can be described using paths. In a heterogeneous information network, due to the heterogeneity of the types of links, the ways to connect two objects can be much more diverse. The schema connections represent different relationships between authors, each having a different semantic meaning. Now the questions are: given an arbitrary heterogeneous information network, is there any way to systematically identify all the possible connection types between two object types? In order to do so, we propose two important concepts in the following.

a) Network Schema and Meta-Path

First, given a complex heterogeneous information network, it is necessary to provide its meta level description for better understanding the network. Therefore, we propose the concept of network schema to describe the meta structure of a network. The concept of network schema is similar to that of the Entity-Relationship model in database systems, but it only captures the entity types and their binary relations, without considering the attributes of each entity type. A network schema serves as a template for a network, and tells how many types of objects there are in the network and where the possible links exist.

b) Bibliographic Schema and Meta-Path

For the bibliographic network schema, an arrow explicitly shows the direction of a relation.

III. META-PATH BASED SIMILARITY FRAMEWORK

Given a user-specified meta-path, several similarity measures can be defined for a pair of objects according to the path instances



between them following the meta-path. There are several straightforward measures, given in the following.

Path count: the number of path instances between the objects.

Random walk: s(x, y) is the probability of a random walk that starts from x and ends at y following meta-path P, which is the sum of the probabilities of all the path instances.

Pairwise random walk: for a meta-path P that can be decomposed into two shorter meta-paths of the same length, s(x, y) is the pairwise random walk probability starting from objects x and y and reaching the same middle object.

In general, we can define a meta-path based similarity framework for two objects x and y. Note that P-PageRank and SimRank, two well-known network similarity functions, are weighted combinations of the random walk measure and the pairwise random walk measure, respectively, over meta-paths of different lengths in homogeneous networks; the same construction is needed in order to use P-PageRank and SimRank in heterogeneous information networks.

a) A Novel Similarity Measure

Several similarity measures have been presented, but they are partial to either highly visible objects or highly concentrated objects and cannot capture the semantics of peer similarity. However, in many scenarios, finding similar objects in networks means finding similar peers, such as finding similar authors based on their fields and reputation, finding similar actors based on their movie styles and productivity, and finding similar products. This motivated us to propose a new, meta-path based similarity measure, called PathSim, that captures the subtlety of peer similarity. The insight behind it is that two similar peer objects should not only be strongly connected, but should also share comparable visibility. As the relation between peers should be symmetric, we confine PathSim to symmetric meta-paths. The calculation of PathSim between any two objects of the same type, given a certain meta-path, involves matrix multiplication. In this paper, we only consider round trip meta-paths, to guarantee their symmetry and therefore the symmetry of the PathSim measure.

Properties of PathSim:
1. Symmetric.
2. Self-maximum.
3. Balance of visibility.

Using meta-path based similarity, we can define the similarity between two objects given any round trip meta-path. As primary eigenvectors can be used as an authority ranking of objects, the similarity between two objects under an infinite meta-path can be viewed as a measure defined on their rankings: two objects with more similar ranking scores will have higher similarity. In the next section we discuss online query processing for a single meta-path.

and reputation, finding similar actors based on

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

319

www.iaetsd.in
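To make the definitions above concrete, the following is a minimal sketch of the PathSim computation for a round-trip meta-path, together with the exhaustive top-k search that the next section calls the baseline. The author-paper matrix W is hypothetical illustration data, not taken from the paper's experiments.

```python
# PathSim sketch for the round-trip meta-path A-P-A (author-paper-author).
# s(x, y) = 2 * M[x][y] / (M[x][x] + M[y][y]), where M = W * W^T is the
# commuting matrix of the meta-path.
import heapq

def commuting_matrix(W):
    """M = W * W^T in plain Python (rows of W are one object type)."""
    return [[sum(a * b for a, b in zip(row, other)) for other in W] for row in W]

def pathsim(M, x, y):
    """PathSim between two objects of the same type."""
    return 2 * M[x][y] / (M[x][x] + M[y][y])

def topk_baseline(M, q, k):
    """Exhaustive top-k: score every candidate of the same type as q."""
    return heapq.nlargest(k, ((pathsim(M, q, y), y) for y in range(len(M)) if y != q))

# Rows are authors, columns are papers (1 = authorship); toy data.
W = [
    [1, 1, 0, 0],  # author 0
    [1, 1, 1, 0],  # author 1 shares two papers with author 0
    [0, 0, 0, 1],  # author 2 shares none
]
M = commuting_matrix(W)
print(pathsim(M, 0, 1))       # 0.8 (strong peers)
print(pathsim(M, 0, 2))       # 0.0 (no connection)
print(topk_baseline(M, 0, 2))
```

Note that s(x, x) is always 1, which is the self-maximum property listed above.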


IV. QUERY PROCESSING FOR A SINGLE META-PATH

Compared with P-PageRank and SimRank, the calculation of PathSim is much more efficient, as it is a local graph measure. But it still involves expensive matrix multiplication operations for top-k search, as we need to calculate the similarity between the query and every object of the same type in the network. One possible solution is to materialize all the meta-paths.

In order to support fast online query processing for large-scale networks, we propose a methodology that partially materializes short meta-paths and then concatenates them online to derive longer meta-path based similarity. First, a baseline method is proposed, which computes the similarity between the query object x and all the candidate objects y of the same type. Next, a co-clustering based pruning method is proposed, which prunes candidate objects that are not promising according to their similarity upper bounds. Both algorithms return exact top-k results for the given query.

a) Baseline

Suppose we know the relation matrix for the meta-path and the diagonal vector. To get the top-k objects with the highest similarity to the query, the straightforward baseline is: (1) first apply vector-matrix multiplication; (2) calculate the similarity score of each object; (3) sort the scores and return the top-k list in the final step. For a large matrix, the vector-matrix computation will be too time consuming, since it checks every possible object; concatenating partially materialized meta-paths will be much more efficient than pairwise computation between the query and all the objects of that type. We call this baseline concatenation algorithm PathSim-baseline.

The PathSim-baseline algorithm is still time consuming if the candidate set is large. The time complexity of computing PathSim for each candidate is O(d) on average and O(m) in the worst case. We now propose a co-clustering based top-k concatenation algorithm, by which non-promising target objects are dynamically filtered out to reduce the search space.

b) Co-Clustering-Based Pruning

In the baseline algorithm, the computational costs involve two factors. First, the more candidates to check, the more time the algorithm will take; second, for each candidate, the dot product of the query vector and the candidate vector involves at most m operations, where m is the vector length. Based on this intuition, we propose a co-clustering-based path concatenation method, which first generates co-clusters of two types of objects for the partial relation matrix, then stores the necessary statistics for each of the blocks corresponding to different co-cluster pairs, and finally uses the block statistics to prune the search space. For a better picture, we call clusters of the target type target clusters, since their objects are the targets of the query, and clusters of the feature type feature clusters, since their objects serve as features to calculate the similarity between the query and the target objects. By partitioning into different target clusters, if a whole target cluster is not similar to


the query, then all the objects in the target cluster are likely not in the final top-k list and can be pruned. By partitioning into different feature clusters, cheaper calculations on the dimension-reduced query vector and candidate vectors can be used to derive the similarity upper bounds. PathSim-Pruning can significantly improve the query processing speed compared with the baseline algorithm, without affecting the search quality.

c) Multiple Meta-Paths Combination

In the previous section, we presented algorithms for similarity search using a single meta-path. Now, we present a solution for combining multiple meta-paths. The reason why we need to combine several meta-paths is that each meta-path provides a unique angle from which to view the similarity between objects, and the ground truth may be a combination of different factors. Some useful guidance for the weight assignment includes: longer meta-paths utilize more remote relationships and thus should be assigned a smaller weight, as in P-PageRank and SimRank; and meta-paths with more important relationships should be assigned a higher weight. For automatically determining the weights, users could provide training examples of similar objects from which to learn the weights of different meta-paths using a learning algorithm.

V. EXPECTED RESULTS

To show the effectiveness of the PathSim measure and the efficiency of the proposed algorithms, we use bibliographic networks extracted from DBLP and Flickr in the experiments. The PathSim algorithm significantly improves the query processing speed compared with the baseline algorithm, without affecting the search quality. For additional case studies, we construct a Flickr network from a subset of the Flickr data, which contains four types of objects: images, users, tags, and groups. We show that our algorithms improve similarity search between objects based on their potentiality and correlation.

VI. CONCLUSION

In this paper we introduced similarity search based on a novel meta-path based similarity measure, together with a baseline algorithm and co-clustering based pruning algorithms that improve similarity search based on the strengths of, and relationships between, objects.

REFERENCES

[1] Jiawei Han, Lise Getoor, Wei Wang, Johannes Gehrke, and Robert Grossman, "Mining Heterogeneous Information Networks: Principles and Methodologies."

[2] Y. Koren, S.C. North, and C. Volinsky, "Measuring and Extracting Proximity in Networks," Proc. 12th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, pp. 245-255, 2006.

[3] M. Ito, K. Nakayama, T. Hara, and S. Nishio, "Association Thesaurus Construction Methods Based on Link Co-Occurrence


Analysis for Wikipedia," Proc. 17th ACM Conf. Information and Knowledge Management (CIKM), pp. 817-826, 2008.

[4] K. Nakayama, T. Hara, and S. Nishio, "Wikipedia Mining for an Association Web Thesaurus Construction," Proc. Eighth Int'l Conf. Web Information Systems Eng. (WISE), pp. 322-334, 2007.

[5] M. Yazdani and A. Popescu-Belis, "A Random Walk Framework to Compute Textual Semantic Similarity: A Unified Model for Three Benchmark Tasks," Proc. IEEE Fourth Int'l Conf. Semantic Computing (ICSC), pp. 424-429, 2010.

[6] R.L. Cilibrasi and P.M.B. Vitányi, "The Google Similarity Distance," IEEE Trans. Knowledge and Data Eng., vol. 19, no. 3, pp. 370-383, Mar. 2007.

[7] G. Kasneci, F.M. Suchanek, G. Ifrim, M. Ramanath, and G. Weikum, "NAGA: Searching and Ranking Knowledge," Proc. IEEE 24th Int'l Conf. Data Eng. (ICDE), pp. 953-962, 2008.

[8] R.K. Ahuja, T.L. Magnanti, and J.B. Orlin, Network Flows: Theory, Algorithms, and Applications. Prentice Hall, 1993.


Pinpointing Performance Deviations of Subsystems in Distributed Systems with Cloud Infrastructures

Hareesh Kamisetty, M.Tech
(Audisankara College of Engineering and Technology, Gudur)

Abstract- Cloud Manufacturing Systems are service-oriented systems (SOS) used to compose various kinds of cloud-related applications, in which some applications fail to complete the execution of a user request within the deadline. Cloud Manufacturing Systems still face performance problems, and the existing techniques treat request trace data as performance data to find and diagnose performance problems of a service. In this paper, we propose the Cloud Debugger, a promising tool to diagnose performance problems in Cloud Manufacturing Systems.

Index terms- Cloud Manufacturing System (CMfg), Performance Diagnosis, Service-Oriented.

I. INTRODUCTION

Cloud Manufacturing Systems are service-oriented systems that compose various services. In the real world, by combining advanced technologies such as virtualization, advanced computing technologies and service-oriented technologies with existing manufacturing models and a new wide range of information technologies, a new computing and service model called cloud manufacturing (CMfg) has been proposed.

There are four types of CMfg service platform, i.e., public, private, hybrid and community CMfg service platforms. Compared with existing manufacturing models, CMfg has the following features or properties:

i) Service- and requirement-oriented. Most manufacturing models are resource- or order-oriented, while CMfg is a service- and requirement-oriented manufacturing model. The core idea of CMfg is manufacturing as a service.

ii) Dynamic with uncertainty. The resources and services in CMfg are various and dynamic, and the solutions for addressing manufacturing tasks are dynamic.

iii) Knowledge-based. The whole life cycle of a CMfg system needs knowledge support.

iv) Initiative. In a Cloud Manufacturing System, both services and manufacturing requirements are active, and they can automatically find and match with each other with the support of semantic reasoning based on knowledge.

v) Physically distributed and logically centralized. The physical resources and facilities in CMfg are located in different places and

are restricted by different persons or organizations.

a) Existing System

The art of debugging for cloud applications is not much more than writing out diagnostic messages and spelunking the logs for them. When the right data is not being written to the logs, developers have to change the code and redeploy the application to production. Traditional debuggers are not well suited for cloud-based services for two reasons. First, it is hard to know which process to attach to. Second, stopping a process in production makes it hard to reproduce an issue and gives end-users a bad experience.

b) Proposed System

To address the above-mentioned problems, this paper proposes a novel debugger, called the Cloud Debugger, which significantly reduces the effort the cloud operator spends identifying which process to attach to, and which provides an easy way to reproduce service issues based on user experiences.

II. PROPOSED WORK

The Cloud Debugger completely changes this. It allows developers to start where they know best: in the code. By simply setting a watch point on a line of code, the next time a request on any of the cloud servers hits that line of code, we get a snapshot of all the local variables, parameters, instance variables and a full stack trace. There is zero setup time and no complex configuration to enable. The debugger is well suited for use in production; there is no overhead for enabling the debugger in this paper's approach.

a) Cloud Trace

Performance is an important feature of a service, directly related to end-user satisfaction and maintenance. No one intends to build a slow service, but it can be extremely difficult to identify the core reasons for slowness when it happens. Cloud Trace helps visualize and understand the time spent by an application on request processing. This enables the cloud operator to quickly find and repair performance bottlenecks. The cloud operator can easily produce a report that shows the performance change in the service from one release to another.

b) Cloud Monitoring

Cloud Monitoring provides the dashboards and alerting capabilities that help the cloud operator identify and repair performance problems quickly. With minimal configuration and no separate infrastructure to maintain, Cloud Monitoring provides deep visibility into cloud platform services compared with traditional approaches. Finally, we can create endpoint checks to monitor availability and response times for end-user facing services. Endpoint checks are performed by probes in Texas, Virginia, etc., enabling monitoring of latency.
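The watch-point idea from Section II can be illustrated with a small sketch. This is not the Cloud Debugger's actual implementation; it is a hypothetical illustration using Python's tracing hook, and handle_request is an invented example function.

```python
# A toy watch point: when execution first hits a chosen line of a function,
# capture a snapshot of the local variables and the stack without stopping it.
import sys
import traceback

snapshots = []  # collected snapshots (one per watch-point hit here)

def make_tracer(func_name, lineno):
    def tracer(frame, event, arg):
        if (event == "line" and frame.f_code.co_name == func_name
                and frame.f_lineno == lineno and not snapshots):
            snapshots.append({
                "locals": dict(frame.f_locals),          # local variables
                "stack": traceback.format_stack(frame),  # full stack trace
            })
        return tracer  # keep tracing subsequent events
    return tracer

def handle_request(x):  # hypothetical request handler
    y = x * 2
    return y + 1        # <- suppose the watch point is set on this line

# Arm the watch point on the 'return' line (two lines after the 'def').
target = handle_request.__code__.co_firstlineno + 2
sys.settrace(make_tracer("handle_request", target))
result = handle_request(20)
sys.settrace(None)

print(result)                  # the request still completes: 41
print(snapshots[0]["locals"])  # {'x': 20, 'y': 40}
```

The request runs to completion; the snapshot is a side effect, which is the key difference from stopping a process at a breakpoint.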


SSH to VM instantly

Sometimes it is unavoidable to connect directly to a VM to debug or repair a production issue. It can be a bit of a pain, especially when the operator is on the road, so now we can do that from just about anywhere. With our new browser-based SSH client we can speedily and securely connect to any of the VMs from the console. There is no need to install any SDK or other tools. This could be taken as future work.

III. CONCLUSION

Traditional approaches diagnose performance problems based on user request trace data, but they suffer from the trade-off between tracing granularity and debugging effort. As a result, more effort is needed to troubleshoot performance problems. Therefore, this paper proposes the Cloud Debugger as a tool for effortless performance diagnosis, which can significantly reduce the communication and computation overheads.

FUTURE ENHANCEMENT

The proposed system of this paper considers Cloud Monitoring and Cloud Trace as techniques to find and repair performance problems. But the performance of a service also depends on the system dynamics, which we leave as future research work.

REFERENCES

[1] http://googlecloudplatform.blogspot.in/2014/06/enabling-developers-to-tame-production-systems-in-the-cloud.html

[2] F. Tao, L. Zhang, V.C. Venkatesh, Y.L. Luo, and Y. Cheng, "Cloud manufacturing: a computing and service-oriented manufacturing model," Proc. Institution of Mechanical Engineers, Part B, Journal of Engineering Manufacture, 2011. doi:10.1177/0954405411405575.

[3] Michael Rung-Tsong Lyu and Hua Cai, "Toward Fine-Grained, Unsupervised, Scalable Performance Diagnosis for Production Cloud Computing Systems," IEEE Trans. Parallel and Distributed Systems, pp. 1245-1255, 2013.

[4] H. Mi, H. Wang, G. Yin, H. Cai, Q. Zhou, and T. Sun, "Performance Problems Diagnosis in Cloud Computing Systems by Mining Request Trace Logs," Proc. IEEE Network Operations and Management Symp. (NOMS), pp. 893-899, 2012.

[5] P. Barham, A. Donnelly, R. Isaacs, and R. Mortier, "Using Magpie for Request Extraction and Workload Modelling," Proc. USENIX Sixth Symp. Operating Systems Design and Implementation (OSDI), pp. 259-272, 2004.

AUTHORS

First Author
Second Author


Secure Data Storage against Attacks in Cloud Environment using Defence in Depth

Naresh G.
(Audisankara College of Engineering and Technology)

Abstract- Cloud computing is an emerging technology which offers various services in a low-cost and flexible manner. One of the most important service models is Data Storage as a Service (DaaS), in which users can remotely store their data and enjoy on-demand access using high-quality applications. Cloud computing faces many devastating problems in ensuring proper physical, logical and personnel security controls. While moving large volumes of data and software, the management of the data and services may not be fully trustworthy. In this paper, we mainly focus on the security features of data storage in the presence of threats and attacks, and on solutions. The paper also proposes an effective and flexible distributed scheme with two salient features distinguishing it from its predecessors.

Index terms- Cloud Computing, storage correctness, Data Storage as a Service.

I. INTRODUCTION

Cloud computing is the delivery of computing as a service rather than a product, whereby widely shared resources, software and information are provided to the IT industry over a network. Clouds can be classified as public, private, hybrid, etc. Meanwhile, the emerging trend of outsourcing data storage to third parties has attracted attention from both research and industry. The outsourced storage makes shared data and resources much more accessible, as users can access them from anywhere.

On the other hand, security remains an important issue that concerns the privacy of users. A major challenge for any comprehensive access control solution for outsourced data is to have the ability to handle user requests for resources according to the specified security policies. Several solutions have been proposed in the past, but most of them do not consider protecting the privacy of the policies and user access patterns. In this paper we address the main aspects related to the security of cloud storage. We attempt to propose an effective and flexible security policy and procedure explicitly to enhance data storage security in the cloud.

II. THREATS AND ATTACKS FROM STORAGE PERSPECTIVES

While the benefits of storage networks have been widely acknowledged, consolidation of enterprise data on networked storage poses significant security risks. Hackers adept at exploiting network-layer vulnerabilities can now explore deeper strata of corporate information.


Following is a brief listing of some major drivers for implementing security for networked storage, from the perspective of challenging threats and attacks:

o Perimeter defence strategies focus on protection from external threats. With the number of security attacks on the rise, relying on perimeter defence alone is not sufficient to protect enterprise data, and a single security breach can cripple a business [7].

o The number of internal attacks is on the rise, thereby threatening NAS/SAN deployments that are part of the "trusted" corporate networks [8]. Reports such as the CSI/FBI's annual Computer Crime & Security Survey help quantify the significant threat caused by data theft.

o The problem of incorrectness of data storage in the cloud.

o The data stored in the cloud may be updated by the users, including insertion, deletion, modification, appending, reordering, etc.

o Individual users' data is redundantly stored in multiple physical locations to further reduce the data integrity threats.

o Moreover, risks due to compromised storage range from tangible losses, such as business discontinuity in the form of information downtime, to intangibles, such as the loss of stature as a secure business partner. With the number of reported security attacks on the rise, a firm understanding of networked storage solutions is a precursor to determining and mitigating security risks.

III. SYSTEM DESIGN

a) System Model

Cloud networking can be illustrated by three different network entities:

User: who has data to be stored in the cloud and relies on the cloud for data computation; users consist of both individual consumers and organizations.

Cloud Service Provider (CSP): who has significant resources and expertise in building and managing distributed cloud storage servers, and owns and operates live Cloud Computing systems.

Third Party Auditor (TPA): who has expertise and capabilities that users may not have, and is trusted to assess and expose the risk of cloud storage services on behalf of the users upon request.

b) Adversary Model

There are two different sources of security threats faced by cloud data storage:

1. The CSP can be self-interested, un-trusted and possibly malicious. It may move data that is rarely accessed to a lower tier of storage for monetary reasons, or it may hide a data loss incident caused by management errors, Byzantine failures and so on.

2. An economically motivated adversary, who has the capability to compromise a number of cloud data storage servers in different time intervals and subsequently is able to modify or delete


users' data while remaining undetected by the CSP for a certain period. There are two types of adversary:

Weak Adversary: The adversary is interested in corrupting the user's data files stored on individual servers. Once a server is compromised, the adversary can pollute the original data files by modifying them or by introducing its own fraudulent data, to prevent the original data from being retrieved by the user.

Strong Adversary: This is the worst-case scenario, in which we assume that the adversary can compromise all the storage servers, so that he can intentionally modify the data files as long as they are internally consistent.

IV. PROPOSED SOLUTIONS

Control of Access Data Storage includes the necessary policies, processes and control activities for the delivery of each of the data service offerings. The collective control of Data Storage encompasses the users, processes, and technology necessary to maintain an environment that supports the effectiveness of specific controls and the control frameworks. The security, correctness and availability of the data files being stored on the distributed cloud servers must be guaranteed by the following:

o Providing a security policy and procedure for data storage.

The Defence in Depth (referred to as DiD in this paper) is an excellent framework advocating a layered approach to defending against attacks, thereby mitigating risks.

Layer 1 - Devices on the Storage Network

The following risk-mitigation measures are recommended:

o Authentication schemes provisioned by the operating system should be evaluated. Schemes utilizing public-private key based authentication, such as SSH or Kerberos, also encrypt authentication communications on the network.

o Authentication using Access Control Lists (ACLs) to set up role-based access and appropriate permissions will enhance security.

o Strong password schemes, like minimum length and periodic change of passwords, should be enforced. The default user names and passwords that are configured on the device should be changed.

o Constant monitoring of published OS vulnerabilities, using vulnerability databases, the SANS Security Alert Consensus newsletter and the NAS vendor's support site, is a necessity to prepare for possible attacks.

o Logging and auditing controls should be implemented to prevent unauthorized use, track usage, and support incident response.

Layer 2 - Network Connectivity

NAS appliances face similar vulnerabilities to IP-based network devices. Common techniques used to protect IP networks are also applicable to the storage network:

o Extending network perimeter defence strategies, like using a firewall and IDS


device to filter traffic reaching the NAS appliance will increase protection.

o Use VLANs to segregate traffic to the NAS appliances.

o Separate and isolate the management interface from the data interfaces on the storage network, thus enforcing out-of-band management, which is more secure.

o Monitor traffic patterns on the data interfaces of the NAS devices for unusual activity.

Layer 3 - Management Access

Management access is a significant source of attack. To address the vulnerabilities, the following guidelines provide help:

o Disable the use of telnet and HTTP, and enforce management access through SSH and HTTPS for encrypted communication.

o Create separate user accounts based on the management tasks assigned to the users.

o Implement strong authentication mechanisms like two-factor authentication using tokens, biometrics, etc.

o Strong password schemes, like minimum-length passwords and periodic change of passwords, should be enforced.

o Implement strong authorization using Access Control Lists to set up role-based access and appropriate permissions.

a) Correctness verification

o Error localization is a key prerequisite for eliminating errors in storage systems.

o We can achieve it by integrating correctness verification and error localization in our challenge-response protocol.

o The response values from the servers for each challenge not only determine the correctness of the distributed storage, but also contain information to locate errors.

b) Reliability of the analysis strategy

The reliability of the secure data storage strategy depends on the security procedure and the backup data coefficients. When one or more nodes cannot be accessed, the secure strategy can ensure that the data will be restored as long as one of the k nodes can be accessed. However, traditional data storage methods require all the data in the k nodes to be retrieved. Thus, the more blocks the data are split into, the poorer the reliability of traditional data storage.

V. CONCLUSION

This paper suggests a methodical application of "defence in depth" security techniques that can help allay security risks in networked storage. More importantly, a defence-in-depth based networked storage security policy provides a comprehensive framework to thwart future attacks as the current technologies become more clearly understood.

REFERENCES

[1] What is Cloud Computing? Retrieved April 6, 2011, available at: http://www.microsoft.com/business/engb/solutions/Pages/Cloud.aspx


[2] EMC, Information-Centric Security. http://www.idc.pt/resources/PPTs/2007/IT&Internet_Security/12.EMC.pdf

[3] End-User Privacy in Human-Computer Interaction. http://www.cs.cmu.edu/~jasonh/publications/fnt-enduser-privacy-in-human-computer-interaction-final.pdf

[4] ESG White Paper, The Information-Centric Security Architecture. http://japan.emc.com/collateral/analystreports/emc-white-paper-v4-4-21-2006.pdf

[5] S. Subashini and V. Kavitha, "A survey on security issues in service delivery models of cloud computing," Journal of Network and Computer Applications, vol. 34, no. 1, pp. 1-11, January 2011.

AUTHORS

First Author
Second Author



IMPROVED LOAD BALANCING MODEL BASED ON PARTITIONING IN CLOUD COMPUTING

G. Rajiv Ratnakar, II M.Tech, Audi Sankara College of Engineering, Rajivratnakar1239@gmail.com
CH. Madhu Babu, M.Tech, Associate Professor, Madhugist@gmail.com

Abstract: Load balancing plays a vital role in improving the performance of cloud servers. Implementing better load balancing schemes improves the performance of cloud servers, and thereby more users can be satisfied. As the number of cloud users increases day by day, the load on cloud servers needs to be managed. In this article load balancing schemes are analyzed, a better load balancing scheme is identified for a dynamic environment, and, to avoid overloading at a certain node, we implement some algorithms.

Introduction: Cloud computing is a large group of remote servers interconnected to provide a facility for sharing. Cloud computing is a high-utility technology and has the ability to change the IT software industry. Due to its simple architecture, most companies are adopting cloud computing. The number of cloud service providers is increasing day by day, as the number of cloud users increases in day-to-day life. The increase of web traffic and load makes load balancing a big research topic.

The term load balancing refers to the distribution of a larger processing load across smaller processing nodes to improve the overall performance of the system. An ideal load balancing algorithm should avoid overloading a certain node in a partitioned cloud. Security, reliability, response time and throughput are some factors to consider while choosing a good load balancing algorithm. The main aim of a good load balancing algorithm is to improve throughput and reduce the response time of the system.

In this article both static and dynamic schemes are discussed and analyzed, and thereby a good load balancing strategy is selected. When the user submits a job, the job arrives at the main controller, and the main controller allocates the job to a node.

Load balancing schemes: The goal is to distribute the load of multiple networks so as to achieve maximum throughput and minimize the response time.

Round Robin algorithm: Round robin gives equal priority to all jobs. Based on a time slice, each process is allotted a certain amount of time in round-robin order. It is very simple to implement.

Equally spread current execution: ESCG works based on job priorities. It distributes the load by checking the load and then shifting jobs to a virtual machine which is lightly loaded, so as to maximize throughput. It uses a spread-spectrum technique in which the load is spread over the virtual machines.

Throttled load balancing algorithm: In the throttled algorithm, a job manager maintains a list of virtual machines and the jobs assigned to them. The throttled algorithm finds an appropriate virtual machine for assigning a particular job. If all virtual machines are in a heavy-load state, it keeps jobs in a job queue until a virtual machine is free to process a job.
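A minimal sketch of the throttled policy just described, assuming a simple index table of free VMs and a FIFO job queue (the class and its names are illustrative, not taken from a specific simulator):

```python
# Throttled load balancing sketch: each job goes to an available VM;
# when all VMs are busy, the job waits in a queue until one is released.
from collections import deque

class ThrottledBalancer:
    def __init__(self, num_vms):
        self.available = set(range(num_vms))  # index table of free VMs
        self.queue = deque()                  # jobs waiting for a free VM

    def submit(self, job):
        """Assign the job to a free VM, or queue it if all are busy."""
        if self.available:
            vm = min(self.available)
            self.available.remove(vm)
            return vm
        self.queue.append(job)
        return None

    def release(self, vm):
        """A VM finished its job; hand it the next queued job, if any."""
        if self.queue:
            return self.queue.popleft(), vm
        self.available.add(vm)
        return None

lb = ThrottledBalancer(num_vms=2)
print(lb.submit("job1"))  # 0
print(lb.submit("job2"))  # 1
print(lb.submit("job3"))  # None (all VMs busy, job queued)
print(lb.release(0))      # ('job3', 0)
```

Unlike round robin, no VM ever receives a second job while it is still busy, which is the throttling behaviour the text describes.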

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

331

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

The analysis was performed on the static load balancing schemes round robin, equally spread current execution, and throttled individually; the overall response time, data center processing time, and total cost in dollars are listed in the following analysis.

Related work: Load balancing can be implemented in both static and dynamic schemes. Cloud Analyst is a tool provided by the Cloudbus organization to analyze the performance of the cloud. Cloud Analyst uses the following terminology.

From the above analysis we observe that, although different algorithms are used, there is no change in the response time of the three algorithms; the other factors also remain unchanged. So the use of static algorithms is not preferable.

User base: Here a user base represents a group of users rather than a single user. Data center: The data center manages data management activities such as virtual machine creation and destruction, and other routing schemes.

Table: analysis of static load balancing

                                     Round Robin   ESCG     Throttled
Overall response time (ms)           300.12        300.12   300.12
Data center processing time (ms)     0.35          0.35     0.35
VM cost ($)                          0.50          0.50     0.50
Total cost ($)                       16            16       16

Figure 1: Cloud Analyst tool

Proposed work: Using the Cloud Analyst tool, both static and dynamic schemes are analyzed, and thereby a better load balancing model and scheme is determined. Static schemes:

A static scheme does not analyze the system's state. It applies a single policy to every situation and is simple to implement. For example, if a static system uses the round robin algorithm, it uses the same algorithm for idle, normal, and busy states alike. It does not provide better results.


Implementing dynamic schemes: Dynamic schemes analyze the system's current status. Although dynamic schemes are more difficult to implement, they identify the current state of each server: idle (defined by a certain threshold), normal, or overloaded. If the system status is idle or normal, the main controller [2] allocates the job to a virtual machine for processing; if it is busy, the job waits until a virtual machine is idle. A dynamic load balancing model may implement more than one load balancing scheme. As shown in the figure below, load balancing here is implemented with more than one scheme: round robin, throttled, and equally spread current execution. By implementing more than one algorithm, the load on the server is reduced, and thereby greater user satisfaction is achieved.

Fig. 2: Model with dynamic schemes

The analysis of the load balancing model in a dynamic environment using RR, ESCG, and Throttled is shown in the tables below.

Algorithm:
  Start
  1. The user submits a job to the main controller.
  2. Identify the status of the partition.
  3. If state == idle or normal: process the job.
  4. Else: search for another partition.

Table: analysis of dynamic load balancing

                                     Round Robin   ESCG     Throttled
Overall response time (ms)           223.34        209.75   209.74
Data center processing time (ms)     13.84         10.11    10.10
VM cost ($)                          5.3           4.45     4.03
Total cost ($)                       18.3          18.3     18.3
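The partition-status check in the algorithm above can be sketched in Python. The threshold fractions and data structures are assumptions for illustration; the paper does not give concrete values:

```python
IDLE_MAX, NORMAL_MAX = 0.2, 0.8    # assumed load-ratio thresholds, not from the paper

def partition_state(load, capacity):
    """Classify a partition as idle, normal, or overloaded by its load ratio."""
    ratio = load / capacity
    if ratio <= IDLE_MAX:
        return "idle"
    return "normal" if ratio <= NORMAL_MAX else "overloaded"

def allocate(job, partitions):
    """Main-controller step: process the job on the first idle or normal
    partition, otherwise keep searching. Returns None if all are overloaded,
    in which case the job must wait for a free virtual machine."""
    for name, (load, capacity) in partitions.items():
        if partition_state(load, capacity) != "overloaded":
            partitions[name] = (load + 1, capacity)
            return name
    return None
```

This mirrors the if/else branch of the algorithm: idle or normal partitions accept the job directly; overloaded ones are skipped.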


Fig.: Response time graph

Although the cost is equal to, and even higher than, that of the static algorithms, there is a great variation in response time: the response time is much lower in the dynamic schemes. Regarding the dynamic schemes, if we observe cost and response time, the cost is similar across all three algorithms, but the response time varies. Based on response time, throttled and equally spread current execution are the best algorithms in dynamic load balancing environments.

Conclusion: Dynamic schemes are more effective at balancing load than static schemes. Among the dynamic schemes, throttled and equally spread current execution are better at balancing the load in dynamic load environments.

Acknowledgements: I would like to thank my head of department, Prof. M. Rajendra, and my guide, Mr. CH. Madhu Babu, for guiding me through this project. I also thank my parents, friends, and lecturers who contributed to this project.

References:
1. Gaochao Xu, Junjie Pang: "Load balancing model based on partitioning the public cloud."
2. Dr. Hemanth S. Mahalle: "Load balancing on cloud centers."
3. Bhathiya: "CloudAnalyst: a CloudSim-based visual modeller for analyzing cloud computing environments and applications."
4. Ramprasad Pandey, P. Gotham Prasad Rao: "Load balancing in cloud computing system."
5. www.cloudbus.org/cloudsim


Effective User Navigability Through Website Structure Reorganizing using Mathematical Programming Model

B. VAMSI 1, CH. SAI MURALI 2

1 PG Scholar, Dept of CSE, Audisankara College of Engineering & Technology, Gudur, AP, India.
2 Asst Prof, Dept of CSE, Audisankara College of Engineering & Technology, Gudur, AP, India.

Abstract—Designing a website is an easy task, but navigating users efficiently is a big challenge. One reason is that user behavior keeps changing, and web developers or designers do not think according to users' behavior. Designing well-structured websites to facilitate effective user navigation has long been a challenge in web usage mining, with applications such as navigation prediction and improvement of website management. This paper addresses how to improve a website without introducing substantial changes. Specifically, we propose a mathematical programming model to improve user navigation on a website while minimizing alterations to its current structure. Results from extensive tests conducted on a publicly available real data set indicate that our model not only significantly improves user navigation with very few changes, but also can be effectively solved. We have also tested the model on large synthetic data sets to demonstrate that it scales up very well. In addition, we define two evaluation metrics and use them to assess the performance of the improved website using the real data set. Evaluation results confirm that user navigation on the improved structure is indeed greatly enhanced. More interestingly, we find that heavily disoriented users are more likely to benefit from the improved structure than less disoriented users.

KEYWORDS—Website Design, User Navigation, Web Mining, Mathematical Programming

I. INTRODUCTION

Previous studies on websites have focused on a variety of issues, such as understanding web structures, finding relevant pages of a given page, mining the informative structure of a news website, and extracting templates from webpages. Our work, on the other hand, is closely related to the literature that examines how to improve website navigability through the use of user navigation data. Various works have made an effort to address this question, and they can generally be classified into two categories: facilitating a particular user by dynamically reconstituting pages based on his profile and traversal paths, often referred to as personalization; and modifying the site structure to ease navigation for all users, often referred to as transformation.

Facilitating effective user navigation through website structure improvement relies mainly on user navigation data. Despite heavy and increasing investments in website design, it is still revealed that finding desired information in a website is not easy and that designing effective websites is not a trivial task. A primary cause of poor website design is that the web developers' understanding of how a website should be structured can be considerably different from that of the users. Such differences result in cases where users cannot easily locate the desired information in a website. This problem is difficult to avoid because, when creating a website, web developers may not have a clear understanding of users' preferences and can only organize pages based on their own judgment. However, the measure of website effectiveness should be the satisfaction of the users rather than that of the developers. Thus, webpages should be organized in a way that generally matches the users' model of how pages should be organized.

This paper is concerned primarily with transformation approaches. The literature on transformation approaches mainly focuses on developing methods to completely reorganize the link structure of a website. Although there are advocates for website reorganization approaches, their drawbacks are obvious. First, since a complete reorganization could radically change the location of familiar items, the new website may disorient users.


Second, the reorganized website structure is highly unpredictable, and the cost of disorienting users after the changes remains unanalyzed. This is because a website’s structure is typically designed by experts and bears business or organizational logic, but this logic may no longer exist in the new structure when the website is completely reorganized. Besides, no prior studies have assessed the usability of a completely reorganized website, leading to doubts on the applicability of the reorganization approaches. Finally, since website reorganization approaches could dramatically change the current structure, they cannot be frequently performed to improve the navigability.


Recognizing the drawbacks of website reorganization approaches, a mathematical programming (MP) model is developed that facilitates user navigation on a website with minimal changes to its current structure. This model is particularly appropriate for informational websites whose contents are static and relatively stable over time; examples of organizations that have informational websites are universities, tourist attractions, hospitals, federal agencies, and sports organizations. The number of outward links in a page, i.e., the out-degree, is an important factor in modeling web structure. Prior studies typically model it as a hard constraint, so that pages in the new structure cannot have more links than a specified out-degree threshold, because having too many links in a page can cause information overload for users and is considered undesirable. This modeling approach, however, enforces severe restrictions on the new structure, as it prohibits pages from having more links than the threshold even if adding such links may greatly facilitate user navigation. Our model instead formulates the out-degree as a cost term in the objective function to penalize pages that have more links than the threshold, so a page's out-degree may exceed the threshold if the cost of adding such links can be justified. Extensive experiments are performed on a data set collected from a real website. The results indicate that the model can significantly improve the site structure with only a few changes, and that the optimal solutions of the MP model are effectively obtained, suggesting that the model is practical for real-world websites. The model is also tested with synthetic data sets. To assess user navigation on the improved website, the entire real data set is partitioned into training and testing sets. The training data is used to generate improved structures, which are evaluated on the testing data using simulations to approximate real usage. Two metrics are defined and used to assess whether user navigation is indeed enhanced on the improved structure. Particularly, the first metric measures whether the average user navigation is facilitated in the improved website, and the second metric measures how many users can benefit from the improved structure. Evaluation results confirm that user navigation on the improved website is greatly enhanced.

II. LITERATURE REVIEW

Web personalization is the process of "tailoring" webpages to the needs of specific users using information about the users' navigational behavior and profile data. Perkowitz and Etzioni describe an approach that automatically synthesizes index pages, which contain links to pages pertaining to particular topics, based on the co-occurrence frequency of pages in user traversals, to facilitate user navigation. The methods proposed by Mobasher et al. and Yan et al. create clusters of user profiles from weblogs and then dynamically generate links for users who are classified into different categories based on their access patterns. Nakagawa and Mobasher develop a hybrid personalization system that can dynamically switch between recommendation models based on the degree of connectivity and the user's position in the site. For reviews of web personalization approaches, see [21] and [27]. Web transformation, on the other hand, involves changing the structure of a website to facilitate navigation for a large set of users [28] instead of personalizing pages for individual users. Fu et al. [29] describe an approach to reorganize webpages so as to provide users with their desired information in fewer clicks. However, this approach considers only local structures in a website rather than the site as a whole, so the new structure may not be optimal. Gupta et al. [19] propose a heuristic method based on simulated annealing to relink webpages to improve navigability. This method makes use of aggregate user preference data and can be used to improve the link structure in websites for both wired and wireless devices. However, this approach does not yield optimal solutions and takes a relatively long time (10 to 15 hours) to run even for a small website. Lin [20] develops integer programming models to reorganize a website based on the cohesion between pages, to reduce information overload and search depth for users. In addition, a two-stage heuristic involving two integer-programming models is developed to reduce the computation time.
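The cost structure of the MP model introduced above, a count of newly established links plus a soft penalty on pages exceeding the out-degree threshold, can be evaluated for any candidate structure with a short sketch. The parameter names and toy link sets below are illustrative, not the paper's notation:

```python
def structure_cost(current_links, candidate_links, out_threshold, penalty):
    """Two-part cost: (1) links in the candidate structure that must be
    newly established, and (2) a penalty for each link beyond a page's
    out-degree threshold. `out_threshold` and `penalty` are illustrative
    modeling parameters."""
    new_links = sum(1 for link in candidate_links if link not in current_links)
    outdeg = {}
    for src, _dst in candidate_links:
        outdeg[src] = outdeg.get(src, 0) + 1
    excess = sum(max(0, d - out_threshold) for d in outdeg.values())
    return new_links + penalty * excess
```

Because the out-degree appears as a cost rather than a hard constraint, a candidate page may exceed the threshold when the navigation benefit justifies the extra penalty, which is exactly the softening the model argues for.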


III. METRIC FOR EVALUATING NAVIGATION EFFECTIVENESS

The main objective is to improve the navigation effectiveness of a website with minimal changes. Therefore, the first question is, given a website, how to evaluate its navigation effectiveness. Marsico and Levialdi point out that information becomes useful only when it is presented in a way consistent with the target users' expectations. Palmer indicates that an easily navigated website should allow users to access desired data without getting lost or having to backtrack. We follow these ideas and evaluate a website's navigation effectiveness based on how consistently the information is organized with respect to users' expectations.

This metric is related to the notion of information scent developed in the context of information foraging theory. Information foraging theory models the cost structure of human information gathering using the analogy of animals foraging for food and is a widely accepted theory for addressing the information-seeking process on the web. Information scent refers to proximal cues (e.g., the snippets of text and graphics of links) that allow users to estimate the location of the "distal" target information and determine an appropriate path.

Here, backtracks are used to identify the paths that a user has traversed, where a backtrack is defined as a user's revisit to a previously browsed page. The intuition is that users will backtrack if they do not find the page where they expect it. Thus, a path is defined as a sequence of pages visited by a user without backtracking, a concept similar to the maximal forward reference defined in Chen et al. Essentially, each backtracking point is the end of a path.

IV. PROBLEM FORMULATION

Problem description: Difficulty in navigation is reported as the problem that triggers most consumers to abandon a website and switch to a competitor. Generally, having traversed several paths to locate a target indicates that the user is likely to have experienced navigation difficulty. Therefore, webmasters can ensure effective user navigation by improving the site structure to help users reach targets faster. Easily navigated websites can create a positive attitude toward the firm and stimulate online purchases, whereas websites with low usability are unlikely to attract and retain customers. The model allows webmasters to specify a goal for user navigation that the improved structure should meet. This goal is associated with individual target pages and is defined as the maximum number of paths allowed to reach the target page in a mini session; we term this goal the path threshold for short. In other words, in order to achieve the user navigation goal, the website structure must be altered in such a way that the number of paths needed to locate the targets in the improved structure is not larger than the path threshold.

The problem of improving user navigation on a website while minimizing the changes to its current structure can then be formulated as a mathematical programming model. The objective function minimizes the cost needed to improve the website structure, where the cost consists of two components: 1) the number of new links to be established (the first summation), and 2) the penalties on pages containing excessive links, i.e., more links than the out-degree threshold in the improved structure (the second summation).

V. COMPUTATIONAL EXPERIMENTS AND PERFORMANCE EVALUATIONS

Extensive experiments were conducted, both on a data set collected from a real website and on synthetic data sets. We first tested the model with varying parameter values on all data sets. Then, we partitioned the real data into training and testing data: the training data is used to generate improved site structures, which were evaluated on the testing data using two metrics discussed in detail later. Moreover, we compared the results of our model with those of a heuristic.

VI. Evaluation of the Improved Website

In addition to the extensive computational experiments on both real and synthetic data sets, we also evaluate the improved structure to assess whether its navigation effectiveness is indeed enhanced, by approximating its real usage. Specifically, we partition the real data set into a training set (first three months) and a testing set (last month). We generate the improved structure using the training data, and then evaluate it on the testing data using two metrics: the average number of paths per mini session, and the percentage of mini sessions enhanced to a specified threshold. The first metric measures whether the improved structure can


facilitate users in reaching their targets faster than the current structure on average, and the second metric measures how likely users suffering navigation difficulty are to benefit from the improvements made to the site structure.

The evaluation procedure using the first metric consists of three steps, described as follows: 1. Apply the MP model on the training data to obtain the set of new links and links to be improved. 2. Acquire from the testing data the mini sessions that can be improved, i.e., those having two or more paths, their length (number of paths), and the set of candidate links that can be used to improve them. 3. For each mini session acquired in step 2, check whether any candidate link matches one of the links obtained in step 1, that is, the results from the training data. If yes, under the assumption that users will traverse the new link or the enhanced link in the improved structure, remove all pages (excluding the target page) visited after the source node of the session for the improved website, and get its updated length information.

VII. Mini Session and Target Identification

We employed the page-stay timeout heuristic to identify users' targets and to demarcate mini sessions. The intuition is that users spend more time on target pages. Page-stay time is a common implicit measurement found to be a good indicator of page/document relevance to the user in a number of studies. In the context of web usage mining, the page-stay timeout heuristic, as well as other time-oriented heuristics, is widely used for session identification and has been shown to be quite robust with respect to variations of the threshold values. The identification of target pages and mini sessions can be affected by the choice of page-stay timeout threshold. Because it is generally very difficult to unerringly identify mini sessions from anonymous user access data, we ran our experiments for different threshold values.

REFERENCES

[1] Pingdom, "Internet 2009 in Numbers," http://royal.pingdom.com/2010/01/22/internet-2009-in-numbers/, 2010.
[2] J. Grau, "US Retail e-Commerce: Slower But Still Steady Growth," http://www.emarketer.com/Report.aspx?code=emarketer_2000492, 2008.
[3] Internetretailer, "Web Tech Spending Static-But High-for the Busiest E-Commerce Sites," http://www.internetretailer.com/dailyNews.asp?id=23440, 2007.
[4] D. Dhyani, W.K. Ng, and S.S. Bhowmick, "A Survey of Web Metrics," ACM Computing Surveys, vol. 34, no. 4, pp. 469-503, 2002.
[5] X. Fang and C. Holsapple, "An Empirical Study of Web Site Navigation Structures' Impacts on Web Site Usability," Decision Support Systems, vol. 43, no. 2, pp. 476-491, 2007.
[6] J. Lazar, Web Usability: A User-Centered Design Approach. Addison Wesley, 2006.
[7] D.F. Galletta, R. Henry, S. McCoy, and P. Polak, "When the Wait Isn't So Bad: The Interacting Effects of Website Delay, Familiarity, and Breadth," Information Systems Research, vol. 17, no. 1, pp. 20-37, 2006.


AN EFFECTIVE APPROACH TO ELIMINATE TCP INCAST COLLAPSE IN DATACENTER ENVIRONMENTS

S Anil Kumar 1, G Rajesh 2

1 M.Tech 2nd year, Dept. of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India.
2 Asst Professor, Dept. of CSE, Audisankara College of Engineering & Technology, Gudur, A.P, India.

Abstract – Transport Control Protocol (TCP) many-to-one communication congestion happens in high-bandwidth and low-latency networks when two or more synchronized servers send data to the same receiver in parallel. For many key data-center applications such as MapReduce and Search, this many-to-one traffic pattern is common. Hence, TCP many-to-one communication congestion may severely degrade their performance, e.g., by increasing response time. In this paper, we explore many-to-one communication by focusing on the relationships between TCP throughput, round-trip time (RTT), and receive window. Unlike previous approaches, which moderate the impact of TCP incast congestion by using a fine-grained timeout value, the plan is to design an Incast congestion Control for TCP (ICTCP) scheme on the receiver side. In particular, this method changes the TCP receive window proactively before packet loss occurs.

Index terms – congestion control, many-to-one communication, round-trip time, TCP throughput.

I. INTRODUCTION

Transport Control Protocol (TCP) is widely used on the Internet and normally works fine. However, recent works have shown that TCP does not work well for many-to-one traffic patterns on high-bandwidth, low-latency networks. Congestion occurs when many synchronized servers under the same Gigabit Ethernet switch simultaneously send data to one receiver in parallel. Only after all connections have finished the data transmission can the next round be issued. Thus, these connections are also called barrier-synchronized. The final performance is determined by the slowest TCP connection, which may suffer from a timeout due to packet loss. The performance collapse of these many-to-one TCP connections is called TCP incast congestion. Data-center networks are well structured and layered to achieve high bandwidth and low latency, and the buffer size of top-of-rack (ToR) Ethernet switches is usually small, as shown in the figure below.

Fig. 1: Data-center network of a ToR switch connected to multiple rack-mounted servers.

A recent measurement study showed that a barrier-synchronized many-to-one traffic pattern is common in data-center networks, mainly caused by MapReduce and similar applications in data centers, as shown in the figure below.

Fig. 2: Incast congestion in data-center applications.

The root cause of TCP incast collapse is that the highly bursty traffic of multiple TCP connections overflows the Ethernet switch buffer in a short period of time, causing intense packet loss and thus TCP retransmissions and timeouts. Prior solutions focused on either reducing the response time for packet loss recovery with quicker retransmissions, or controlling switch buffer occupation to avoid overflow by using ECN and modified TCP on both the sender and receiver sides.

This paper focuses on avoiding packet loss before incast congestion, which is more appealing than recovery after loss; recovery schemes can be complementary to congestion avoidance. Our idea is to perform incast congestion avoidance at the receiver side. The receiver side can adjust the receive window size of each TCP connection, so the aggregate burstiness of all the synchronized senders is kept under control. We call our design Incast congestion Control for TCP (ICTCP). We first perform congestion avoidance at the system level. We then use the per-flow state to finely tune the receive window of each connection on the receiver side. The technical novelties of this work are as follows: 1) To perform congestion control on the receiver side, we use the available bandwidth on the network interface as a quota to coordinate the receive window increase of all incoming connections. 2) Our per-flow congestion control is performed independently in the slotted time of the round-trip time (RTT) of each connection, which is also the control latency in its feedback loop.


3) Our receive window adjustment is based on the ratio of the difference between the measured and expected throughput to the expected throughput. This allows us to estimate the throughput requirements from the sender side and adapt the receiver window accordingly. We also find that live RTT is necessary for throughput estimation, as we have observed that TCP RTT in a high-bandwidth, low-latency network increases with throughput, even if link capacity is not reached. We have developed and implemented ICTCP as a Windows Network Driver Interface Specification (NDIS) filter driver.

i) TCP Incast Congestion

Incast congestion happens when multiple sending servers under the same ToR switch send data to one receiver server simultaneously, as shown in Fig. 3.

Fig. 3: Scenario of incast congestion in data-center networks.

The amount of data transmitted by each connection is relatively small. In Fig. 4, we show the goodput achieved on multiple connections versus the number of sending servers.

Fig. 4: Total goodput of multiple barrier-synchronized TCP connections versus the number of senders, where the data traffic volume per sender is a fixed amount.

We first establish multiple TCP connections between all senders and the receiver. Then, the receiver sends out a (very small) request packet to ask each sender to transmit data. The TCP connections are issued round by round, and one round ends when all connections in that round have finished their data transfer to the receiver. We observe similar goodput trends for three different traffic amounts per server, but with slightly different transition points. TCP throughput is severely degraded by incast congestion, since one or more TCP connections can experience timeouts caused by packet drops. TCP variants sometimes improve performance, but cannot prevent incast congestion collapse, since most of the timeouts are caused by full-window losses due to Ethernet switch buffer overflow. The TCP incast scenario is common for data-center applications.
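The round-by-round, slowest-connection behaviour described above can be illustrated with a deterministic toy model. All parameter values (link rate, per-sender volume, 200 ms retransmission timeout) are assumptions for illustration, not measurements from the paper:

```python
def round_time(n_senders, bytes_per_sender, link_bps, rto_s, full_window_loss):
    """One barrier-synchronized round: it ends only when the slowest
    connection finishes, so a single full-window loss stalls the whole
    round for one retransmission timeout."""
    transfer = n_senders * bytes_per_sender * 8 / link_bps  # serialized on the bottleneck link
    stall = rto_s if full_window_loss else 0.0
    return transfer + stall

def goodput(n_senders, bytes_per_sender, link_bps, rto_s, full_window_loss):
    """Delivered bytes over round duration: collapses once any flow times out."""
    total = n_senders * bytes_per_sender
    return total / round_time(n_senders, bytes_per_sender, link_bps, rto_s, full_window_loss)
```

On a 1 Gb/s link with 64 KB per sender, the loss-free round lasts about 2 ms, so a single 200 ms timeout stall cuts goodput by roughly two orders of magnitude, which is the collapse shape that Fig. 4 describes as the sender count grows.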


ii) Reasons for TCP Incast Congestion

Incast congestion happens when the switch buffer overflows because the network pipe is not large enough to contain all TCP packets injected into the network. The ToR switch is usually low-end (compared to higher-layer switches), and thus its queue size is not very large. To constrain the number of packets in flight, TCP has two windows: a congestion window on the sender side and a receive window on the receiver side. This paper chooses TCP receive-window adjustment as its solution space. If the TCP receive window sizes are properly controlled, the total receive window size of all connections should be no greater than the base bandwidth-delay product (BDP) plus the queue size.

We observe that the TCP receive window can be used to throttle TCP throughput: it can be leveraged to handle incast congestion even though the receive window was originally designed for flow control. The benefit of an incast congestion control scheme at the receiver side is that the receiver knows how much throughput it has achieved and how much available bandwidth remains. The difficulty at the receiver side is that an overly throttled window may constrain TCP performance, while an oversized window may not prevent incast congestion. Therefore, the TCP connections could be incast or not, and the coexistence of incast and non-incast connections is handled.

Previous work focused on how to mitigate the impact of timeouts, which are caused by the large amount of packet loss under incast congestion. Given such high bandwidth and low latency, we focus instead on how to perform congestion avoidance to prevent switch buffer overflow. Avoiding unnecessary buffer overflow significantly reduces TCP timeouts and saves unnecessary retransmissions.

We focus on the typical incast scenario where dozens of servers are connected by a Gigabit Ethernet switch. In this scenario, the congestion point is right before the receiver. A recent measurement study showed that this scenario exists in data-center networks, and that the traffic between servers under the same ToR switch is actually one of the most significant traffic patterns in data centers, as locality is considered in job distribution.

II. SYSTEM DESIGN

Our goal is to improve TCP performance for incast congestion without introducing a new transport-layer protocol. Our transport-layer solution keeps backward compatibility on the protocol and programming interface, and makes the scheme general enough to handle incast congestion in future high-bandwidth, low-latency networks.

As the base RTT is hundreds of microseconds in data centers, our algorithm is restricted to adjusting the receive window only for TCP flows with an RTT of less than 2 ms; this constraint is designed to focus on low-latency flows. Based upon the following observations, our receive-window-based incast congestion control is intended to set a proper receive window for all TCP connections sharing the same last hop. Considering that there are many

INTERNATIONAL ASSOCIATION OF ENGINEERING & TECHNOLOGY FOR SKILL DEVELOPMENT

342

www.iaetsd.in


INTERNATIONAL CONFERENCE ON CURRENT INNOVATIONS IN ENGINEERING AND TECHNOLOGY

ISBN: 378 - 26 - 138420 - 5

TCP connections sharing the bottlenecked last hop before incast congestion, we adjust the TCP receive window to make those connections share the bandwidth equally. This is because, in a data center, parallel TCP connections may belong to the same job, where the last one to finish determines the final performance.

III. ICTCP ALGORITHM

ICTCP provides a receive-window-based congestion control algorithm for TCP at the end-system. The receive windows of all low-RTT TCP connections are jointly adjusted to control throughput under incast congestion. The ICTCP algorithm closely follows the design points made above. We describe how to set the receive window of a TCP connection.

i) Bandwidth Estimation

ICTCP uses the available bandwidth to decide whether the incoming connections on the receiver server may increase their receive windows. We develop ICTCP as an NDIS driver on the Windows OS. Our NDIS driver intercepts TCP packets and modifies the receive window size if needed. It is assumed there is one network interface on the receiver server, and we define symbols corresponding to that interface. The algorithm can be applied to a scenario where the receiver has multiple interfaces, in which case the connections on each interface should perform this algorithm independently. Assume the link capacity of the interface on the receiver server is L. Define the bandwidth of the total incoming traffic observed on that interface as BWT, which includes all types of packets, i.e., broadcast, multicast, and unicast of UDP or TCP, etc. Then, define the available bandwidth BWA that can be used to increase the incoming connections on that interface as

BWA = max(0, β*L − BWT)

where β ∈ [0, 1] is a parameter to absorb potentially oversubscribed bandwidth during window adjustment. A larger β indicates the need to more conservatively constrain the receive window and higher requirements for the switch buffer to avoid overflow; a lower β indicates the need to more aggressively constrain the receive window, but throughput could be unnecessarily throttled. ICTCP uses the available bandwidth BWA as the quota for all incoming connections to increase their receive windows for higher throughput. Each flow should estimate its potential throughput increase before its receive window is increased. Only when there is enough quota (BWA) can the receive window be increased, and the corresponding quota is consumed to prevent bandwidth oversubscription. To estimate the available bandwidth on the interface and provide a quota for a later receive-window increase, we divide time into slots. Each slot consists of two subslots of the same length. For each network interface, we measure all the traffic received in the first subslot and use it to calculate the available bandwidth as a quota for window increases in the second subslot. The receive window of any TCP connection is never increased in the first subslot, but may be decreased when congestion is detected or the receive window is identified as being over-satisfied.
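The slotted estimation above can be sketched as a small simulation. Only the formula BWA = max(0, β*L − BWT) and the quota-consumption rule come from the text; the class and method names, parameter values, and units below are illustrative assumptions, not the actual NDIS driver.

```python
# Sketch of ICTCP's per-interface bandwidth estimation, assuming a 1 Gbps
# link and the formula BWA = max(0, beta * L - BWT). Names and units are
# illustrative, not from the ICTCP implementation.

class BandwidthEstimator:
    def __init__(self, link_capacity_bps: float, beta: float = 0.9):
        assert 0.0 <= beta <= 1.0
        self.L = link_capacity_bps
        self.beta = beta
        self.quota_bps = 0.0   # BWA, refreshed once per slot

    def observe_first_subslot(self, total_bytes: int, subslot_s: float) -> None:
        # BWT: bandwidth of ALL incoming traffic (TCP, UDP, broadcast, ...)
        bwt = total_bytes * 8 / subslot_s
        self.quota_bps = max(0.0, self.beta * self.L - bwt)

    def try_consume(self, increase_bps: float) -> bool:
        # A receive-window increase is granted only if quota remains; the
        # granted amount is deducted to prevent bandwidth oversubscription.
        if increase_bps <= self.quota_bps:
            self.quota_bps -= increase_bps
            return True
        return False

est = BandwidthEstimator(link_capacity_bps=1e9, beta=0.9)
est.observe_first_subslot(total_bytes=10_000_000, subslot_s=0.1)  # BWT = 800 Mbps
print(est.quota_bps)         # ~1e8: a 100 Mbps quota for the second subslot
print(est.try_consume(6e7))  # granted, quota shrinks
print(est.try_consume(6e7))  # denied, not enough quota left
```

In this sketch, each connection would call try_consume with its estimated throughput increase during the second subslot, mirroring the quota check described above.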


ii) Flow Stack Information

A flow table maintains the key data structure in the receiver server. A flow is identified by a 5-tuple: source/destination IP address, source/destination port, and protocol. The flow table stores flow information for all the active flows. The packet header is parsed and the corresponding information is updated in the flow table. Network Driver Interface Specification filter functionalities are performed, such as collecting network statistics, monitoring activities, and filtering unauthorized ones. In ICTCP, each connection adjusts its receive window only when an ACK is sent out on that connection. No additional pure TCP ACK packets are generated solely for receive window adjustment, so no traffic is wasted. For a TCP connection, after an ACK is sent out, the data packet corresponding to that ACK arrives one RTT later. As a control system, the latency on the feedback loop is one RTT for each TCP connection. Meanwhile, to estimate the throughput of a TCP connection for a receive window adjustment, the shortest timescale is an RTT for that connection. Therefore, the control interval for a TCP connection in ICTCP is 2*RTT: one RTT of latency for the adjusted window to take effect, and one additional RTT to measure the achieved throughput with the newly adjusted receive window.

iii) Receive Window Adjustment

For any ICTCP connection, the receive window is adjusted based on its incoming measured throughput and its expected throughput. The measured throughput represents the achieved throughput on a TCP connection, and also implies the current requirement of the application over that TCP connection. The measured throughput of connection i is smoothed as

a_m,i = max(a_s,i, β*a_m,i + (1 − β)*a_s,i)

where a_m,i represents the incoming measured throughput and a_s,i represents a sample of the current throughput on connection i (here β is the smoothing factor of the moving average). The expected throughput represents our expectation of the throughput on that TCP connection if the throughput is only constrained by the receive window:

a_e,i = max(a_m,i, rwnd_i / RTT_i)

where a_e,i represents the expected throughput of connection i and rwnd_i represents the receive window of connection i. The throughput difference ratio of connection i is represented as

d_i = (a_e,i − a_m,i) / a_e,i

Our idea on receive window adjustment is to increase the window when the difference ratio of measured and expected throughput is small, and to decrease the window when the difference ratio is large.

iv) Fairness among Multiple Connections

When the receiver detects that the available bandwidth has become smaller than the threshold, ICTCP starts to decrease the receive window of selected connections to prevent congestion. Considering that multiple active TCP connections typically work on the same job at the same time in a data center, there is a method that can achieve
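The per-connection adjustment rule above can be sketched as follows. The three formulas (smoothed measured throughput, expected throughput, difference ratio) come from the text; the thresholds, the MSS-granularity step, and the function name are assumptions for illustration.

```python
# Illustrative sketch of ICTCP's per-connection receive-window adjustment:
# smooth the measured throughput, compute the expected throughput from
# rwnd/RTT, then act on the throughput-difference ratio. The thresholds
# and the one-MSS step size are assumed values, not from the paper.

MSS = 1460                    # bytes, assumed segment size
GAMMA_1, GAMMA_2 = 0.1, 0.5   # assumed low/high thresholds on the ratio

def adjust_rwnd(rwnd, rtt, a_m, a_s, beta=0.75):
    """Return (new_rwnd, new_a_m) for one control interval (2*RTT)."""
    # a_m,i = max(a_s,i, beta*a_m,i + (1 - beta)*a_s,i)
    a_m = max(a_s, beta * a_m + (1 - beta) * a_s)
    # Expected throughput if only the receive window is the constraint
    a_e = max(a_m, rwnd / rtt)
    d = (a_e - a_m) / a_e     # throughput difference ratio d_i
    if d <= GAMMA_1:
        rwnd += MSS           # ratio small: window is the bottleneck, grow
    elif d >= GAMMA_2:
        rwnd = max(2 * MSS, rwnd - MSS)  # ratio large: shrink toward demand
    return rwnd, a_m
```

For example, a flow achieving close to rwnd/RTT gets one more MSS of window, while a flow achieving far less than rwnd/RTT has its window trimmed, which is the increase/decrease behavior described above.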


fair sharing for all connections without sacrificing throughput.

IV. CONCLUSION

This paper presented an effective, practical and safe solution to eliminate TCP incast collapse in datacenter environments. Our approach utilizes the link bandwidth as fully as possible but without packet losses, by limiting the round-trip time value. Based on the concept of bandwidth-delay product, our technique conservatively estimates the reasonable number of concurrent senders. In this work we used Network Driver Interface Specification filter functionalities such as collecting network statistics, monitoring, and filtering unauthorized flows. We can avoid retransmissions, safely send the data to the receiver, and also avoid congestion and wasted traffic. Our system is capable of matching user preferences while achieving full utilization of the receiver's access in many different scenarios.

REFERENCES

[1] A. Phanishayee, E. Krevat, V. Vasudevan, D. Andersen, G. Ganger, G. Gibson, and S. Seshan, "Measurement and analysis of TCP throughput collapse in cluster-based storage systems," in Proc. USENIX FAST, 2008, Article no. 12.
[2] V. Vasudevan, A. Phanishayee, H. Shah, E. Krevat, D. Andersen, G. Ganger, G. Gibson, and B. Mueller, "Safe and effective fine-grained TCP retransmissions for datacenter communication," in Proc. ACM SIGCOMM, 2009, pp. 303-314.
[3] S. Kandula, S. Sengupta, A. Greenberg, P. Patel, and R. Chaiken, "The nature of data center traffic: Measurements & analysis," in Proc. IMC, 2009, pp. 202-208.
[4] J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," in Proc. OSDI, 2004, p. 10.
[5] M. Alizadeh, A. Greenberg, D. Maltz, J. Padhye, P. Patel, B. Prabhakar, S. Sengupta, and M. Sridharan, "Data center TCP (DCTCP)," in Proc. SIGCOMM, 2010, pp. 63-74.
[6] D. Nagle, D. Serenyi, and A. Matthews, "The Panasas ActiveScale storage cluster: Delivering scalable high bandwidth storage," in Proc. SC, 2004, p. 53.

AUTHORS

S Anil Kumar received his B.Tech degree in Computer Science & Engineering from Chadalavada Ramanamma Engineering College, Tirupathi, affiliated to JNTU, Anantapur, in 2009, and is pursuing the M.Tech degree in Computer Science & Engineering at Audisankara College of Engineering & Technology, Gudur, affiliated to JNTU, Anantapur (2012-2014).


G Rajesh, M.Tech, (Ph.D), is currently working as Assistant Professor in Audisankara College of Engineering and Technology, Gudur (M), Nellore, Andhra Pradesh, India. He has seven years of experience in teaching and two years of experience in the software industry. Previously he trained and worked with DSRC (Data Software Research Company), Chennai, as an Oracle Applications Functional Consultant, and he worked with Capgemini India Ltd, Mumbai, as a Software Engineer (Oracle Apps Technical Consultant) on contract through Datamatics Pvt Ltd. He is doing his Ph.D on "A Cross Layer Framework for Bandwidth Management of Wireless Mesh Networks".


Secured and Efficient Data Scheduling of Intermediate Data Sets in Cloud

D. TEJASWINI, M.TECH; C. RAJENDRA, M.TECH., M.E., PH.D.

AUDISANKARA COLLEGE OF ENGINEERING & TECHNOLOGY

ABSTRACT
Cloud computing is an emerging field in the development of business and organizational environments. As it provides more computation power and storage space, users can process many applications. Due to this, a large number of intermediate data sets are generated, and encryption and decryption techniques are used to preserve the intermediate data sets in cloud. An upper-bound-constraint-based approach is used to identify sensitive intermediate data sets, and we apply a suppression technique on sensitive data sets in order to reduce time and cost. The Value Generalization Hierarchy protocol is used to achieve more security, so that a number of users can access the data with privacy. Along with that, Optimized Balanced Scheduling is also used to find the best mapping solution that meets the system load balance to the greatest extent, or reduces the load-balancing cost. Privacy preservation is also ensured with dynamic data size and access frequency values. Storage space and computational requirements are optimally utilized in the privacy preservation process. Data distribution complexity is also handled in the scheduling process.
Keywords: Cloud computing, privacy upper bound, intermediate data sets, optimized balanced scheduling, value generalization hierarchy protocol.

1. INTRODUCTION
Cloud computing mainly relies on sharing of resources to achieve coherence and economies of scale, similar to a utility over a network. The foundation of cloud computing is the broader concept of converged infrastructure and shared services. The cloud mainly focuses on maximizing the effectiveness of shared resources. Cloud resources are not only shared by multiple users but also dynamically reallocated on demand. The privacy issues [12] caused by retaining intermediate data sets in cloud are important, but they have been paid little attention. For preserving the privacy [9] of multiple data sets, we should anonymize all data sets first and then encrypt them before storing or sharing them in cloud. Usually, the volume of intermediate data sets [11] is huge. Users will store only important data sets on cloud when processing original data sets in data-intensive applications such as medical diagnosis [16], in order to reduce the overall expenses by avoiding frequent recomputation to obtain these data sets. Such methods are quite common because data users often re-analyse results, conduct new analysis on intermediate data sets, and also share some intermediate results with others for collaboration. Data provenance is employed to manage the intermediate data sets. A number of tools for capturing provenance have been developed in workflow systems, and a standard for provenance representation called the Open Provenance Model (OPM) has been designed.

2. RELATED WORK
Encryption is usually integrated with other methods to achieve cost reduction, high data usability and privacy protection. Roy et al. [8] investigated the data privacy problem caused by MapReduce and presented a system named Airavat, which incorporates mandatory access control with differential privacy. Puttaswamy et al. [9] described a set of tools called Silverline, which identifies all encryptable data and then encrypts it to protect privacy. Encrypted data on the cloud prevent privacy leakage to compromised or malicious clouds, while users can easily access data by decrypting it locally with keys from a trusted organization. Using dynamic program analysis techniques, Silverline automatically identifies the encryptable application data that can be safely encrypted without negatively affecting the application functionality. By modifying the application runtime, e.g. the PHP interpreter, Silverline can determine an optimal assignment of encryption keys that minimizes key management overhead and the impact of key compromise. Applications running on the cloud can thus protect their data from security breaches or compromises in the cloud. Zhang et al. [10] proposed a system named Sedic, which partitions MapReduce computing jobs in terms of the security labels of the data they work on and then assigns the computation without sensitive data to a public cloud. The sensitivity of data is required to be labelled in advance to make the above approaches applicable. Ciriani et al. [10] proposed an approach that combines encryption and data fragmentation to achieve privacy protection for distributed data storage, encrypting only part of the data sets.

3. SYSTEM ARCHITECTURE

Fig 1: System Architecture For Secure Transaction Using The Cloud

Our approach mainly works by automatically identifying the subsets of an application's data that are not directly used in computation, and exposing them to the cloud only in encrypted form.
• We present a technique to partition encrypted data into parts that are accessed by different sets of users (groups). Intelligent key assignment limits the damage possible from a given key compromise, and strikes a good trade-off between robustness and key management complexity.
• We present a technique that enables clients to store and use their keys safely while preventing a cloud-based service from stealing the keys. Our solution works
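The group-based key partitioning in the first bullet can be sketched as follows. The one-key-per-group-set policy matches the bullet's description; the function names, group names, and key sizes are illustrative assumptions.

```python
# Sketch of key partitioning by access group: items readable by the same
# set of user groups share one key, so a single compromised key exposes
# only that partition. Names and the 32-byte key size are assumptions.

import os

def assign_keys(access: dict) -> tuple:
    """access: item -> frozenset of groups that may read it.
    Returns (item_key, partition_key), one key per distinct group set."""
    partition_key = {}
    item_key = {}
    for item, groups in access.items():
        if groups not in partition_key:
            partition_key[groups] = os.urandom(32)  # one key per partition
        item_key[item] = partition_key[groups]
    return item_key, partition_key

access = {
    "profile": frozenset({"owner"}),
    "photos":  frozenset({"owner", "friends"}),
    "posts":   frozenset({"owner", "friends"}),
}
item_key, partition_key = assign_keys(access)
print(len(partition_key))                       # two group sets -> two keys
print(item_key["photos"] == item_key["posts"])  # same group set, same key
```

Compromising the "owner, friends" key here would expose photos and posts, but not the owner-only profile partition, which is the damage-limiting property the bullet describes.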


today on unmodified web browsers. There are many privacy threats caused by the intermediate data sets, so we need to encrypt these data sets to provide privacy and make them secure.


Fig.2: A Scenario Showing Privacy Threats Due To Intermediate Datasets

4. IMPLEMENTATION
4.1 Requirements
The problem of managing the intermediate data generated during dataflow computations deserves deeper study as a first-class problem. There are two major requirements that any effective intermediate storage system needs to satisfy: availability of intermediate data, and minimal interference on foreground network traffic generated by the dataflow computation.
Data Availability: A task in a dataflow stage cannot be executed if its intermediate input data is unavailable. A system that provides higher availability for intermediate data will suffer fewer delays for re-executing tasks in case of failure. In multi-stage computations, high availability is critical, as it minimizes the effect of cascaded re-execution.
Minimal Interference: At the same time, data availability cannot be pursued over-aggressively. In particular, since intermediate data is used immediately, there is high network contention for the foreground traffic of the intermediate data transferred to the next stage. An intermediate data management system therefore needs to minimize interference.

4.2 Privacy Preserved Data Scheduling Scheme
Here, multiple intermediate data set privacy models are combined with the data scheduling mechanism. Privacy preservation is ensured with the dynamic data size and access frequency values; along with that, storage space and computational requirements are optimally utilized in the privacy preservation process, and data distribution complexity is also handled in the scheduling process. Data sensitivity is considered in the intermediate data security process. Resource requirement levels are monitored and controlled by the security operations. The system is divided into five major modules: data center, data provider, intermediate data privacy, security analysis, and data scheduling. The data center maintains the encrypted data values for the providers. Shared data uploading processes are managed by the data provider module. The intermediate data privacy module is designed to protect intermediate results. The security analysis module is designed to estimate the resource and access levels. Original data and intermediate data distribution is planned under the data scheduling module. Dynamic privacy management and the scheduling mechanism are integrated to improve data sharing with security. The privacy-preserving cost is reduced by the joint verification mechanism.

4.3 Analysis of the Cost Problem
A cloud service provides various pricing models to support the pay-as-you-go model, e.g., the Amazon Web Services pricing model [4]. The privacy-preserving cost of the intermediate data sets can be reduced from frequent encryption or decryption with charged cloud services, which need more computation power, data storage, and other cloud services. To avoid the pricing


details and to stay focused, we combine the prices of the various services required by encryption or decryption into one.

4.4 Proposed Framework
The technique we use for privacy protection here is the Value Generalization Hierarchy protocol, which assigns common values to unknown and original data values for general identification; later on, we add full suppression on the more important data sets, which provides complete encryption of the entire data sets given. We investigate privacy-aware and efficient scheduling of intermediate data sets for minimum cost and fast computation. Suppression of data is done to reduce the overall computation time and cost, and the VGH protocol is also proposed to achieve it. Here we secure less critical sensitive data sets through semi-suppression only, and use full suppression to achieve high privacy or security of the original data sets; the original data set is viewed only by its owner. In this way, a number of users can access the data with security while privacy leakage is avoided. The privacy protection cost applies to the intermediate data sets that need to be encoded, and an upper bound constraint-based approach is used to select the necessary subset of intermediate data sets. The privacy concerns caused by retaining intermediate data sets in cloud are important. Storage and computation services in cloud are equivalent from an economical perspective because they are charged in proportion to their usage. Existing technical approaches for preserving the privacy of data sets stored in cloud mainly include encryption and anonymization. On one hand, encrypting all data sets, a straightforward and effective approach, is widely adopted in current research. However, processing encrypted data sets efficiently is quite a challenging task, because most existing applications only run on unencrypted data sets. Thus, for preserving privacy of multiple data sets, it is promising to anonymize all data sets first and then encrypt them before storing or sharing them in cloud. Usually, the volume of intermediate data sets is huge. Data sets are divided into two sets: one is the sensitive intermediate data set and the other is the non-sensitive intermediate data set. The sensitive data set is denoted as SD and the non-sensitive data set as NSD. The equations SD ∪ NSD = D and SD ∩ NSD = ∅ hold, and the pair (SD, NSD) acts as a global privacy-preserving view of the cloud data. The suppression technique is applied only on sensitive data sets in two ways, semi-suppression and full suppression: full suppression is applied on the most important sensitive intermediate data sets, whose individual values are fully encoded, while semi-suppression is applied on selected sensitive data sets, where half of each value is encoded. We also propose the Value Generalization Hierarchy (VGH) protocol to reduce cost of data
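The semi/full suppression split described above can be sketched as follows. The masking character and the "half of the value" rule are illustrative assumptions; the paper does not fix a concrete encoding.

```python
# Sketch of the suppression step: full suppression for the most sensitive
# data sets (SD subset), semi-suppression for other sensitive ones, and
# no change for non-sensitive (NSD) ones. The '*' mask is an assumption.

def full_suppress(value: str) -> str:
    """Fully encode an individual value of a most-sensitive data set."""
    return "*" * len(value)

def semi_suppress(value: str) -> str:
    """Encode half of the value of a selected sensitive data set."""
    half = len(value) // 2
    return value[:half] + "*" * (len(value) - half)

def suppress_datasets(datasets, sensitive, most_sensitive):
    out = {}
    for name, value in datasets.items():
        if name in most_sensitive:
            out[name] = full_suppress(value)
        elif name in sensitive:
            out[name] = semi_suppress(value)
        else:
            out[name] = value          # NSD: left untouched
    return out

print(suppress_datasets({"d1": "diagnosis", "d2": "age=42", "d3": "city"},
                        sensitive={"d1", "d2"}, most_sensitive={"d1"}))
# d1 fully masked, d2 half masked, d3 unchanged
```

Semi-suppression keeps part of the value usable for analysis while full suppression hides the whole value, which is the cost/privacy trade-off the framework exploits.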

4.5 Optimized Balanced Scheduling
Optimized balanced scheduling is used to find the best mapping solution that meets the system load balance to the greatest extent, or lowers the load-balancing cost. The best scheduling solution for the current scheduling process can be found through a genetic algorithm. First we compute the cost through the ratio of the current scheduling solution to the best scheduling solution, and then we make the best scheduling strategy according to that cost, so that it has the least influence on the load of the system after scheduling and the lowest cost to reach load balance. In this way, we can form the best scheduling strategy.
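The cost-ratio test above can be sketched with a minimal scorer. A real implementation would generate candidate mappings with a genetic algorithm; here candidates are given, and the standard-deviation cost function is an illustrative assumption.

```python
# Minimal sketch of the cost test: candidate mappings are scored by the
# ratio of their load-balancing cost to the best-known cost, and the
# lowest-ratio candidate is kept. Candidate generation (the GA part) is
# omitted; the pstdev cost function is an assumption for illustration.

import statistics

def balance_cost(mapping, n_nodes):
    """Std. deviation of per-node load: lower = better balanced."""
    loads = [0.0] * n_nodes
    for node, size in mapping:
        loads[node] += size
    return statistics.pstdev(loads)

def pick_best(candidates, n_nodes):
    costs = [balance_cost(m, n_nodes) for m in candidates]
    best = min(costs)
    # ratio of each solution's cost to the best solution's cost
    ratios = [c / best if best else 1.0 for c in costs]
    return candidates[ratios.index(min(ratios))]

# Three candidate placements of four data sets (node, size) on 2 nodes
cands = [
    [(0, 10), (0, 20), (1, 5), (1, 5)],   # loads 30 / 10
    [(0, 10), (1, 20), (0, 5), (1, 5)],   # loads 15 / 25
    [(0, 10), (1, 20), (1, 5), (0, 5)],   # loads 15 / 25
]
print(pick_best(cands, 2))  # a 15/25 split beats the 30/10 one
```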

5. CONCLUSION
In this paper, the focus is mainly on identifying the areas where the most sensitive intermediate data sets are present in cloud. An upper bound constraint based approach is used to decide which data sets need to be


encoded, in order to reduce the privacy-preserving cost. We investigate privacy-aware, efficient scheduling of intermediate data sets in cloud by taking privacy preservation as a metric together with other metrics such as storage and computation. Optimized balanced scheduling strategies are expected to be developed toward overall highly efficient privacy-aware data set scheduling, mainly for overall time reduction, and data delivery overhead is reduced by the load-balancing-based scheduling mechanism. A dynamic privacy preservation model is supported by the system, and along with that, high security provisioning is done with the help of full suppression, semi-suppression and the Value Generalization Hierarchy protocol. This protocol is used to assign a common attribute for different attributes, and resource consumption is also controlled with the support of the sensitive data information graph.

6. REFERENCES
[1] L. Wang, J. Zhan, W. Shi, and Y. Liang, "In Cloud, Can Scientific Communities Benefit from the Economies of Scale?," IEEE Trans. Parallel and Distributed Systems, vol. 23, no. 2, pp. 296-303, Feb. 2012.
[2] Xuyun Zhang, Chang Liu, Surya Nepal, Suraj Pandey, and Jinjun Chen, "A Privacy Leakage Upper Bound Constraint-Based Approach for Cost-Effective Privacy Preserving of Intermediate Data Sets in Cloud," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, June 2013.
[3] D. Zissis and D. Lekkas, "Addressing Cloud Computing Security Issues," Future Generation Computer Systems, vol. 28, no. 3, pp. 583-592, 2011.
[4] D. Yuan, Y. Yang, X. Liu, and J. Chen, "On-Demand Minimum Cost Benchmarking for Intermediate Data Set Storage in Scientific Cloud Workflow Systems," J. Parallel Distributed Computing, vol. 71, no. 2, pp. 316-332, 2011.
[5] K. Zhang, X. Zhou, Y. Chen, X. Wang, and Y. Ruan, "Sedic: Privacy-Aware Data Intensive Computing on Hybrid Clouds," Proc. 18th ACM Conf. Computer and Comm. Security (CCS '11), pp. 515-526, 2011.
[6] H. Lin and W. Tzeng, "A Secure Erasure Code-Based Cloud Storage System with Secure Data Forwarding," IEEE Trans. Parallel and Distributed Systems, vol. 23, no. 6, pp. 995-1003, June 2012.
[7] G. Wang, Z. Zutao, D. Wenliang, and T. Zhouxuan, "Inference Analysis in Privacy-Preserving Data Re-Publishing," Proc. Eighth IEEE Int'l Conf. Data Mining (ICDM '08), pp. 1079-1084, 2008.
[8] K.P.N. Puttaswamy, C. Kruegel, and B.Y. Zhao, "Silverline: Toward Data Confidentiality in Storage-Intensive Cloud Applications," Proc. Second ACM Symp. Cloud Computing (SoCC '11), 2011.
[9] I. Roy, S.T.V. Setty, A. Kilzer, V. Shmatikov, and E. Witchel, "Airavat: Security and Privacy for Mapreduce," Proc. Seventh USENIX Conf. Networked Systems Design and Implementation (NSDI '10), p. 20, 2010.
[10] X. Zhang, C. Liu, J. Chen, and W. Dou, "An Upper-Bound Control Approach for Cost-Effective Privacy Protection of Intermediate Data Set Storage in Cloud," Proc. Ninth IEEE Int'l Conf. Dependable, Autonomic and Secure Computing (DASC '11), pp. 518-525, 2011.
[11] B.C.M. Fung, K. Wang, R. Chen, and P.S. Yu, "Privacy-Preserving Data Publishing: A Survey of Recent Developments," ACM Computing Survey, vol. 42, no. 4, pp. 1-53, 2010.
[12] H. Lin and W. Tzeng, "A Secure Erasure Code-Based Cloud Storage System with Secure Data Forwarding," IEEE Trans. Parallel and Distributed Systems, vol. 23, no. 6, pp. 995-1003, June 2012.


Key Reconstruction and Clustering Opponent Nodes in Minimum Cost Blocking Problems

D. Sireesha, M.Tech (CSE), ASCET
Siri3835@gmail.com

Prof. C. Rajendra
srirajendra.c@gmail.com

ABSTRACT: This paper detects malicious activities on wireless mesh networks running wireless routing protocols. During route discovery, malicious attacks attempt to compromise nodes through activities such as network partitioning and node isolation. The cardinal covering in the routing protocols addresses the packets in the network. When attacks take place, the attacker node is bypassed and another way is chosen to send the packet from source to destination, but the attacker cannot thereby be removed from the wireless network. Here we propose looking into clustering protocols along with ID-based key update protocols, which is very promising in wireless networks. Cryptography techniques deliver the packet in an assured way from source to destination, but on their own they do not secure the network/system.
Keywords: Wireless mesh networks, clustering, MSR protocol, adversary nodes, updating key.

1. INTRODUCTION:

Wireless mesh networks (WMNs) have been put forward as a bright conception to meet the disputes of succeeding-generation wireless networks, providing flexibility, adaptability, and reconfigurability of the network with cost-efficient solutions to the service provider. They have the possibility to wipe out many disadvantages, offering low-cost wireless broadband internet access for both wired and mobile nodes. Wireless mesh networks are an emerging technology right now, and their use may bring the dream of a seamlessly connected world into reality.

Wireless mesh networks include mesh routers and mesh clients. Getting rid of the wires from wireless LANs, the doubled access point is known as a mesh router, and the mesh routers form a backbone network. A mesh network can be designed using a flooding technique or a routing technique. [3] When using a routing technique, the message is propagated along a path by hopping from node to node until the destination is reached. To ensure all its paths' availability, a routing network must allow for continuous


connections and reconfiguration around broken or blocked paths using self-healing algorithms. A mesh network whose nodes are all connected to each other is a fully connected network. Mesh networks can be seen as one type of mobile ad hoc network; mobile ad hoc networks (MANETs) [1] and mesh networks are therefore closely related. The mesh routers provide rich radio connectivity, which significantly brings down the direct deployment cost of the network. Mesh routers are usually stationary and have no power constraint, while mesh clients are mobile nodes. Sometimes mesh routers also act as gateways, connected to the internet through a wired backbone.

Multi-path routing protocols for wireless mesh networks make it particularly hard for an adversary to effectively launch such attacks. We attempt to model the theoretical hardness of attacks on multi-path routing protocols for mobile nodes and quantify it in mathematical terms. The study will impact the security and robustness of routing protocols for wireless mesh networks [3], threshold cryptography and network coding.

2. RELATED WORK:

Privacy and security play a major role in communication networks, both wired and wireless, and security issues arise in packet transmission from the source to the destination nodes: the data must be secured from an opponent without altering its content. Different techniques have been introduced to secure data from opponents; in particular, cryptographic key distribution techniques are integrated with clustering algorithms that group the nodes of the mesh network, give the groups different IDs, and move a misbehaving node out of the affected network by re-clustering the nodes.

In previous work, every node had a unique random ID, and routing occurred through protocols such as greedy and LP algorithms. The problems identified in those investigations are that if an opponent hacks any node in the network, it can retrieve a packet before it reaches the destination, and, knowing the topological information of the network, it may mis-send packets without the permission of authenticated nodes. To reduce these complications, clustering concepts [4] were incorporated, i.e., grouping the nodes of the mesh network and protecting data from adversary nodes using the MSR (multipath split routing) protocol, together with further techniques for protecting packets from unauthorized nodes. This demonstrates the higher quality of multipath routing protocols over traditional single-path protocols in terms of resiliency under blocking and node-isolation attacks, particularly in the wireless domain. Clustering [4] and cryptography are therefore combined in this work. Grouping sensor nodes into clusters has been widely investigated by researchers


in order to achieve the network's scalability and management objectives. Every cluster has a leader sensor node, often referred to as the cluster-head (CH), which can be fixed or variable. CHs aggregate the data collected by the sensor nodes in their cluster; thus, clustering decreases the number of relayed packets. Among its benefits, clustering conserves bandwidth and avoids redundant exchange of messages among sensor nodes by limiting inter-cluster interactions. Therefore, clustering prolongs the lifetime of WSNs [2][4].

(A) How does a mesh network work?

While traditional networks rely on a small number of wired access points or wireless hotspots to connect users, a wireless mesh network spreads a network connection among dozens or even hundreds of wireless mesh nodes that "talk" to each other, sharing the network connection across a large area. Some think of the internet as the world's largest mesh network. When using the internet, information travels by being bounced automatically from one router to the next until it reaches its destination. The internet has billions of potential paths across which data can travel.

3. PAPER ORGANIZATION:

The rest of this paper is organized as follows: section 4 discusses the implementation process, section 4.1 describes the multi-path split routing protocol, section 4.2 introduces the finite state machine used to record the behavior of nodes, section 4.3 presents the clustering algorithm implementation details, and section 4.4 discusses the key updating process after clustering adversary nodes; section 5 concludes the paper with a discussion of future work.

4. IMPLEMENTATION:

Wireless mesh networks are a key technology for next-generation communication networks, showing rapid progress and numerous applications. In a WMN, high-speed routers are integrated with each other in a multi-hop fashion over wireless channels and form a broadband backhaul. The preceding analysis showed that, in wireless mesh networks, a packet travelling from source to destination should be sent along a secure path by providing a key with the packet. An adversary node involved in a WMN can learn the topological information of the network and, consequently, the routing details. To secure the packet from the opponent, we need to limit what adversary nodes can learn about the topological information [6] and provide another key with the packet. This paper therefore introduces clustering together with cryptographic key regeneration. Whether a node is acting normally can be analyzed using a finite state machine: the behavior of each node is noted in a table, misbehaving nodes are clustered under one unique ID, and authorized nodes under another. Upon creating different IDs for the misbehaving nodes, the authorized nodes' packet details have to


be modified. This is done by re-generating the key to protect data from misbehaving opponents. Here we introduce the MSR protocol for generating maximally disjoint paths in the network and show how MSR works in the selection of routes. To observe the behavior of the nodes, we incorporate a finite state machine model, examining each node's behavior with respect to the status of packet delivery and so on. A selfish node is identified in the network based on data collected by each node from its local message unit (LMU). The misbehaving nodes are clustered into one group and the authorized ones into another. Grouping opponents alone, however, does not secure the packets; providing a key remains an essential task, so re-generating algorithms are introduced that provide a key in the packet header to protect the data.

4.1 MULTI-PATH SPLIT ROUTING:

Multipath Split Routing (MSR) is a protocol that builds maximally disjoint paths. Multiple routes, of which one is the shortest-delay path, are discovered on demand. Established routes are not necessarily of equal length. Data traffic is split into multiple routes to avoid congestion [5] and to use network resources efficiently. We believe providing multiple routes is beneficial in network communications, particularly in mobile wireless networks where routes are disconnected frequently because of mobility and poor wireless link quality.

A. ROUTE DISCOVERY OF MSR:

Multipath split routing builds multiple routes using request/reply cycles. When the source needs routes to the destination but no route information is known, it floods a route request message (RREQ) to the entire network. Because of this flooding, several replicas that travel through different routes reach the destination. The destination node selects multiple disjoint [5] routes and sends route reply (RREP) packets back to the source through the selected routes.

4.2 THE FINITE STATE MACHINE MODEL:

In the proposed mechanism, the messages corresponding to a RREQ flooding and the unicast RREP are referred to as a message unit. It is clear that no node in the network can observe all the transmissions in a message unit. The subset of a message unit that a node can observe is referred to as the local message unit (LMU). The LMU for a particular node consists of the messages transmitted by the node and its neighbors, and the messages overheard by the node. Selfish node detection is done based on the data each node collects from its observed LMUs. For each message transmission in an LMU, a node maintains a record of its sender, the receiver, and the neighbor nodes that receive the RREQ broadcast sent by the node itself.
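The destination-side route selection described above can be sketched as follows. This is an illustrative reconstruction under our own assumptions (a node-disjointness test and a greedy shortest-first order), not the authors' implementation.

```python
def select_disjoint_routes(candidate_routes, max_routes=2):
    """Greedily pick node-disjoint routes from the paths recorded by
    RREQ replicas, approximating MSR's maximally disjoint selection.
    Each route is a list of node ids from source to destination."""
    ordered = sorted(candidate_routes, key=len)  # shortest-delay path first
    selected = [ordered[0]]
    used = set(ordered[0][1:-1])                 # intermediate nodes in use
    for route in ordered[1:]:
        if len(selected) == max_routes:
            break
        middle = set(route[1:-1])
        if not middle & used:                    # disjoint with all selected
            selected.append(route)
            used |= middle
    return selected

# Paths over which RREQ replicas reached destination D from source S.
routes = [["S", "A", "B", "D"], ["S", "A", "C", "D"], ["S", "E", "F", "D"]]
print(select_disjoint_routes(routes))
```

The destination would then unicast an RREP back to the source along each selected route.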


The finite state machine depicts the various states in which a node may exist for each LMU [12]:

1. Init – in the initial phase, no RREQ is observed.
2. Unexp RREP – receipt of a RREP is observed although no RREQ was observed.
3. Recv RREQ – receipt of a RREQ is observed.
4. Frwd RREQ – forwarding of a RREQ is observed.
5. Timeout RREQ – timeout after receipt of a RREQ.
6. Recv RREP – receipt of a RREP is observed.
7. LMU complete – forwarding of a valid RREP is observed.
8. Timeout RREP – timeout after receipt of a RREP.

Fig 1: Finite state machine of a monitored node

The final states are shaded. Each message sent by a node causes a transition in each of its neighbors' finite state machines. The finite state machine in one neighbor gives only a local view of the activities of that node; it does not in any way reflect the overall behavior of the node. The collaboration of the neighbor nodes makes it possible to get an accurate picture of the monitored node's behavior. In the rest of the paper, a node being monitored by its neighbors is referred to as a monitored node [8], and its neighbors are referred to as monitor nodes. In the protocol, each node plays the dual role of a monitor node and a monitored node.

Each monitor node observes a series of interleaved LMUs for a routing session. Each LMU can be identified by the source-destination pair contained in a RREQ message. At the start of a routing session, a monitored node is at state 1 in its finite state machine. As the monitor node(s) observe the behavior of the monitored node based on the LMUs, they record transitions from its initial state 1 to one of its possible final states: 5, 7 and 8.

When a monitor node broadcasts a RREQ, it assumes that the monitored node has received it; the monitor node therefore records a state transition 1->3 in the monitored node's finite state machine. If a monitor node observes a monitored node broadcast a RREQ, then a state transition 3->4 is recorded if the RREQ message was previously sent by the monitor node to the monitored node; otherwise a transition 1->4 is recorded, since in this case the RREQ was received by the monitored node from some other neighbor. The transition to a timeout state occurs when a monitor node finds no activity of the monitored node in the LMU before the expiry of a timer. When a monitor node observes a monitored node forward a RREP, it records a transition to the final state, LMU complete (state 7).
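These monitoring rules can be written down as a small transition table. The state numbers follow Fig 1; the event names and code structure are our own illustrative choices, and the table is a partial sketch (it omits, for example, the Unexp RREP case).

```python
# States (Fig 1): 1 Init, 3 Recv RREQ, 4 Frwd RREQ, 5 Timeout RREQ,
# 6 Recv RREP, 7 LMU complete, 8 Timeout RREP. Final states: 5, 7, 8.
TRANSITIONS = {
    (1, "monitor_sent_rreq"): 3,          # assume the monitored node got it
    (1, "monitored_broadcasts_rreq"): 4,  # RREQ came from some other neighbor
    (3, "monitored_broadcasts_rreq"): 4,  # it forwards the RREQ we sent
    (4, "monitor_sent_rreq"): 4,
    (4, "monitored_broadcasts_rreq"): 4,
    (4, "monitor_sent_rrep"): 6,          # the monitored node receives a RREP
    (6, "monitored_forwards_rrep"): 7,    # final: LMU complete
    (4, "timeout"): 5,                    # final: Timeout RREQ
    (6, "timeout"): 8,                    # final: Timeout RREP
}
FINAL = {5, 7, 8}

def run_monitor(events, state=1):
    """Replay the events one monitor node observes for one monitored
    node and return the sequence of recorded states."""
    trace = [state]
    for event in events:
        state = TRANSITIONS[(state, event)]
        trace.append(state)
        if state in FINAL:
            break
    return trace

# Monitored node X as seen by monitor N (the X row of Table 1): X
# broadcasts a RREQ, N broadcasts its own RREQ, N sends X a RREP,
# and X forwards that RREP, completing the LMU.
events = ["monitored_broadcasts_rreq", "monitor_sent_rreq",
          "monitor_sent_rrep", "monitored_forwards_rrep"]
print(run_monitor(events))
```

Replaying the X row of Table 1 this way yields the transitions 1 to 4, 4 to 4, 4 to 6 and 6 to 7 recorded there.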


At this state, the monitored node becomes a candidate for inclusion on a routing path. When a final state is reached, the state machine terminates, and the state transitions are stored by each node for each of its neighbors. After a sufficient number of events has been collected, a statistical analysis is performed to detect the presence of any selfish nodes.

Fig 2: An example local message unit (LMU) observed by node N

The figure above depicts an example of an LMU observed by node N during the discovery of a route from the source node S to the destination node D, indicated by bold lines. Table 1 shows the events observed by node N and the corresponding state transitions for each of its three neighbor nodes X, Y and Z.

Table 1: The state transitions of the neighbor nodes of node N

Neighbor   Events                  State changes
X          X broadcasts RREQ.      1 to 4
           N broadcasts RREQ.      4 to 4
           N sends RREP to X.      4 to 6
           X sends RREP to S.      6 to 7
Y          Y broadcasts RREQ.      1 to 4
           N broadcasts RREQ.      4 to 4
           Timeout.                4 to 5
Z          N broadcasts RREQ.      1 to 3
           Z broadcasts RREQ.      3 to 4
           Z sends RREP to N.      4 to 7

4.3 CLUSTERING ALGORITHM:

For clustering, mesh networks need to perform two phases: clustering setup and clustering maintenance. The first phase is accomplished by choosing some nodes that act as coordinators of the clustering process (cluster heads). A cluster is then formed by associating a cluster head with some of its neighbors, which become the ordinary nodes of the cluster, based on the IDs and the knowledge of the neighbors. By introducing the Distributed Mobility-Adaptive Clustering (DMAC, for short) algorithm, we obtain a clustering algorithm suitable for both the initial clustering setup and its maintenance, with the following properties:

 Nodes can move even during the clustering setup. DMAC is adaptive: it accommodates changes in the topology of the network, including the removal of any node from the network.
 DMAC is fully distributed. A node decides its own role (i.e., cluster head or ordinary node) solely by knowing its current one-hop neighbors.
 Every ordinary node always has direct access to at least one cluster head; thus, the nodes in a cluster are at most two hops apart. This guarantees fast intra-cluster communication and fast inter-cluster exchange of information between any pair of nodes.
 DMAC uses a general mechanism for the selection of the cluster heads: the choice is based on generic weights associated with the nodes.
 The number of cluster heads that are allowed to be neighbors is a parameter of the algorithm.
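A minimal sketch of a weight-based cluster-head election with the properties listed above. This is our own centralized simulation of what each node would decide locally, not the published DMAC pseudocode; the data layout and tie-breaking rule are assumptions.

```python
def elect_cluster_heads(neighbors, weights):
    """Simulate a weight-based election: nodes decide in decreasing
    weight order; a node with no cluster-head neighbor becomes a head,
    otherwise it joins its heaviest neighboring head.
    `neighbors` maps node id -> set of one-hop neighbor ids."""
    heads, membership = set(), {}
    for node in sorted(neighbors, key=lambda n: (-weights[n], n)):
        head_nbrs = [nb for nb in neighbors[node] if nb in heads]
        if head_nbrs:
            membership[node] = max(head_nbrs, key=lambda n: weights[n])
        else:
            heads.add(node)        # no head within one hop: become one
            membership[node] = node
    return heads, membership

# A 4-node topology: node 2 has the largest weight; node 4 only sees node 3.
nbrs = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
w = {1: 5, 2: 9, 3: 7, 4: 2}
heads, members = elect_cluster_heads(nbrs, w)
print(heads, members)
```

Every ordinary node ends up adjacent to its cluster head, so the members of a cluster are at most two hops apart, matching the property above.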

4.4 KEY UPDATING UPON CLUSTERING CHEATER NODES:

Through the MCBP (minimum cost blocking problems) formulation, the nodes in the network that have been attacked by the opponent are identified. Before sending a packet, a node checks whether its neighboring nodes have been attacked by a cheater; if any of them have, the node must choose another path to send the packet. Even though detecting a compromised node does not change the cheater's behavior, the cheater still holds the topological information and packet details. We therefore introduce key update algorithms, applied after clustering the misbehaving nodes [10] apart from the authorized ones in the wireless mesh network. In these key update algorithms we reconstruct the group key, including a key-pieces algorithm.

i. Key Update:

The following are the conditions under which the group key needs to be updated:
(1) A new mesh router connects to the backbone network.
(2) An existing mesh router leaves the backbone network.
(3) A cheater is detected in the network before packets are sent by the source; the presence of any attacker in the network is thereby indicated.

To prevent a mobile attacker from breaking t key pieces, every key piece should be updated within a defined cycle T; only after at least t key pieces are obtained in the same cycle can the secret be reconstructed. A key update involves the following four steps:
(1) The offline CA (administration center) is activated in the network.
(2) The CA constructs a new group key SK and selects a new polynomial f(x). The new key pieces (di, SKi) are calculated and delivered to n selected mesh routers. Then the CA disconnects itself from the network and remains offline.
(3) A mesh router requests (t-1) key pieces from other mesh routers.
(4) After t key pieces are collected, the mesh router reconstructs the new group key SK, with which cheater detection and identification can be carried out as described.
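The key-piece scheme in steps (1)-(4) matches (t, n) threshold secret sharing: the CA hides SK as f(0) of a random degree-(t-1) polynomial and issues one point (di, SKi) per router, and any t points recover SK by Lagrange interpolation. The prime modulus and parameter values below are illustrative assumptions, not values from the paper.

```python
import random

P = 2_147_483_647  # public prime modulus (illustrative choice)

def make_key_pieces(sk, n, t):
    """CA side: hide the group key sk as f(0) of a random degree-(t-1)
    polynomial over GF(P) and issue one point (d_i, SK_i) per router."""
    coeffs = [sk] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct_sk(pieces):
    """Router side: Lagrange interpolation at x = 0 over any t pieces."""
    sk = 0
    for xi, yi in pieces:
        num = den = 1
        for xj, _ in pieces:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        sk = (sk + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod prime P
    return sk

pieces = make_key_pieces(sk=123456789, n=5, t=3)
assert reconstruct_sk(pieces[:3]) == 123456789   # any t pieces suffice
assert reconstruct_sk(pieces[2:]) == 123456789
```

Fewer than t pieces reveal nothing about SK, which is why a mobile attacker must gather t pieces within a single cycle T to break the key.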

5. CONCLUSION:

This paper concentrates on avoiding the opponent in the network by identifying the misbehaving nodes within the mesh network and clustering the misbehaving nodes and the authenticated ones separately. A finite state machine observes the states of the adversary nodes in the network; from these observations we determine how each node behaves by comparing the authorized and opponent nodes. While delivering a packet from the source to the destination, we secure the packet with cryptographic techniques by providing a key with the packet. Introducing group key reconstruction algorithms improves packet delivery after clustering the adversary nodes out of the wireless mesh network. In further work, simulations can be run on these group key reconstruction algorithms and the earlier algorithms, and additional clustering techniques may be introduced.

6. REFERENCES:

[1] S. Mueller, R. Tsang, and D. Ghosal, "Multipath routing in mobile ad hoc networks: Issues and challenges," in Performance Tools and Applications to Networked Systems, vol. 2965 of LNCS. Springer-Verlag, 2004, pp. 209-234.
[2] Pavan Kumar T, Ramesh Babu B, Rajasekhar Rao K, and Dinesh Gopalni, "Cluster based routing protocol for cognitive radio wireless mesh networks."
[3] Ann Lee and Paul A. S. Ward, "A study of routing algorithms in wireless mesh networks."
[4] Nikos Dimokas, Dimitrios Katsaros, and Yannis Manolopoulos, "Node clustering in wireless sensor networks by considering structural characteristics of the network graph."
[5] Jaydip Sen, "Security and privacy issues in wireless mesh networks," Innovation Labs.
[6] "A length-flexible threshold cryptosystem with applications," in Proceedings of the 8th Australasian Conference on Information Security and Privacy, ser. ACISP'03. Berlin, Heidelberg: Springer-Verlag, 2003, pp. 350-364.
[7] L. Ertaul and N. Chavan, "Security of ad hoc networks and threshold cryptography," in Wireless Networks, Communications and Mobile Computing, 2005 International Conference on, vol. 1, June 2005, pp. 69-74.
[8] M. A. Moustafa, M. A. Youssef, and M. N. El-Derini, "MSR: A multipath secure reliable routing protocol for WSNs," in Computer Systems and Applications (AICCSA), 2011 9th IEEE/ACS International Conference on, December 2011, pp. 54-59.
[9] D. Ganesan, R. Govindan, S. Shenker, and D. Estrin, "Highly-resilient, energy-efficient multipath routing in wireless sensor networks," SIGMOBILE Mob. Comput. Commun. Rev., vol. 5, pp. 11-25, October 2001.
[10] J. R. Douceur, "The Sybil attack," in Peer-to-Peer Systems, First International Workshop, IPTPS 2002, Cambridge, MA, USA, March 7-8, 2002, Revised Papers. Springer, 2002, pp. 251-260.


