


Hindustan Journal
A JOURNAL OF HINDUSTAN INSTITUTE OF TECHNOLOGY & SCIENCE, CHENNAI, INDIA

Vol. 6, 2013


© 2013. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying or otherwise, without the prior permission of the publishers. The responsibility for the information, opinions, and facts reported in these papers rests exclusively with the authors.

PUBLISHED BY

Hindustan Group of Institutions
40, GST Road, St. Thomas Mount, Chennai – 600 016, Tamil Nadu, India.

PATRONS
Dr. ANAND JACOB VERGHESE
Mr. ASHOK VERGHESE

EDITORIAL TEAM
Chief Editor: Dr. R. DEVANATHAN
Associate Editors: Ms. P. RANJANA & Ms. AL. VALLIKANNU
English Editor: Dr. C. INDIRA

PRINTED BY

ARVIND ASSOCIATES, Chennai.

REVIEW PROCEDURE

Each manuscript is blind reviewed by subject specialists and by an English Editor.


PANEL OF ADVISORS

Dr. BVSSS PRASAD, Professor of Mechanical Engineering, IIT Madras
Dr. S. SHANMUGAVEL, Professor, Department of Electronics and Communication Engineering, Anna University, Chennai
Dr. A. ALPHONES, Associate Professor, Division of Communication Engineering, School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore
Dr. P. RAMESHAN, Director & Professor (Strategic Management), IIM Rohtak
Dr. G.L. DUTTA, Chancellor, K.L. University, Vijayawada
Dr. HARSHA SIRISENA, Emeritus Professor, Electrical & Computer Engineering, University of Canterbury, Christchurch, New Zealand
Dr. LAKMI JAIN, Professor of Knowledge Based Engineering, Founding Director of the KES Centre, Electrical and Information Engineering, University of South Australia, Adelaide
Dr. PAUL APPASAMY, Honorary Professor, Madras School of Economics, Chennai
Dr. N. GANAPATHI SUBRAMANIAM, Professor, Quantum-Functional Semiconductor Research Center, Dongguk University, Republic of Korea

PANEL OF REVIEWERS

APPLIED SCIENCES: Dr. C. Indira, Dr. K. Nithyanandam, Dr. I. Sasirekha
BUILDING SCIENCES: Dr. V. Subbiah, Dr. R. Angeline Prabhavathy, Dr. Ravikumar Bhargava, Dr. Jessy Rooby, Dr. P.S. Joanna, Dr. Sheeba Chander
COMPUTING SCIENCES: Dr. Anitha S. Pillai, Dr. Rajeswari Mukesh, Dr. E.R. Naganathan, Dr. S. Nagarajan, Ms. S. Lakshmi Sridevi, Ms. S. Vijayalakshmi, Ms. P. Ranjana
ELECTRICAL SCIENCES: Dr. R. Devanathan, Dr. M.J.S. Rangachar, Dr. A.K. Parvathy, Dr. P.M. Rubesh Anand
MECHANICAL SCIENCES: Dr. D.G. Roy Chowdhury, Dr. G. Ravikumar Solomon, Dr. T. Jeyapovan, Ms. Manjula Pramod, Dr. B. Venkataraman, Dr. D. Dinakaran, Dr. Hyacinth J. Kennady, Dr. A. Anitha


Scanning the Issue

The current issue of Hindustan Journal provides articles of varied interest to readers.

In the area of Building Sciences, the paper by Karuppiah and Angeline Prabhavathy discusses the prospect of shear strengthening of reinforced concrete beams using carbon fiber reinforced polymer. Nagarajan and Ravi K. Bhargava analyse the role of trees and plants in hospital premises in improving the well-being of recuperating patients. Thulasi Gopal provides an analysis of the design of an Integrated Silk Park at Kanchipuram to bring back its lost glory. Karthigeyan argues the case for the speedy implementation of high speed rail links in India, citing the success story of high speed trains in China.

Under the section on Computing Sciences, Kodhai, Bharathi and Balathiripurasundari propose a filtering scheme for wireless sensor networks to address bogus reports, false report injection attacks and denial of services. Thiyagarajan, Rasika, Sivasankari and Sophana Jennifer propose an artificial neural network based anomaly detection technique to detect changes in medical readings in a patient monitoring system. Sree Vidhya proposes a new fuzzy clustering algorithm which can efficiently handle outliers as well as natural data. Deeptha and Rajeswari Mukesh propose a genetic algorithm based selection model to improve the quality of service performance in the context of web services development.

Under the section on Electrical Sciences, the paper by Priya and Seshasayanan proposes a method to improve the efficiency of impulse noise detection techniques in images. Prakash and Kumaraguru Diderot explain and review the commonly used cyclic redundancy checking algorithm for verifying data integrity. Helen and Arivazhagan propose the use of the temporally ordered routing algorithm along with medium access control to overcome bandwidth limitation.

Under the section on Mechanical Sciences, Jeya Pratha and Mahendran evaluate the evaporative heat transfer characteristics of a refrigerant mixture using computational fluid dynamics. Ravikumar and Saravanan describe the design and fabrication of a chilling system to reach a very low temperature to meet the requirements of specific applications. Viswanathan, Sengottuvel and Arun review the application of electrical discharge machining for the machining of hard materials.

Under Education and Library Sciences, Aby Sam and Akkara Sherine eloquently discuss the role of community colleges in nation building and describe a success story to drive home their point. Bhaskaran Nair argues passionately the case for an integrated professional programme on teaching English as a second language. Boopalan, Nithyanandam and Sasirekha gaze at the crystal ball and wonder about the future role of the librarian in an information era.

Finally, we conclude the issue with a list of forthcoming conferences for the benefit of our readers.

Dr. R. DEVANATHAN
Chief Editor


Contents

BUILDING SCIENCES

Shear Strengthening of RC Beam Using Carbon Fiber Reinforced Polymer Sheet ... 1
PL. Karuppiah and R. Angeline Prabhavathy

A Qualitative Research on the Role of Landscape Architecture in and around Hospital Premises as an Aid to Medical Treatment in Chennai ... 7
R. V. Nagarajan and Ravi K. Bhargava

A Research on Nuances of Silk Weaving and Designing a Handloom Hub at Kanchipuram ... 15
Ar. Thulasi Gopal

A Case for the Development of High Speed Rail Link in India ... 21
D. Karthigeyan

COMPUTING SCIENCES

HMAC Filtering Scheme for Data Reporting in Wireless Sensor Network ... 26
E. Kodhai, P. Bharathi and D. Balathiripurasundari

An Efficient Neural Network Technique to Detect Collective Anomalies in E-Medicine ... 36
G. Thiyagarajan, C.M. Rasika, B. Sivasankari and S. Sophana Jennifer

Deriving Intelligence from Data through Text Mining ... 42
C.T. Sree Vidhya

Web Service Assortment through Genetic Algorithm and XML ... 50
Deeptha R and Rajeswari Mukesh

ELECTRICAL SCIENCES

Improving the Efficiency of Impulse Noise Estimation ... 55
S.V. Priya and R. Seshasayanan

Review of Cyclic Redundancy Checking Algorithm ... 61
Prakash V R and Kumaraguru Diderot P.

Optimization of Temporally Ordered Routing Algorithm (TORA) in Ad-Hoc Network ... 67
D. Helen and D. Arivazhagan

MECHANICAL SCIENCES

Evaluation of Evaporative Heat Transfer Characteristics of CO2/Propane Refrigerant Mixtures in a Smooth Horizontal Tube using CFD ... 71
S. Jeya Pratha and S. Mahendran

Design and Fabrication of Ultimate Chilling System ... 78
T.S. Ravikumar and S. Saravanan

Review of Electrical Discharge Machining Process ... 83
K. Viswanathan, P. Sengottuvel and J. Arun

EDUCATION AND LIBRARY SCIENCES

Community Colleges to Empower the Youth to Transcend Social Barriers ... 88
Aby Sam and Akkara Sherine

Continuous Professional Development (CPD): A Proposal for an Integrated Programme in Teaching English as a Second Language (TESL) ... 97
P Bhaskaran Nair

Librarianship in Digital Era ... 101
E. Boopalan, K. Nithyanandam and I. Sasirekha

FORTHCOMING CONFERENCES ... 106



Shear Strengthening of RC Beam Using Carbon Fiber Reinforced Polymer Sheet

PL. Karuppiah and R. Angeline Prabhavathy

Abstract — The technique of strengthening reinforced concrete beams with externally bonded Carbon Fiber Reinforced Polymer (CFRP) has been successfully applied in civil engineering. This paper discusses the effect of shear strengthening of RC beams on the stress distribution, initial crack, crack propagation and ultimate strength. The experimental programme includes the testing of five simply supported reinforced concrete beams, of which four specimens are cast and bonded with CFRP while the remaining one, without CFRP, is considered the control beam. The CFRP epoxy bonded specimens are those with full side wrap (FSW), one side U wrap at shear (SUWS), vertical wrap stirrups (VWS) and inclined wrap stirrups (IWS). A mix design of M30 concrete is adopted and the mix proportion is arrived at; based on this mix proportion, the specimens are cast. The deflection, shear failure, cracking and ultimate load for the rectangular beams bonded with CFRP are investigated. The experiments are conducted to predict the critical load, cracks and increase in strength. It is concluded that in beams bonded with one side U wrap at shear (SUWS), there is a delay in the formation of the initial crack and the ductility ratio is higher, which is desirable in earthquake prone areas. The general and regional behaviour of concrete beams with bonded CFRP is studied with the help of strain gauges. The appearance of the first crack and the crack propagation in the structure up to failure are monitored and discussed for the control and the strengthened beams.

Index terms — CFRP wrap, U-wrap, Carbon fiber.

PL. Karuppiah and R. Angeline Prabhavathy are in School of Building Sciences, Hindustan University, Chennai, India (e-mail: plkaruppiah@hindustanuniv.ac.in, deanbs@hindustanuniv.ac.in).

I.  Introduction
Carbon fiber reinforced polymer composites offer unique advantages in many applications where conventional materials cannot provide a satisfactory service life. Carbon fiber reinforced polymer (CFRP) is a very strong and light fiber reinforced polymer containing carbon fibers. The polymer most often used is epoxy, but other polymers such as polyester, vinyl ester or nylon are sometimes used. The composite may contain other fibers, such as Kevlar, aluminum or glass fibers, as well as carbon fibers. The use of CFRP is advantageous because it is easier to maintain a relatively uniform epoxy thickness throughout the bonding length. By using a CFRP wrap, the shear strength and stiffness increase substantially, reducing shear cracking. This paper provides the results of an experimental investigation on using CFRP sheets to prevent local cracks around the shear region in reinforced concrete beams.

II.  Literature Review Norris et al. (1997) investigated the shear and flexural strengthening of RC beam with carbon fiber sheets. The CFRP sheets were epoxy bonded to the tension face and web of concrete beams to enhance their flexural and shear strengths. When the CFRP sheets were placed perpendicular to cracks in the beam, a large increase in stiffness and strength was observed and there was no difference in the behavior between the pre-cracked beams and the un-cracked ones at the ultimate level. It was concluded that CFRP (carbon fiber reinforced plastic) sheets increased the strength and stiffness of existing concrete beams when bonded to the web and tension face.



Chaallal et al. (1998) studied the shear strengthening of RC beams using externally bonded side CFRP sheets. It was concluded that diagonal side CFRP (carbon fiber reinforced plastic) strips outperformed vertical side strips for shear strengthening in terms of crack propagation, stiffness and shear strength. Alex Li et al. (2001) investigated the shear strengthening of RC beams with externally bonded CFRP sheets. The results of the tests performed in the study indicated that stiffness increased with increasing area of the CFRP sheet at the flanks, and the strain gauge measurements showed that strengthening the entire lateral faces of the beam was not necessary. For the strengthened beam, the ultimate strength showed a significant increase when compared with the normal beam. Spadea et al. (2001) studied the strength and ductility of RC beams repaired with bonded CFRP laminates. The results showed that a significant increase in strength was obtained by strengthening with bonded CFRP laminates. Carlo Pellegrino et al. (2002) investigated the shear strengthening of reinforced concrete beams using fiber reinforced polymer. Except for the control tests, all the tests were done on beams with side-bonded CFRP sheets. A comparison between the experimental and the theoretical values was made, and it was found that the shear capacity increment is due to the carbon fiber reinforced polymer. Tavakkolizadeh et al. (2003) investigated the strengthening of steel-concrete composite girders using carbon fiber reinforced polymer sheets. The results indicated that the load-carrying capacity of a steel-concrete composite girder improved significantly; the ultimate load-carrying capacities of the girders increased by 44, 51 and 76% for 1, 3 and 5 layers respectively. Kesse et al. (2007) investigated the experimental behaviour of reinforced concrete beams strengthened with pre-stressed CFRP shear straps. It was concluded that the pre-stressed CFRP strap strengthening system showed good results and is an effective means of significantly increasing the shear capacity of existing concrete structures. From the review of literature, it is found that little work has been done on shear strengthening of RC beams with different types of CFRP wraps. Therefore, the shear strengthening of RC beams with CFRP wraps is discussed in this paper.

III.  Experimental Program
In the experimental program of this research, tests are conducted on reinforced concrete beams with external bonding of CFRP sheets in the shear zone. The beams are tested under two-point loading to investigate their structural behaviour. The objective of this experimental investigation is to determine the
●● Structural behaviour of RC beams;
●● Shear strength of RC beams;
●● Shear failure of RC beams; and
●● Shear strengthening of RC beams using CFRP sheets.
Experimental investigations always show the real behaviour of the structure, an element or a joint. Five rectangular RC beams are cast and tested under two-point loading. Out of the five beams, one is a control beam. The CFRP epoxy bonded specimens are those with full side wrap, one side U wrap at shear, vertical wrap stirrups and inclined wrap stirrups. The following are the dimensions of the beam.

A.  Beam Dimension Details
Size: 2000 x 150 x 250 mm
Effective cover: 20 mm
Grade of concrete: M30

B.  Type of Material
Sheet: Carbon fibre reinforced polymer
Glue for bonding: Nitowrap 30 (Base), Nitowrap 410 Hardener, Nitowrap 410 Base.

IV.  Specimen Details
Tests are carried out on five reinforced concrete beam specimens, of which four are strengthened for shear capacity using externally bonded CFRP wraps. The beams, with a 150 x 250 mm cross section and a 2000 mm clear span, are simply supported and subjected to two concentrated static loads. Steel stirrups of 8 mm diameter are placed at 160 mm spacing along the beam length for all beams. Fig. 1 shows the test setup and Fig. 2 shows the setup of vertical wrap stirrups. Table 1 shows the details of the specimens and reinforcement.



A.  Properties of Nitowrap
Tables 2 to 4 show the properties of Nitowrap CF, Nitowrap 30 (primer) and Nitowrap 410 (saturant) respectively.

Table 2. Nitowrap CF
Fibre orientation: Unidirectional
Weight of fibre: 200 g/m²
Density of fibre: 1.80 g/cc
Fibre thickness: 0.30 mm
Ultimate elongation: 1.5%
Tensile strength: 3500 N/mm²
Tensile modulus: 285 × 10³ N/mm²

Table 3. Nitowrap 30, Primer
Colour: Pale yellow to amber
Application temperature: 15°C - 40°C
Viscosity: Thixotropic
Density: 1.25 - 1.26 g/cc
Pot life: 2 hours at 30°C
Cure time: 5 days at 30°C

Table 4. Nitowrap 410, Saturant
Density: 1.14 g/cc
Pot life: 25 min. at 27°C
Full cure: 7 days

Fig. 1. Test setup

Fig. 2. Setup of vertical wrap stirrups

Table 1. Details of specimen and reinforcement
Details of beam (type of beam): Control beam (CB), Full side wrap (FSW), Side U wrap at shear (SUWS), Vertical wrap stirrups (VWS), Inclined wrap stirrups (IWS).
Testing of beam: 28 days, for all beams.
Reinforcement in beam (common to all beams): longitudinal 2-10# @ top and 2-12# @ bottom; stirrups 8 mm # @ 160 mm C/C.

V.  Material Properties
The concrete used in the experimental program is M30, and steel with a nominal yield strength of 415 N/mm² is used as the longitudinal reinforcement.

B.  Surface Preparation
It is ensured that concrete surfaces are free from oil residues, demoulding agents, curing compounds, grout holes and protrusions. Structural damage is repaired by epoxy grouting / appropriate mortar from the Renderoc range. All depressions, imperfections etc. are repaired using Nitocote VF / Nitomortar FC epoxy putty. The base and hardener are thoroughly mixed in a container for 3 minutes. Mechanical mixing is done using a heavy-duty slow speed (300-500 rpm) drill fitted with a mixing paddle. The mixed Nitowrap 30 epoxy primer is applied over the prepared and cleaned surface with a brush and allowed to dry for about 24 hours before application of the saturant. The mixed Nitowrap 410 saturant is applied over the tack free primer.



VI.  Results and Discussion
Five simply supported reinforced concrete beam specimens are tested, which include one control beam and four CFRP epoxy bonded specimens with full side wrap (FSW), one side U wrap at shear (SUWS), vertical wrap stirrups (VWS) and inclined wrap stirrups (IWS). The load deflection behaviour, first crack load, final crack load and maximum deflection are studied.

A.  Load – Deflection Behaviour
Table 5 shows the comparison of ultimate load and maximum deflection.

Table 5. Comparison of Ultimate Load and Maximum Deflection
Sl. No. | Specimen | First Crack Load (kN) | Ultimate Load (kN) | Maximum Deflection (mm)
1       | CB       | 33.5                  | 123.9              | 32.6
2       | FSW      | 51.8                  | 158.8              | 14.3
3       | SUWS     | 41.9                  | 122.5              | 32.4
4       | IWS      | 22.6                  | 122.4              | 22.4
5       | VWS      | 54.0                  | 134.8              | 26.8

Fig. 5. Bar chart of maximum deflection

The comparison of initial crack, final crack and deflection of the various specimens is shown in Figs. 3 to 5. From Fig. 3, it can be seen that the first crack is delayed in the case of the FSW and VWS beams. Fig. 4 shows that the final crack is delayed only in the case of the FSW beam. Fig. 5 shows that the deflection is minimum in the case of the FSW beam. Fig. 6 shows the load vs. deflection behaviour of the various beam specimens.

Fig. 3. Bar chart of first crack load

Fig. 4. Bar chart of final crack

Fig. 6. Load vs. Deflection Behaviour of All Beam Specimens

From the load – deflection behaviour, it can be seen that the load carrying capacity is maximum for the FSW beam, but brittle failure occurs. In the SUWS beam, the initial crack occurs at 41.9 kN, which is 25% higher than that of the control beam. The ductility ratio is also higher in the SUWS beam, which is desirable in earthquake prone areas.
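As a quick arithmetic check of the 25% figure quoted above, the following minimal Python sketch (values copied from Table 5; the script is illustrative and not part of the original study) computes each specimen's gain over the control beam:

    # First crack and ultimate loads (kN), copied from Table 5; CB is the control beam.
    first_crack = {"CB": 33.5, "FSW": 51.8, "SUWS": 41.9, "IWS": 22.6, "VWS": 54.0}
    ultimate = {"CB": 123.9, "FSW": 158.8, "SUWS": 122.5, "IWS": 122.4, "VWS": 134.8}

    def gain_over_control(value, control):
        """Percentage change relative to the control beam."""
        return (value - control) / control * 100.0

    for beam in ("FSW", "SUWS", "IWS", "VWS"):
        fc = gain_over_control(first_crack[beam], first_crack["CB"])
        ul = gain_over_control(ultimate[beam], ultimate["CB"])
        print(f"{beam}: first crack {fc:+.1f}%, ultimate load {ul:+.1f}%")

    # SUWS first crack: (41.9 - 33.5) / 33.5 = +25.1%, matching the 25% quoted above.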

B.  Failure Pattern
Fig. 7 shows the cracking pattern of the control beam. The initial crack occurs at 33.5 kN and the final crack at 123.9 kN. The ultimate load is 123.9 kN.



Fig. 7. Cracking pattern of Control beam

Fig. 8 shows the cracking pattern of the FSW specimen. The initial crack occurs at 51.1 kN and the final crack at 157.5 kN. The ultimate load is 158.8 kN. The total CFRP covered area is 1400 mm (length) by 170 mm (height).

Fig. 10. Cracking pattern of IWS specimen

Fig. 11 shows the cracking pattern of the VWS specimen. The initial crack occurs at 54 kN and the final crack at 129 kN. The ultimate load is 129 kN. Vertical CFRP stirrups 100 mm wide are wrapped at 90°.

Fig. 8. Cracking pattern of FSW specimen

Fig. 9 shows the cracking pattern of the SUWS specimen. The initial crack occurs at 41.9 kN and the final crack at 122.6 kN. The ultimate load is 122.6 kN. CFRP is wrapped in the shear area as a U section, with a width of 250 mm.

Fig. 11. Crack pattern of VWS specimen

Debonding of CFRP wraps occurred after the initial crack appeared. Fig. 12 to Fig. 15 show the debonding of CFRP.

Fig. 9. Cracking pattern of SUWS specimen

Fig. 10 shows the cracking pattern of the IWS specimen. The initial crack occurs at 22.6 kN and the final crack at 123.8 kN. The ultimate load is 123.8 kN. Inclined CFRP stirrups are wrapped at an angle of 60° with a width of 60 mm.

Fig. 12. Debonding of FSW specimen



Fig. 13. Debonding of SUWS specimen

Fig. 14. Debonding of IWS specimen

Fig. 15. Debonding of VWS specimen

VII.  Conclusion
Tests were performed on externally applied epoxy-bonded CFRP specimens. Based on the test results, the following conclusions are drawn.
●● Sudden failure of the FSW beam occurred at the ultimate load.
●● Compared to all other specimens, the deflection of the FSW specimen is less and the load bearing capacity is more. However, brittle failure occurs.
●● In the SUWS beam, the initial crack occurs at 41.9 kN, which is 25% higher than that of the control beam. The ductility ratio is also higher in the SUWS beam, which is desirable in earthquake prone areas.

References

[1] Tom Norris et al. (1997), "Shear and Flexural Strengthening of RC Beams with Carbon Fiber Sheets", Journal of Structural Engineering, 123, 903-911.
[2] O. Chaallal et al. (1998), "Shear Strengthening of RC Beams by Externally Bonded Side CFRP Strips", Journal of Composites for Construction, 2, 111-113.
[3] Alex Li et al. (2001), "Shear Strengthening of RC Beams with Externally Bonded CFRP Sheets", Journal of Structural Engineering, 127, 374-380.
[4] G. Spadea et al. (2001), "Strength and Ductility of RC Beams Repaired with Bonded CFRP Laminates", Journal of Bridge Engineering, 6, 349-355.
[5] Carlo Pellegrino et al. (2002), "Fiber Reinforced Polymer Shear Strengthening of Reinforced Concrete Beams with Transverse Steel Reinforcement", Journal of Composites for Construction, 6, 104-111.
[6] M. Tavakkolizadeh et al. (2003), "Strengthening of Steel-Concrete Composite Girders Using Carbon Fiber Reinforced Polymer Sheets", Journal of Structural Engineering, 129, 30-40.
[7] Gyamera Kesse et al. (2007), "Experimental Behavior of Reinforced Concrete Beams Strengthened with Prestressed CFRP Shear Straps", Journal of Composites for Construction, 11, 375-383.



A Qualitative Research on the Role of Landscape Architecture in and around Hospital Premises as an Aid to Medical Treatment in Chennai

R. V. Nagarajan and Ravi K. Bhargava

Abstract — The milieu of a hospital ought to be healthy and hygienic for the patients to recuperate from their illness. The role of trees and plants in hospital premises is considered a dynamic parameter in the creation of hospital quality. This paper attempts to discern the ratio of the minimum land area required for the medicinal landscape to the area of the hospital units. The question of how to work out the minimum quantity of trees required for a hospital landscape is the prime aim of this research. What aspects (air purification, killing bacteria, noise reduction, etc.) are to be considered in the selection of trees is the next level of the research. Finally, aided by the statistical results of a survey conducted in hospitals, this research narrows down to the ratio (x:y) for a typical hospital premises, where ‘x’ is the minimum area required for ‘n’ occupants (patients, non-patients, hospital staff, etc.) and ‘y’ is the minimum open space required for the medicinal landscape to be executed for a healthy hospital.

Index terms — Landscape, Hospital, Treatment.

I.  Introduction “Research gathered over recent years has highlighted the countless benefits to people, wildlife and the environment that come from planting trees and creating new woodland habitat. It is obvious trees are good things,” says Clive Anderson. R.V. Nagarajan and Ravi K. Bhargava are in School of Architecture, Hindustan University, Chennai, India, (e-mail: rvnagarajan@hindustanuniv.ac.in)

The belief that plants and gardens are beneficial for patients in healthcare environments is more than one thousand years old, and appears prominently in Asian and Western cultures [1]. The awareness of the positive influence of the outdoor environment on patients' healing process has long been present in hospital architecture. The term healing garden applies to gardens that promote recuperation from illness. In this context, 'healing' does not necessarily refer to curing, but to the overall improvement of well-being. Integration and unity of hospital buildings and their surrounding outdoor spaces contribute to the creation of the hospital as a 'small city within a city', with its own specific patterns of use [2].

II.  Characteristics of Plants
Plants possess the ability to raise pain tolerance in patients, enabling them to recuperate from their illness or surgery. This ability is found to be nil in the first of the following categories, and is higher in the third category than in the second [3]:
1. No plants
2. Foliage plants
3. Foliage + flowering plants
Patients in hospital rooms with plants and flowers have shown significantly more positive physiological responses, lower ratings of pain, anxiety and fatigue, and more positive feelings and higher satisfaction about their rooms than patients who are kept in



rooms without plants [4]. Findings of such research suggest that plants in a hospital environment could be a noninvasive, inexpensive, and effective complementary medicine for patients recovering from abdominal surgery. Researchers who have assessed the impact of nature and plants on human health have suggested that nature and plant experiences are positively associated with human physical [5], psychological [6], emotional [7], and cognitive health [8]. In addition, viewing nature and plants is linked to pain reduction, less need for analgesics, and faster recovery from surgery [9]. For many years, the importance of aesthetics to health was not experimentally proven as an additional quality of plants. Apart from the recuperation from illness, the aesthetics of plants is another important dimension which must be added to the ambience of a hospital for the further betterment of both the patients and the doctors. High quality nursing care includes the aesthetic dimension [10]. Aesthetics influences a person's feelings, both physical and psychological. Both aesthetic and nonaesthetic surroundings create an impression and affect a person consciously or unconsciously [11].

III.  Method to Calculate Green Areas for Any Site
According to the Green Guide for Health Care, the required green area is calculated by the following formula:

Natural Habitat Area = (Site Area x Site Size Factor) / Floor Space Ratio,

where Floor Space Ratio = Gross Constructed Area (including all service spaces and excluding parking areas) / Site Area, and Site Size Factor = (1/√Site Area) x 10 (usually around 0.15) [12].

The main difference between the calculation of green areas for any site and for a hospital site is the nature of the people occupying it. The prime aim of this research is to find out how the ratio in the above formula, framed by the GGHC (Green Guide for Health Care), varies for a hospital site, particularly concentrating on the landscape features.
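As a worked illustration of the formula above, here is a minimal Python sketch; the site and construction figures in the example are assumed for illustration, not taken from the paper:

    import math

    def natural_habitat_area(site_area_sqm, gross_constructed_sqm):
        """GGHC rule: Natural Habitat Area = (Site Area x Site Size Factor) / Floor Space Ratio."""
        floor_space_ratio = gross_constructed_sqm / site_area_sqm  # constructed area excludes parking
        site_size_factor = (1.0 / math.sqrt(site_area_sqm)) * 10.0  # usually around 0.15
        return (site_area_sqm * site_size_factor) / floor_space_ratio

    # Assumed example: a 5,000 sq.m site carrying 10,000 sq.m of constructed floor space.
    # Site size factor = 10 / sqrt(5000) ~ 0.141; floor space ratio = 2.0; result ~ 353.6 sq.m.
    print(round(natural_habitat_area(5000.0, 10000.0), 1))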

IV.  Design Considerations for Hospital Landscaping
In an ideal case, the optimal distribution of the total site area of a hospital complex should be the following: 30% for the buildings, 15% for internal communication routes and parking, and 50% for vacant area (25-30% in the case of hospitals with a limited capacity for future growth), out of which 10% is reserved for recreational areas. In brief, they should be planned according to the following requirements: (1) to create opportunities for movement and exercise; (2) to offer a choice between social interaction and solitude; (3) to provide both direct and indirect contacts with nature and other positive distractions [13]. Several studies of non-patient groups (such as university students) as well as patients have consistently shown that simply looking at environments dominated by greenery, flowers, or water, as compared to built scenes lacking nature (rooms, buildings, towns), is significantly more effective in promoting recovery or restoration from stress. To promote the speed of postoperative recovery and to improve the quality of life during hospitalizations, it is important to provide patients with not only the best treatment possible, but also to remove such sources of stress and to counter them with positive distractions.

V.  Interior Plants
When plants were added to the interior space, the participants were more productive (12% quicker reaction time on the computer task) and less stressed (systolic blood pressure readings lowered by one to four units). Immediately after completing the task, participants in the room with plants present reported feeling more attentive (an increase of 0.5 on a self-reported scale from one to five) than people in the room with no plants [14]. Regardless of the physical air quality benefits, people generally have an affinity to being around plants. Many studies have proven a link between plants and their beneficial psychological effects on people, including increases in productivity and decreases in stress levels [15].

In 2006, several studies were published indicating that simply having three small potted plants can significantly reduce (by 50-75%) the total VOC (Volatile Organic Compound) levels in a real office of 30-50 m³ size [16]. The only condition was that the total VOC level needed to be above 100 ppb, a concentration level much lower than acceptable limits.

The National Aeronautics and Space Administration studies on indoor landscape plants and their role in



improving indoor air quality included reports on toxins common to the interior environment, specifically benzene, formaldehyde, and trichloroethylene [17]. The following list of plants typically used in the interior environment outlines the plants found to be most effective in air purification, based on the NASA studies [18].
1. Aechmea fasciata (Excellent for formaldehyde and xylene)
2. Aglaonema modestum (Excellent for benzene and toluene)
3. Aloe vera (Excellent for formaldehyde)
4. Chamaedorea Bamboo (Excellent for benzene and formaldehyde)
5. Chlorophytum elatum (Excellent for carbon monoxide and formaldehyde)
6. Chrysanthemum morifolium (Excellent for trichloroethylene, good for benzene and formaldehyde)
7. Dendrobium Orchid (Excellent for acetone, ammonia, chloroform, ethyl acetate, methyl alcohol, formaldehyde and xylene)
8. Dieffenbachia maculata (Good for formaldehyde)
9. Dracaena deremensis (Excellent for benzene and trichloroethylene, good for formaldehyde)
10. Dracaena marginata (Excellent for benzene, good for formaldehyde and trichloroethylene)
11. Dracaena Massangeana (Excellent for formaldehyde)

VI.  Surveillance in Hospitals in Chennai
As the aim of this research was conceptualized to calculate the ratio of the minimum open space required for landscape in a hospital to the built up space of the site, the research proceeded to organize a survey of the people who inhabit hospital premises. Surveys were carried out in three major hospitals in Chennai in the following categories: 1. a hospital in the populated / noisy zone of the city; 2. a hospital specialized for a single disease; and 3. a hospital located in the outskirts. The following hospitals in Chennai were selected for the survey: 1. Rajiv Gandhi Government Hospital, Central, Chennai; 2. Cancer Institute, Adyar, Chennai; and 3. Kamakshi Memorial Hospital, Velachery Road, Chennai.

VII.  Selection of People for Survey
The selection of the people for the survey was as per the requirement of the research. The people surveyed were categorized into the following four types: 1. with respect to occupation, 2. with respect to their age, 3. with respect to the time of survey, and 4. with respect to their gender.

Fig. 1. Occupation

Fig. 2. Age



Fig. 3. Time

Fig. 4. Gender

VIII.  Report on the Surveillance
The following are the statistical responses to the questionnaire prepared for the survey:

Fig. 5. Liking of parts of Hospital

Fig. 6. Sub-categories of Fig. 5 (panels a-e)

Fig. 7. Duration in Hospital



Fig. 8. Noise Level

Fig. 9. Smoke / Dust in premises

Fig. 10. Preferred surroundings

Fig. 11. Elements missing in Hospital

Fig. 12. Inside the building-1

Fig. 13. Inside the building-2



1. 57.5% of people feel comfortable in places where the following trees are planted: Azadirachta indica, Ficus religiosa, Ficus benghalensis, flowering trees and Pongamia pinnata.
2. 63.7% of people desperately want some system to enhance their breathing comfort, and 50% among them recommended plants inside the building.
3. As most of the previous research has shown, 45% of the people surveyed preferred flowering plants in their vicinity, and they expressed that they felt relaxed compared to the people who did not have flowering plants in their rooms.
4. Equally, 40% of people preferred an earth walkway and also a lawn in the open space of the premises.
5. 65% of people complained that the shedding of leaves from trees is more irritating than the problem of insects on them (32% complained of insects).
6. Among the people surveyed, 75% of patients, 65% of non-patients and 78% of hospital staff members prefer to rest under a tree during midday.
7. Age wise, 82% of people above 55 preferred a noiseless area to an active / noisy area.
8. Almost 95% of the women prefer to rest inside the building rather than under trees, on street benches or anywhere in open spaces.
9. Almost 88% of the men patients whose rooms did not have plants felt bored and wanted to move around, while the same feeling was felt by only 15% of the men patients whose rooms had plants.
10. Almost 90% of men and women patients of all age groups prefer to walk, either in the morning or in the evening, on a road which has trees rather than on a road which does not.

Fig. 14. Trees liked

Fig. 15. Trees recommended

Fig. 16. Trees disliked

IX.  Synthesis of the Survey
From the above report of the survey conducted in three hospitals in Chennai, the following syntheses are observed:

11. Area calculation of the first hospital: total area 61,336.0716 sq.m, with total open space 22,114.0452 sq.m.
12. Area calculation of the second hospital: total area 31,567.9558 sq.m, with total open space 16,423.8566 sq.m.
13. Area calculation of the third hospital: total area 12,437.2557 sq.m, with total open space 3,211.7854 sq.m.



14. The satisfaction level of the people staying in the premises, in terms of overall aspects, is as follows: 71.35%, 83.75% and 56.21% measured at the first, second and third hospitals respectively.
15. With the above satisfaction levels measured, the total built up area, total open space area and the site area of all three premises are multiplied by the percentages of the satisfaction level.
16. 71.35% of 22,114.0452 sq.m = x
17. 83.75% of 16,423.8566 sq.m = y
18. 56.21% of 3,211.7854 sq.m = z
19. The built up spaces of the three premises are considered as a, b and c respectively.

X.  Calculation of Ratio of Minimum Open Space for a Hospital
(x + y + z) / 3 = Xos, where x, y and z are the satisfaction-weighted open areas of the hospitals and Xos is the factor for open space.
(a + b + c) / 3 = Ybs, where a, b and c are the built up areas of the hospital buildings and Ybs is the factor for built up space.
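A minimal Python sketch of this averaging, using the open space figures and satisfaction levels from the synthesis above; since the built up areas a, b and c are not listed numerically in the paper, they are approximated here as site area minus open space, which is an assumption:

    # Figures from the synthesis (sq.m): total site area and open space of the
    # three surveyed hospitals, and the measured satisfaction levels.
    site_area = [61336.0716, 31567.9558, 12437.2557]
    open_space = [22114.0452, 16423.8566, 3211.7854]
    satisfaction = [0.7135, 0.8375, 0.5621]

    # x, y, z: satisfaction-weighted open areas.
    xyz = [o * s for o, s in zip(open_space, satisfaction)]
    x_os = sum(xyz) / 3  # Xos, the factor for open space

    # a, b, c approximated as site area minus open space (assumption, see above).
    abc = [t - o for t, o in zip(site_area, open_space)]
    y_bs = sum(abc) / 3  # Ybs, the factor for built up space

    print(f"Xos = {x_os:.1f} sq.m, Ybs = {y_bs:.1f} sq.m, Xos:Ybs = {x_os / y_bs:.3f}")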

XI.  Conclusion
The research concludes that Xos:Ybs is the ratio of minimum open space to the built up space of a hospital premises.

References

[1] Ulrich, R. S. and R. Parsons (1992), "Influences of passive experiences with plants on individual well-being and health", in D. Relf (Ed.), The Role of Horticulture in Human Well-being and Social Development, Portland, Timber Press, pp. 93-105.
[2] Dejana Nedučin, Milena Krklješ and Nađa Kurtović-Folić, "Hospital Outdoor Spaces - Therapeutic Benefits and Design Considerations", Architecture and Civil Engineering, Vol. 8, No. 3, 2010, pp. 293-305.
[3] S.-H. Park, R.H. Mattson and E. Kim (2011), "Pain Tolerance Effects of Ornamental Plants in a Simulated Hospital Patient Room", Department of Horticulture, Forestry and Recreation Resources, Kansas State University.
[4] Seong-Hyun Park and Richard H. Mattson (2009), "Effects of Flowering and Foliage Plants in Hospital Rooms on Patients Recovering from Abdominal Surgery", Department of Horticulture, Forestry and Recreation Resources, Kansas State University.
[5] Chang, C. and P. Chen (2005), "Human response to window views and indoor plants in the workplace", HortScience, 40: pp. 1354-1359.
[6] Kaplan, R. and S. Kaplan (1995), "The experience of nature: A psychological perspective", Ulrich's, Ann Arbor, MI.
[7] Adachi, M., C.L.E. Rode, and A.D. Kendle (2000), "Effects of floral and foliage displays on human emotions", HortTechnology, 10: pp. 59-63.
[8] Cimprich, B. (1993), "Development of an intervention to restore attention in cancer patients", Cancer Nurs., 16: pp. 83-92.
[9] Diette, G., E. Haponik, and H. Rubin (2003), "Distraction therapy with nature sights and sounds reduces pain during flexible bronchoscopy", Chest, 12: pp. 941-948.
[10] Synnøve Caspari (2006), "The aesthetic dimension in hospitals - an investigation into strategic plans", International Journal of Nursing Studies, 43, pp. 851-859.
[11] Ulrich, R. (1991), "Effects of interior design on wellness. Theory on recent scientific research", Journal of Health Care and Interior Design, 3.
[12] Green Guide for Health Care, Version 2.2, SS Credit 5.1, Site Development: Protect or Restore Open Space or Habitat, 2007, www.gghc.com, p. 6-23.
[13] Ulrich, R.S., Cooper-Marcus, C., Barnes, M. (Eds.) (1999), "Effects of Gardens on Health Outcomes: Theory and Research", in Healing Gardens: Therapeutic Benefits and Design Recommendations, John Wiley & Sons, New York, pp. 27-86.
[14] Virginia I. Lohr, Caroline H. Pearson-Mims, and Georgia K. Goodwin, "Interior Plants May Improve Worker Productivity and Reduce Stress in a Windowless Environment", Department of Horticulture and Landscape Architecture, Washington State University, Pullman, WA 99164-6414.
[15] Ryan Hum and Pearl Lai (2007), "Assessment of Biowalls: An Overview of Plant- and Microbial-based Indoor Air Purification System".
[16] Wood, R.A., Burchett, M.D., Alquezar, R., Orwell, R.L., Tarran, J. and F. Torpy (2006), "The potted-plant microcosm substantially reduces indoor air VOC pollution: I. office field-study", Water, Air, and Soil Pollution, 175, pp. 163-180.
[17] Prescod, A.W. (1992), "More indoor plants as air purifiers", Pappus, 11:4.
[18] United States Environmental Protection Agency (1991), "Sick building syndrome", Air and Radiation, Indoor Air Facts, 4.



A Research on Nuances of Silk Weaving and Designing a Handloom Hub at Kanchipuram

Ar. Thulasi Gopal

Abstract — The lost platform of the silk weaving industry in Kanchipuram has been identified in order to bring back the lost glory of original silk weaving techniques, processes and products through down-to-earth planning and design patterns. A particular community has been confined to these industries; the idea of a silk park with appropriate infrastructure is to create awareness among others to take up this profession. Deliberate research and extensive interaction with the weaving community has gone into evolving this design concept. The weaving community was widely studied on their everyday lifestyle, weaving activity, duration to complete each activity, spatial organization, proximity of spaces etc., in order to meet the requirements in the Silk Park. Apart from these, supporting activities like cocoon reeling and dyeing and their spaces were studied. The weavers' psychology, which results in the sari designs, creativity etc., was taken into account in giving a suitable design solution. Emotions related to the occupational spaces resulted in interior-exterior connectivity, to avoid solitude. Traditional Kanchipuram weavers' houses and their elements were studied to incorporate those features into the design. The challenge in the output was how all the versatile activities of silk weaving could be designed under one roof, bringing in wholesomeness through form, tone, style, texture and hue, and bringing unity, balance and continuity. The design created would provide the people involved with the comfortable living environment that they are longing for, and contribute to India's gross domestic product.

Index Terms — Silk Weaving, Handloom, Spatial Organization, Design, Interior-exterior connectivity.

Ar. Thulasi Gopal is in School of Architecture, Hindustan University, Chennai, India (e-mail: gopal.aarthi@gmail.com)

I.  Introduction
Tamil Nadu has a rich cultural history and legacy that spans several areas. All of these need to be preserved for posterity, as they remind the people of their enormity and feat. The state has world class brilliance to showcase, which needs to be nurtured and suitably promoted to support branding and economic outcomes. One such craft that needs to be reinstated from its declining trend is silk weaving. India is the second largest silk producer in the world, next to China, and a major sourcing base for international retail players. According to the Tamil epic 'Silapadikaram', silk handloom weaving is said to have existed at Kanchipuram since the second century AD. It is one of the traditional centers of silk weaving and handloom industries that is losing its identity. The Scheme for Integrated Textile Parks was approved by the Central Government of India to facilitate the setting up of textile parks with world class infrastructure and amenities. The Government of Tamil Nadu has proposed to bring a silk park to Kanchipuram. Seventy five acres of land allotted by the Government of Tamil Nadu for the purpose is located at Kilkathirpur village, Kanchipuram Taluk and District.

II.  Constraints
The silk and other textile industries are still community driven, i.e. a particular community is confined to these industries. The idea of the silk parks with appropriate infrastructure is to create awareness among many others to take up this profession. This in turn keeps the industry at the forefront of Indian economic development and increases the demand for Indian textiles in international markets.



The Sriperumbudur industrial area, situated 35 km from Kanchipuram, attracts people to work there due to the time flexibility, better income and less arduous work (when compared to weaving), suitable transport facilities, allowances etc. provided by companies such as Hyundai and Nokia.

III.  Objectives
The objectives of the project are
●● To design a prime handloom hub.
●● To re-establish the traditional and cultural value of ancient silk weaving, which is the prime occupation of the temple city and the surrounding villages and village hamlets of Kanchipuram.
●● To encourage the occupation by providing the workers with a better functioning environment and resources that would take the economy of the rural sector to a superior stature.
●● To bring back the lost platform for the weavers to market their products, avoid duplicate market players and also showcase the culture.

IV.  Methodology
The methodology proposed to be adopted is
●● Understanding the site surroundings and services.
●● Understanding the occupation and workplace.
●● Weavers' needs/opinions through questionnaires.
●● Comparison of history against recent happenings.
●● Techniques in the field, to choose the best for today's scenario.
●● Requirement framing in detail.
●● Case study: a comparative study of Ayangarkulam (weaving village) and Pillayarpalayam (weaving town), with analysis of the common and contrasting features and characteristics.
●● Formulating conceptual ideas.
●● Development of concepts into schemes and into the final design output.

V.  Scope and Nature of Activities in the Complex
The spaces planned on site are: administration and expo hall, research center and training, marketing area, warehouse, cocoon reeling, garment unit, canteen and hostel, dyeing unit (CETP), weaving cluster, residential cluster, central hub (OAT, health care, child care), restaurant, guest house, multipurpose area, temple along with the pond, and other services. The main focus in the design was given to the dyeing cluster, the weaving cluster and the residential cluster.

VI.  Challenges Faced
The challenges faced include
●● Bringing different activities into one complex.
●● Bringing wholesomeness to the design.
●● Creating buffer spaces between each block.
●● Proximity between all the spaces.
●● Connectivity and flow of functions.
●● Segregating the different residential, floating, working and shopping populations.
●● Meeting the workplace requirements.
●● Innovations for enhanced productivity of silk products.

VII.  Analysis along with Evolution of Design
Deliberate research and extensive interaction with the weaving community has gone into evolving this design concept. Their needs have been understood and approached accordingly. The site, on entry, will have the administrative blocks, followed by the marketing blocks with a research and testing center. This is to facilitate effective marketing of the products as well as to ensure their quality. Behind the marketing area there is an industry to produce woven silk garments. The demand for original silk sarees is decreasing day by day, and hence the requirement for weavers too is receding. To change this situation, silk can be used to produce various other useful garments, apart from sarees, in accordance with current trends in fashion. This will escalate the demand for woven silk garments, which will in turn increase the demand for weavers.



After a detailed discussion with the village weavers, it was found that they are not very keen on the idea of shifting to a new, alien location. Hence the design is done in such a way as to provide them with the most homely environment possible. The weaving looms are customized specifically to match the needs of the Kanchipuram weavers. The houses planned in the Silk Park are categorized into two main styles, as per the requirements of the weavers. After documentation, observation and analysis with the weaving community of Kanchipuram, it was understood that they are broadly segregated into two groups based on their economic needs. The houses have been designed in such a way as to cater to these needs. One of the concepts adopted by the Earth Institute at Auroville is CSEB - Compressed Stabilized Earth Blocks. The soil at the site was observed to be sandy clayey soil, one of the soil types used for making CSEB blocks for construction. This does not require any skilled labour, and hence can be a source of income to the local dwellers surrounding the site, who are not necessarily weavers. As mentioned, the site is located 7 km away from the original weaving society; hence the locals here too will have an opportunity to gain through employing the concept. A large water body, for example a typical pond, is created to facilitate water distribution to different areas on site through the tank, which is a focal feature of the site. The mud excavated to create these water bodies will be used for CSEB block making. Burnt bricks replaced by CSEB blocks provide a sustainable concept. There are farming areas around each housing sector, which will enable food production. This offers an additional source of income, as well as an alternate food source. There are green areas designed all around the site which act as buffer spaces, segregating the diversified functions involved in weaving a saree. The central focus of the site is a multipurpose area with a temple, a water body and commercial spaces, which will provide the platform necessary for the weavers to hold fairs. It is to break the monotonous weaving routine and to provide them with some relaxation. The fairs are also a means of interaction and communication with weaving communities from other districts and states. Thus holding fairs and expositions invites collaborative work from other communities, along with the exchange of various important ideas and tools, which

will not only improve the silk weaving techniques but also make the weavers aware of current trends in the market. All the silk saree shops can be brought under the silk society's supervision so that adulteration is minimized and originality is maintained. Training centers can be proposed, with Government certified courses on silk weaving, to attract the younger generation into this activity. Figs. 1, 2 and 3 describe the process, the design features and the concepts adopted, respectively, in regard to the proposal of the handloom hub at Kanchipuram.

VIII.  Conclusion
A combination of traditional and contemporary architecture is adopted, extending from the site planning level down to the weaving machine design customized for the weavers. Macro level to micro level planning is undertaken. Material from the site is used for construction, which can involve local dwellers who benefit apart from the main target, the weavers. Dyeing areas, which were earlier inside the Kanchipuram town causing pollution, will be shifted here, where the CETP is set up to solve the issue of pollution. Considering the hot humid climate, features like courtyards have been adopted to give natural day lighting and a stack effect, thus maintaining a suitable indoor environment. A better workspace is created, which will result in better productivity. Efficient usage of energy, water and other resources is ensured. Measures are taken to protect occupants' health and improve employee productivity. Maximum reduction in waste, pollution and environmental degradation is aimed at. CSEB blocks, which are green materials, are extensively used in the construction.



Fig. 1. Silken Archinomy - The process



Fig. 2. Silken Archinomy - Design Features



Fig. 3. Silken Archinomy - Concepts Adopted




A Case for the Development of High Speed Rail Link in India

D. Karthigeyan

Abstract — Indian Railways is an Indian state-owned enterprise, owned and operated by the Government of India through the Ministry of Railways. It is one of the world's largest railway networks, comprising 115,000 km (71,000 mi) of track over a route of 65,000 km (40,000 mi) and 7,500 stations. India is a country with more than 1.2 billion people, including 35 cities with more than 1 million people each as per Census 2011. Its urban population is increasing day by day, and the rail network forms the lifeline of the country, where the majority of the people are poor and cannot afford to travel by air. Under these circumstances, India, which is aiming to become a global super power by 2050, requires a high speed rail network similar to China's, which is the world's largest at more than 10,000 km. In this context, India needs a quality and affordable high speed rail network for its poor people, to connect its major metropolitan areas and to decongest the cities which are transforming into megalopolitan areas.

Index terms — High speed rail network, Bullet train, Transportation.

D. Karthigeyan is in School of Architecture, Hindustan University, Chennai, India (e-mail: dkarthikeyan@hindustanuniv.ac.in)

I.  Introduction
High-speed rail is a type of rail transport that operates significantly faster than traditional rail traffic, using an integrated system of specialized rolling stock and dedicated tracks. The first such system began operation in Japan in 1964 and was widely known as the bullet train. Even though India has one of the world's largest railway networks, it is yet to find itself a place in the list of countries which currently have a commercial high speed rail network. The average speed of trains in developed nations is around 200 kmph, whereas in India the maximum speed of any train hardly exceeds 150 kmph. Rajdhani and Shatabdi are among the fastest trains, running at nearly 120 kmph. On the other hand, India's neighbour China has built the world's largest high speed railway network, about 10,463 km long [2]. China also has the longest single track length, between Beijing and Guangzhou, at 2,298 km, and the world's fastest trains, running at 380 kmph. It is surprising that the high-speed railway network in China was developed in a short span of five years: the proposal for high speed trains came to the fore in 1990 in that country, and work started in 2007.

Fig. 1. High speed rail in China

In 2015, China will have 18,000 km of high speed rail. Just five years after China's high-speed rail system opened, it is carrying nearly twice as many passengers each month as the country's domestic airline industry. With traffic growing at 28 percent a year for the last several years, China's high-speed rail network will handle more passengers by early next year than the 54 million people a month who board domestic flights in the United States.



China’s high-speed rail system has emerged as an unexpected success story. Economists and transportation experts cite it as one reason for China’s continued economic growth when other emerging economies like India are faltering due to the global economic slowdown. Chinese workers are now more productive. The productivity gains occur when companies find themselves within a couple of hours’ of train ride of tens of millions of potential customers, employees and rivals. Companies are opening research and development centers in more glamorous cities like Beijing and Shenzhen with abundant supplies of young, highly educated workers, and having them take frequent day trips to factories in cities with lower wages and land costs, like Tianjin and Changsha. Businesses are also customizing their products more through frequent meetings with clients in other cities, part of a broader move up the ladder toward higher value-added products. Airlines in China have largely halted service on routes of less than 300 miles when high-speed rail links open. They have reduced service on routes of 300 to 470 miles. The double-digit annual wage increases give the Chinese enough disposable income that domestic airline traffic has still been growing 10 percent a year. Currently, China’s high-speed rail service costs significantly less than similar systems in developed countries, but is considerably more expensive than conventional rail service. For the 419 km trip from Beijing to Jinan, High Speed Rail costs US$30 and takes 1 hour 32 minutes, while a conventional train costs US$12 and takes about 6 hours. By comparison, the Acela train from Washington DC to New York City covering a slightly shorter distance of 370 km costs US$152–180 and takes 2 hour 50 minutes [3].

The Chinese government has a major plan for its high speed rail network: connecting it to the whole of Asia and the European continent, so that all its freight travel will happen through this network, which in turn will make China a global leader in trade and commerce. In this connection, the Chinese government even plans to build a high-speed rail line connecting its south-western city of Kunming to New Delhi and Lahore, part of a 17-country transcontinental rail project which is part of its pan-Asian high-speed rail link. After many years of negotiations with other Asian countries, China has finally reached agreements with several Central Asian countries and got the green signal for its ambitious pan-Asian high-speed rail link, which envisages connecting cities in China to Central Asia, Iran, Europe, Russia and Singapore.

Fig. 2. China's Pan-Asian high-speed rail link

II. High Speed Rail Network

There is no standard global definition of high-speed rail; however, certain parameters are unique to it:
● UIC (International Union of Railways) and EC Directive 96/58 define high-speed rail as systems of rolling stock and infrastructure which regularly operate at or above 250 km/h (155 mph) on new tracks, or 200 km/h (124 mph) on existing tracks. However, lower speeds can be required by local constraints.
● A definitive aspect of high-speed rail is the use of continuous welded rail, which reduces track vibrations and discrepancies between rail segments enough to allow trains to pass at speeds in excess of 200 km/h (124 mph).
● Depending on design speed, banking and the forces deemed acceptable to the passengers, the curve radius is above 4.5 kilometres (2.8 mi), and for lines capable of 350 km/h (217 mph) running, typically 7 to 9 kilometres (4.3 to 5.6 mi).

A. Parameters of High-Speed Travel:
● The frequency of service,
● Regular-interval timetables,
● A high level of comfort,



● A pricing structure adapted to the needs of customers,


● Complementarity with other forms of transport,


● More on-board and station services.

Fig. 2. China's Pan-Asian high-speed rail link

III. Indian Government Context

Fig. 3. Inside first class cabin of high speed train in France

In India, high-speed trains are often referred to as "bullet trains". One of the first proposals by the Government of India to introduce high-speed trains was mooted in the mid-1980s by the then Railway Minister. A high-speed rail line between Delhi and Kanpur via Agra was proposed. An internal study found the proposal unviable at that time due to the high cost of construction and the inability of travelling passengers to bear much higher fares than what was charged for normal trains. The Railways instead introduced Shatabdi trains, which ran at 130 km/h.

B. On the Eco-Friendly Atmosphere:
● Transport is responsible for 25% of the world's carbon dioxide (CO2) emissions, with 80-90% coming from cars and highway trucks and only 2% from rail.
● On high-speed railways the energy consumption per passenger-kilometre is three and a half times less than for a bus, five times less than for air and ten times less than for a private car.
● The social cost of noise, dust, carbon dioxide, nitric oxide and sulphur oxide emissions for high-speed rail is one-fourth that of road transport and one-sixth that of air.
● It requires the construction of an eight-lane highway to provide the same capacity as a double-track high-speed railway line [1].

Worldwide concerns over depleting fossil fuel reserves, climate change, overcrowded airports, delayed flights and congested roads have made high-speed rail technology the obvious alternative. High-speed rail entails much less land usage than motorways: a double-track rail line has more than thrice the passenger-carrying capacity of a six-lane highway while requiring less than half the land. Relative to its huge population India has little land to spare, and it would be too costly to build highways, so a high-speed rail network will be the better option to improve transportation efficiency and to conserve depleting resources.

High-speed rail is the best choice for distances of 500-700 km, where airlines cannot match it; below 200 km, road transport has an edge; beyond 1,000 km, the air option may be better.

Fig. 4. Potential high speed rail corridors in India

The Indian Ministry of Railways, in its white paper Vision 2020 submitted to Parliament in December 2009, envisages the implementation of regional high-speed rail projects to provide services at 250-350 km/h, and planning for corridors connecting commercial, tourist and pilgrimage hubs. Six corridors have already been identified and feasibility studies have been started:
1. Delhi-Chandigarh-Amritsar,
2. Pune-Mumbai-Ahmadabad,



3. Hyderabad-Dornakal-Vijayawada-Chennai,
4. Howrah-Haldia,
5. Chennai-Bangalore-Coimbatore-Ernakulam,
6. Delhi-Agra-Lucknow-Varanasi-Patna.

These high-speed rail corridors will be built as elevated corridors in keeping with the pattern of habitation and the constraint of land. Two new routes were later proposed by Indian Railways, namely
● Ahmadabad-Dwarka, via Rajkot and Jamnagar, and the other from Rajkot to Veraval via Junagadh [4].

A. Approach to High-Speed

Indian Railways' approach to high speed is incremental improvement on the existing conventional lines for up to 200 km/h, with a forward vision of speeds above 250 km/h on new tracks with state-of-the-art technology.

B. Upgrade Tracks for 160-200 km/h

The approach is to upgrade the dedicated passenger tracks with heavier rails and to build the tracks to a close-tolerance geometry fit for 160-200 km/h. High-speed tracks are to be maintained and inspected using automation to ensure the required track geometry. More frequent inspection is needed to ensure high confidence of safety at high speed.

C. Likely Initial Lines

In India, future trains with speeds of 250-350 km/h are envisaged to run on elevated corridors, to prevent trespassing by animals and people. This is an excellent way to isolate high-speed train tracks.

D. Project Execution

The cost of building high-speed rail tracks is about Rs 70 crore/km (US$15.6m/km), compared with Rs 6 crore/km for normal rail tracks.
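As a rough illustration of what these unit costs imply (a back-of-the-envelope figure, not an official estimate), track construction alone for a 500 km corridor works out to

500\ \text{km} \times \text{Rs}\ 70\ \text{crore/km} = \text{Rs}\ 35{,}000\ \text{crore} \approx \text{US}\$7.8\text{bn},

with land, stations, rolling stock and systems pushing project totals well above this.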

E. High Speed Rail Corporation of India Ltd

Indian Railways set up a corporation called High Speed Rail Corporation of India Ltd (HSRC) in July 2012 that will exclusively deal with the proposed ambitious high-speed rail corridor projects. It will handle tendering, pre-feasibility studies, awarding of contracts and execution of the projects. All high-speed rail lines will be implemented through the public-private partnership (PPP) mode on a Design, Build, Finance, Operate and Transfer (DBFOT) basis.

IV. Prospect of High Speed Train Operation in India

The Mumbai-Ahmadabad rail line is likely to be the first high-speed rail project in India, which the central government plans to take up in the next Five Year Plan. The Central Government is likely to make some important announcements on this project in the upcoming Budget session of Parliament, and the state government of Maharashtra is keeping its fingers crossed, as the cost share between the centre and the state government is yet to be announced. Both the French and Japanese governments have shown interest in this line, which covers a distance of 500 kilometres (312 miles) and is expected to cost around Rs 65,000 crore. Both governments have undertaken feasibility studies and are likely to submit their reports by March 2014. Both are hopeful that their technology will be utilized in building this high-speed rail network. The feasibility studies include defining "high speed" for India (which could be 300-350 km per hour), the fares and the financing practices, including public-private partnerships. On the technology front, what separates the French high-speed train technology from the Japanese, who pioneered the system, is that the TGV trains of France can be operated at a normal speed (160 km/h) and, on special sections, shifted to peak speeds. This made it possible to integrate them easily with the existing railways. Costs are high for such systems, but with relatively inexpensive Indian labour the total cost could come down drastically.


Fig. 5. Rail link from Mumbai to Ahmadabad



The quickening pace of commercial co-operation comes with India and Japan, both democracies, eyeing the rise of China with increasing unease, as Beijing presses territorial claims with growing insistence [5]. In this regard, Japan has already submitted the final report of its feasibility study on upgrading the speed of the existing Mumbai-Ahmadabad route to 160-200 km per hour, and further consultations on the report between the two countries are under way.

V.  Benefits in the Indian Context

In India, of all the benefits discussed earlier, the reduced journey time has been the overriding consideration in the adoption of a high-speed rail network. On the basis of current experience around the world, it has been observed that when distances are between 300 and 600 km and the travel time by high-speed train is less than 2-2.5 hours, the market share of passengers for high-speed rail is at least 75-80%. This percentage decreases dramatically when the travel time increases to 4-5 hours and a round trip during the day is not possible. High-speed train operation will play a significant role in the de-congestion of the megacities of Delhi, Kolkata, Mumbai, Chennai, etc. Operationally, high-speed trains can optimally connect cities 500 to 1,000 km apart; in one of the best-known sectors, Paris-Lyon, the peak capacity is 12,000 passengers per hour at 1,000 people per train, with a service every four minutes.

VI. Conclusion

Once the Indian government decides, it should not take more than 4-5 years to have high-speed trains running on Indian soil.

The benefits for the common man will include:
● With less than one hour of journey time, it will be possible to live in the salubrious climate of Chandigarh and commute to Delhi for work.
● A bullet train between Bangalore and Mysore (about 88 miles) will decongest Bangalore, and one will be able to reach Mysore in 30 minutes. This train will cut the travel time between Bangalore and Mysore and pave the way for their development as twin cities.
● High-speed rail lines from Bangalore to Chennai (180 miles) are also under discussion by the Government of India. Then we might reach Chennai within an hour from Bangalore by surface transport [6].

References
[1] Mundrey, "Tracking for High speed trains in India", RITES Journal, January 2010.
[2] http://zeenews.india.com/news/world/china-shigh-speed-bullet-train-network-exceed-10-000km_879426.html
[3] http://www.globalresearch.ca/eurasian-economicboom-and-geopolitics-china-s-land-bridge-toeurope-the-china-turkey-high-speed-railway
[4] http://www.mapsofindia.com/railways/highspeed-rail-corridors.html
[5] http://www.ibtimes.com/next-stop-bangalorejapan-may-help-south-india-build-high-speedrail-system-1408542
[6] http://www.indianexpress.com/news/india-japanto-study-highspeed-rail-feasibility/1134280/



HMAC Filtering Scheme for Data Reporting in Wireless Sensor Network

E. Kodhai, P. Bharathi and D. Balathiripurasundari

Abstract — Wireless Sensor Networks consist of a large number of small sensor nodes with limited processing power; they can make only limited use of efficient security mechanisms and are susceptible to node compromise and to passive and active attacks. These restrictions make them extremely vulnerable to a variety of attacks. Public-key cryptographic techniques are found to be work-intensive, requiring secure exchange of keys and lengthy hash operations with many processing rounds. These techniques do not provide an adequate verification process for reports from source to sink, nor do they completely mitigate false report injection attacks and Denial of Service attacks. In this work we propose an HMAC'ed filtering scheme for the secure transmission of data, together with a technique called encryption of combined hashes, which filters bogus reports and specifically addresses false report injection attacks and Denial of Service attacks. It has three phases: Key Pre-distribution, Key Dissemination and Report Forwarding. The legitimacy of the report being forwarded by the cluster head is collectively endorsed against a preset value and achieved by Message Authentication Codes. In our proposed scheme the increase in performance is achieved through control messages, increasing secure data transmission and addressing false data reports.

Index Terms — Wireless Sensor Network, mobile relay nodes, wireless routing, bandwidth, energy consumption.

E. Kodhai and P. Bharathi are in the Department of Information Technology, Sri Manakula Vinayagar Engineering College, Puducherry, India (e-mail: kodhaiej@yahoo.co.in, bharathyit3@gmail.com). D. Balathiripurasundari is in DotNet TCS Corporate, Chennai (e-mail: bala10.12.1990@gmail.com).

I.  Introduction

Sensor networks are dense wireless networks of nodes which are small in size and very low-cost, and which collect and disseminate environmental data. Wireless Sensor Networks (WSNs) facilitate the monitoring and controlling of physical environments from remote locations with better accuracy. They have applications in various fields such as environmental monitoring, military operations and the gathering of sensing information in inhospitable places. Sensor nodes have various energy and computational constraints because of their inexpensive nature and ad hoc method of deployment. The number of nodes in a WSN is usually much larger than that in an ad hoc network, and sensor nodes are more resource-constrained in terms of power, computational capabilities and memory. Sensor nodes are typically randomly and densely deployed (e.g., by aerial scattering) within the target sensing area, so the post-deployment topology is not predetermined. Although in many cases the nodes are static in nature, the shape and size of the network might change frequently because the sensor nodes and the wireless channels are prone to failure.

II.  System Model

Some of the existing schemes for filtering false reports in WSN are Statistical En-route Filtering (SEF), Interleaved Hop-by-hop Authentication (IHA) and Providing Location-aware End-to-End Data Security (LEDS). These techniques are discussed briefly in the following sub-sections.

A.  Statistical En-route Filtering (SEF)

Ye et al. [12] proposed a statistical en-route filtering (SEF) scheme based on probabilistic key distribution.



In SEF, a global key pool is divided into n partitions, each containing m keys. Every node randomly picks k keys from one partition. When some event occurs, each sensing node (that detects this event) creates a Message Authentication Code (MAC) for its report using one of its random keys. The cluster-head aggregates the reports from the sensing nodes and ensures that each aggregated report contains T MACs generated using keys from T different partitions, where T is a predefined security parameter. Given that no more than T-1 nodes can be compromised, each forwarding node can detect a false report with a probability proportional to 1/n. The filtering capacity of SEF is independent of the network topology, but constrained by the value of n. To increase the filtering capacity, we can reduce the value of n; however, this allows the adversaries to break all partitions more easily. In addition, since the keys are shared by multiple nodes, the compromised nodes can impersonate other nodes and report some forged events that "occur" in other clusters.
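As a concrete illustration of the partitioned-key endorsement just described, the sketch below implements SEF-style filtering in Python. It is a simplified reconstruction from the description above, not the authors' code; the pool sizes, key derivation and MAC choice are assumptions.

import hmac, hashlib

n, m, T = 10, 50, 5                           # partitions, keys per partition, MACs per report
pool = [[f"key-{p}-{i}".encode() for i in range(m)] for p in range(n)]

def mac(p, i, report):
    """MAC of the report under key i of partition p."""
    return hmac.new(pool[p][i], report, hashlib.sha1).digest()

def endorse(report, sensing_nodes):
    """sensing_nodes: (partition, key_index) pairs held by the detecting nodes.
    The cluster head keeps one MAC per distinct partition, T in total."""
    macs = {}
    for p, i in sensing_nodes:
        macs.setdefault(p, (i, mac(p, i, report)))
        if len(macs) == T:
            break
    return macs if len(macs) == T else None   # needs keys from T partitions

def en_route_filter(report, macs, held):
    """held: partition -> set of key indices this forwarding node holds.
    A node can only disprove a MAC whose exact key it happens to hold."""
    for p, (i, tag) in macs.items():
        if i in held.get(p, ()) and mac(p, i, report) != tag:
            return False                      # provably forged: drop
    return True                               # forward (possibly unverified)

A compromised node that knows keys from only one partition can forge at most one of the T MACs, which is what bounds the scheme's filtering power.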

B.  Interleaved Hop-by-Hop Authentication (IHA)

Zhu et al. [13] proposed an interleaved hop-by-hop authentication (IHA) scheme. In this scheme, the base station periodically initiates an association process enabling each node to establish pairwise keys with other nodes that are t+1 hops away, where t is called the security threshold value. In IHA, each sensing node creates a MAC using one of its multihop pairwise keys, and a legitimate report should contain t+1 distinct MACs. Since every multihop pairwise key is distinguishable, IHA can tolerate up to t compromised nodes in each cluster, rather than in the whole network as SEF does. However, IHA requires a fixed path for transmitting control messages between the base station and each cluster-head, which cannot be assured by some routing protocols such as GPSR and GEAR. Moreover, the high communication overhead incurred by the association process makes IHA unsuitable for networks whose topologies change frequently.

C.  Providing Location-Aware End-to-End Data Security

The Providing Location-aware End-to-End Data Security (LEDS) design overcomes the limitations of the existing hop-by-hop security paradigm and achieves an efficient

and effective end-to-end security paradigm in WSN. It exploits the static and location-aware nature of WSNs and proposes a novel location-aware security approach through two seamlessly integrated building blocks: a location-aware key management framework and an end-to-end data security mechanism. In this method, each sensor node is provided with several types of balanced secret keys, some of which are intended to provide end-to-end data confidentiality, while others provide both end-to-end data authenticity and hop-by-hop authentication. All the keys are computed at each sensor node independently from keying materials pre-loaded before network deployment, with the location information obtained after deployment, without inducing new communication overhead for shared key establishment.

III.  Problem Definition

Each of the existing schemes, Statistical En-route Filtering (SEF), Interleaved Hop-by-hop Authentication (IHA) and Providing Location-aware End-to-End Data Security (LEDS), addresses false report injection attacks and/or DoS attacks. However, they all have some constraints. SEF is independent of network shape and size, but it has limited filtering capacity and cannot prevent impersonation attacks on legitimate nodes. IHA has the drawback that it must periodically establish multihop pairwise keys between nodes. Further, it relies on a fixed path between the base station and each cluster-head to transmit messages in both directions, which cannot be assured given the dynamic topology of sensor networks or the underlying routing protocol in use. LEDS utilizes location-based keys to filter false reports. It assumes that sensor nodes can determine their locations in a short period of time. However, this is not a practical approach, because many localization approaches take quite long and are also vulnerable to malicious attacks. LEDS also tries to address selective forwarding attacks by allowing a whole cell of nodes to forward one report; however, this incurs high communication overhead. Later, we discuss the routing protocol AODV, on which the proposed scheme is to be executed. AODV takes care of the route discovery and maintenance process, allowing the proposed scheme to concentrate on the en-route filtration capacity and the mitigation of false report injection attacks and DoS attacks.





IV.  Design

A.  Introduction


In this section we describe our proposed security scheme, called the HMAC'ed Filtering Scheme for Data Dissemination in WSN. This scheme addresses false report injection attacks and DoS attacks such as selective forwarding and report disruption in WSN. A multifunctional key management framework involving authentication keys is used in this scheme. Similar to SEF and IHA discussed earlier, our proposed en-route filtering scheme also uses the key distribution mechanism employed in WSN. Unlike other schemes, which either lack strong filtering capacity or cannot support highly dynamic sensor networks, our scheme uses a hash chain of authentication keys to endorse reports. Meanwhile, a legitimate report should be authenticated by a certain number of nodes. First each node disseminates its key to the forwarding nodes. Then, after sending reports, the sending nodes disclose their keys, allowing the forwarding nodes to verify their reports. This can be explained with the help of Figure 1.


Fig. 1. Key Derivation

Under this scheme, control messages are used to disseminate and disclose the keys to forwarding sensor nodes and later allow the nodes to verify the keys by decrypting them and finding a shared secret key. To accomplish this, every sensor node maintains two secret key pools and a seed key. A series of authentication keys can be derived from this seed key when needed. Hence, when a shared secret key is found, its corresponding authentication keys are derived and stored in the memory of the sensor nodes. The keys selected randomly from the key pools are thus used to encrypt the authentication keys, which are collectively used for producing the MAC of the report and later for the report's collective endorsement.

B.  Problem Formulation

The vast targeted terrain where the sensor nodes are deployed is divided into multiple cells after network deployment. We assume that the sensor nodes within a cell form a cluster which contains n nodes. In each cluster a node is randomly selected as the cluster head, as in Figure 2. When an event of interest happens in any of these cells, the sensing nodes of that particular cell detect the event and broadcast it to the cluster head. The cluster head aggregates the reports and forwards the aggregated report through the report authentication area down to the sink. The topologies of WSNs change frequently, either because nodes are prone to failure or because they need to switch their states between Active and Sleeping to save energy. As sensor networks are not tamper-resistant, they can be compromised by adversaries. Each cluster may contain some compromised nodes, which may in turn collaborate with each other to generate false reports by sharing secret key information. In this project work we intend to provide solutions for attacks like bogus data injection and denial of service (selective forwarding and report disruption) that can be launched by adversaries to degrade the nodes' lifetime and the critical information carried by them.

Fig. 2. Cluster Formation and report forwarding Route to Sink

We consider:
N - total number of nodes present in the targeted terrain
n - average number of nodes in each cell



l - size of the cell
t - number of correct endorsements required to validate a report
x - number of compromised nodes in a cell

The cluster head intimates events to the sink periodically and finds a routing path called the Report Forward Route. We consider that x nodes inject malicious data into reports periodically to drain battery life. These x nodes inject bogus data by simply offering a wrong MAC to the collective endorsement. Because of a wrong MAC among the t endorsements, a legitimate event report may be dropped by a legitimate node, or a legitimate report share may be dropped by an adversary near the sink, which is called a Report Disruption attack. When multiple clusters disseminate keys at the same time, some forwarding nodes need to store the authentication keys of different clusters. Hence the nodes closer to the base station need to store more authentication keys than others do, because they are usually the hot spots and have to serve more clusters. Our aim is thus to mitigate false data injection early along the route with minimal overhead, improved network lifetime, confidentiality and authentication.

C.  Design of the Project

There are three phases involved in the project, and the relationships between them are shown in Figure 3.


Fig. 3. Relationship between phases

When an event occurs within some cluster, the cluster-head collects the sensing reports from the sensing nodes and aggregates them into aggregated reports. Then it forwards the aggregated reports to the base station through a set of forwarding nodes. In our scheme, each sensing report contains one MAC that is produced by a sensing node using its authentication key (called auth-key for short), while each aggregated report contains distinct MACs depending upon the number of cluster members. Each node possesses a sequence of auth-keys that form a hash chain. Before sending the reports, the cluster-head disseminates the first auth-keys of all nodes to the forwarding nodes that are located on multiple paths from the cluster-head to the base station. The reports are organized into rounds, each containing a fixed number of reports. In every round, each sensing node chooses a new auth-key to authenticate its reports. To facilitate verification by the forwarding nodes, the sensing nodes disclose their auth-keys at the end of each round. Meanwhile, to prevent the forwarding nodes from abusing the disclosed keys, a forwarding node can receive the disclosed auth-keys only after its upstream node overhears that it has already broadcast the reports. Receiving the disclosed keys, each forwarding node verifies the reports and informs its next-hop node to forward or drop the reports based on the verification result. If the reports are valid, it discloses the keys to its next-hop node after overhearing.
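The auth-key hash chain can be made concrete in a few lines of Python. This is an assumed construction based on the description above; the hash function, chain length and usage order are illustrative, not taken from the paper.

import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha1(x).digest()

def make_chain(seed: bytes, rounds: int):
    """Derive K_1 = H(seed), K_2 = H(K_1), ...; keys are used from the end
    of the chain backwards, so a disclosed key never reveals an unused one
    (H cannot be inverted to walk toward the seed)."""
    chain = [H(seed)]
    for _ in range(rounds - 1):
        chain.append(H(chain[-1]))
    return chain

def chains_to_stored(disclosed: bytes, stored: bytes, max_gap: int) -> bool:
    """Forwarding-node check: does hashing the disclosed key a bounded
    number of times reproduce the key stored during dissemination?"""
    k = disclosed
    for _ in range(max_gap):
        if k == stored:
            return True
        k = H(k)
    return False

chain = make_chain(b"node-7-seed", rounds=8)
print(chains_to_stored(chain[2], chain[5], max_gap=8))   # True: 3 hashes apart
print(chains_to_stored(chain[5], chain[2], max_gap=8))   # False: wrong direction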

Fig. 4. Overall process of key distribution and Report Forwarding

The processes of verification, overhearing and key disclosure are repeated by the forwarding nodes, as shown in Figure 4, at every hop until the reports are dropped or delivered to the base station. Specifically, our scheme can be divided into three phases: (i) the key pre-distribution phase, (ii) the key dissemination phase, and (iii) the report forwarding phase.



In the key pre-distribution phase, each node is preloaded with a distinct seed key from which it can generate a hash chain of its auth-keys. In the key dissemination phase, the cluster-head disseminates each node's first auth-key to the forwarding nodes, which will be able to filter false reports later. In the report forwarding phase, each forwarding node verifies the reports using the disclosed auth-keys and the disseminated ones. If the reports are valid, the forwarding node discloses the auth-keys to its next-hop node after overhearing that node's broadcast; otherwise, it informs the next-hop node to drop the invalid reports. This process is repeated by every forwarding node until the reports are dropped or delivered to the base station. In what follows, K(n) denotes the authentication message in which the cluster head (CH) aggregates the keys collected from the sensing nodes.


D.  Algorithm

STEP 1: The Cluster Head (CH) collects sensing reports, as in Figure 4, from the sensor nodes and generates a number of aggregated reports R1, R2, R3. The CH sends these aggregated reports plus an OK message to the next hop υj. An aggregated report must contain t Message Authentication Codes (MACs), each from a sensing node with a distinct Z-key. An aggregated report R looks as follows:

R = {r(υ_{i1}), ..., r(υ_{it})}

where υ_{i1}, ..., υ_{it} denote the t sensing nodes. Since every sensing node reports the same event information E, only one copy of E is kept in the aggregated report R.

STEP 2: Receiving the aggregated reports and OK, υj forwards them to the next hop υj+1. The CH overhears the broadcast of the aggregated reports from υj.

STEP 3: Overhearing the broadcast from υj, the CH discloses the authentication keys to υj by the message K(t):

K(t) = {Auth(υ_{i1}), ..., Auth(υ_{it})}

where K(t) contains the authentication keys of υ_{i1}, ..., υ_{it}. It has the same format as K(n) but contains only t authentication keys.

STEP 4: Receiving K(t), υj first checks the authenticity of the disclosed keys using the disseminated ones that it decrypted from K(n) earlier. Then it verifies the integrity and validity of the reports by checking the MACs of the reports using the disclosed keys.

V.  Verification Process

1. To verify the validity of K(t), υj checks whether K(t) is in the correct format and contains t distinct indexes of Z-keys (secret keys picked randomly from the global key pool Z). If not, it drops K(t).

2. To verify the authenticity of the authentication keys in K(t), υj checks whether each authentication key it stored can be generated by hashing a corresponding key in K(t) a certain number of times. If not, the key is either replayed or forged, and K(t) should be dropped.

3. To verify the integrity and validity of the reports R1, R2, ..., υj checks the MACs in these reports using the disclosed authentication keys that it decrypts from K(t).

STEP 5: If the reports are valid, υj sends an OK message to υj+1. Otherwise it informs υj+1 to drop the invalid reports.

STEP 6: Similar to STEP 2, υj+1 forwards the reports to the next hop.

STEP 7: Similar to STEP 3, after overhearing the broadcast from υj+1, υj discloses K(t) to υj+1.

STEP 8: Every forwarding node repeats STEP 4 to STEP 7 until the reports are dropped or delivered to the base station.
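The per-hop logic of STEPs 4-8 can be sketched as follows; the data layout, helper names and use of MD5-based HMACs are assumptions for illustration, not the paper's implementation.

import hmac, hashlib

def H(x: bytes) -> bytes:
    return hashlib.md5(x).digest()

def make_mac(key: bytes, event: bytes) -> bytes:
    return hmac.new(key, event, hashlib.md5).digest()

def verify_at_hop(reports, K_t, stored_keys, max_gap=64):
    """reports: list of (event, node_id, mac) endorsements;
    K_t: node_id -> disclosed auth-key;
    stored_keys: node_id -> key received earlier via the disseminated K(n)."""
    # Verification items 1-2: check each disclosed key by hashing it until
    # it reproduces the stored (disseminated) key.
    for node_id, key in K_t.items():
        k, ok = key, False
        for _ in range(max_gap):
            if k == stored_keys.get(node_id):
                ok = True
                break
            k = H(k)
        if not ok:
            return False            # replayed or forged K(t): drop it
    # Verification item 3: check the MAC of every report with its disclosed key.
    return all(n in K_t and make_mac(K_t[n], e) == m for e, n, m in reports)

If this returns True, the node sends OK downstream and, after overhearing the next hop's broadcast, discloses K(t); otherwise it tells the next hop to drop the reports.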

VI.  Simulation Results

A.  Introduction

In this section we start with an introduction to the simulation tool NS-2 and the ways of configuring it to run sensor networks, followed by implementation details of the en-route filtering scheme.


B.  Simulation Tool


NS-2 is an event-driven network simulator developed at the University of California at Berkeley, USA; it grew out of the REAL network simulator project in 1989 and was developed with the cooperation of several organizations. NS is not a finished tool that can manage every kind of network model; it is still an ongoing research and development effort.


NS is a discrete event network simulator in which the timing of events is maintained by a scheduler; it is able to simulate various types of networks, such as LAN and WPAN, according to the programming scripts written by the user. Besides that, it also implements a variety of applications, protocols such as TCP and UDP, network elements such as signal strength, traffic models such as FTP and CBR, router queue management mechanisms such as DropTail, and many more.


There are two languages used in NS-2: C++ and OTcl (an object-oriented extension of Tcl). The compiled C++ class hierarchy makes the simulation efficient and execution times faster. The OTcl script written by the user models the network with its own specific topology, protocols and all the requirements needed. The form of output produced by the simulator can also be set using OTcl. The OTcl script is written by creating an event scheduler object and network component objects together with network setup helper modules. The simulation results produced after running the scripts can be used either for simulation analysis or as input to the graphical tool called Network Animator (NAM).

Configuration of sensor network simulations: Setting up a sensor network in NS-2 follows the same format as mobile node simulations. The places where sensor network simulations differ from a mobile node simulation are listed below.
1. Configuration of the Phenomenon channel and Data channel.
2. Configuration of Phenomenon nodes with the PHENOM "routing" protocol.
3. Configuration of the Phenomenon node's pulse rate and phenomenon type.
4. Configuration of sensor nodes.
5. Attaching sensor agents.
6. Attaching a UDP agent and sensor application to each node.
7. Starting the Phenomenon node.
8. Starting the Sensor Application.

Implementation Details of the HMAC'ed Filtering Scheme

Implementation of MD5 Hashing Technique

The MD5 hashing technique is used to produce a hash of the sensor report. To accomplish this task the MD5 algorithm is implemented in a Tcl script for the NS-2 simulation. The steps of the process are:
1. Append the padding bits
2. Append the length
3. Initialize the Message Digest buffer
4. Process the message in 512-bit blocks
5. Output the resultant 128-bit Message Digest.
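For reference, these five steps are exactly what a standard MD5 library performs internally; a quick sanity check in Python follows (illustrative only; the paper's version is a Tcl script inside NS-2, and the report bytes here are made up):

import hashlib

report = b"node-7:event=42:round=3"
digest = hashlib.md5(report)          # padding, length, buffer init and the
                                      # 512-bit block processing happen inside
print(digest.hexdigest())             # 128-bit digest, hex-encoded
print(len(digest.digest()) * 8)       # -> 128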

Implementation of Key Comparison Process and Report Delivery

As the reports are sent in rounds containing a distinct number n of reports, it is not necessary to send the whole K(n), which contains all the first authentication keys of the sensor nodes. Instead we can send only the t authentication keys needed, which enables faster deciphering of the MAC'ed reports. In order to filter false packets early along the route, this K(n) is discarded in the nodes nearer to the sink. This process is accomplished in the following way. Keys are randomly picked from a matrix and are used for producing the HMAC of the report. The cluster head receives all the first authentication keys from the cluster members, packs them into K(n) and sends them to the report forwarding nodes.






The cluster members sense the events, produce the HMAC of the report and then send them collectively to the cluster heads. The cluster head then collectively endorses the received HMACs against the preset value. The keys in K(n) and the key obtained from the HMAC'ed report are compared and verified, and the reports are forwarded by the cluster heads to their one-hop report forwarding nodes. When the HMAC offered by a sensor node is found to be illegitimate, i.e., if the key found in the HMAC differs from the collectively endorsed report, the cluster head marks the node as an attacker, as shown in Figure 5.

Fig. 5. Identification of attacker through collective endorsement

Implementation of Collective Endorsement of Sensor Reports

Sensor reports are HMAC'ed by the HMAC algorithm implemented in the Tcl script, with keys randomly picked from the assigned key matrix. These reports are further divided into small authenticated shares of 16 bytes each and are sent in rounds from the cluster members to the cluster head, in order to prevent the Report Disruption attack. A report disruption attack, when launched, would otherwise make the complete legitimate share of a sensor report be abruptly dropped by a legitimate cluster head through the attacker simply offering an illegitimate MAC to the collective share. Hence, through collective endorsement, the whole sensor report is divided into small authenticated shares such that even when an attacker offers an illegitimate HMAC, the cluster head is able to recover the complete collective share with the help of the legitimate shares received from its members.

Simulation Environment

The proposed secure scheme of dynamic en-route filtering is implemented in the NS-2.27 simulator. The simulation consists of 24 sensor nodes, of which 4 nodes (shown in green) are cluster heads and some are configured to be attackers, plus a base station. The network is randomly deployed in a terrain of 600m X 600m with the simulation environment shown in Table 1.

Table 1. Simulation Environment

PARAMETER | VALUE | DESCRIPTION
Channel | Channel/WirelessChannel | Channel type
Radio Propagation Model | Propagation/TwoRayGround | Propagation model
Network Interface | Phy/WirelessPhy | Network interface type
MAC | Mac/802_11 | Medium access control type
Interface Queue | Queue/DropTail | Interface queue type
Link Layer | LL | Link layer
Antenna | Antenna/OmniAntenna | Antenna model
Interface Queue Size | 5000 (in packets) | Maximum packets in interface queue
Routing Protocol | AODV | Routing protocol
Data Rate | 11 Mbps | Data transfer rate
Interface Queue Size | 50 | Maximum packets in interface queue
Terrain Dimension | 600m X 600m | Terrain dimension of the network
Simulation Time | 100 seconds | Total duration of the simulation
Packet Size | 1026 bytes | Size of the CBR traffic packet
Number of Nodes | 25 | Number of nodes in the scenario
Energy Model | Reception (rx) power 0.3 J/bit; Transmission (tx) power 0.5 J/bit | Power consumption model
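A compact sketch of the share-wise collective endorsement follows. The share handling, threshold name and data layout are assumptions reconstructed from the description above, not the authors' Tcl implementation.

import hmac, hashlib

PRESET_T = 3                                    # endorsements needed per share

def share_mac(key: bytes, share: bytes) -> bytes:
    return hmac.new(key, share, hashlib.md5).digest()

def endorse_shares(shares, members):
    """shares: list of (16-byte fragment, {node_id: tag}) pairs;
    members: node_id -> key. The cluster head validates each share's
    HMACs independently, so one illegitimate HMAC cannot sink the report."""
    good, attackers = [], set()
    for share, macs in shares:
        ok = [n for n, tag in macs.items()
              if share_mac(members[n], share) == tag]
        attackers |= set(macs) - set(ok)        # nodes offering bad HMACs
        if len(ok) >= PRESET_T:
            good.append(share)                  # recoverable from honest shares
    return b"".join(good), attackers

Because each 16-byte share is endorsed independently, one bad HMAC costs the attacker its anonymity rather than costing the network the whole report.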



Performance Metrics & Evaluation

The performance metrics are used to measure the performance of the proposed system.

Filtering capacity

The filtering capacity of the proposed scheme is defined as the average number of hops within which a false report is detected by a forwarding node, or as the fraction of false reports filtered relative to the number of hops travelled.

Energy savings

The energy savings of the proposed scheme is defined as the energy consumed in transmission, reception and computation due to the extra fields, which incur extra overhead. We evaluate the length of a normal report without any filtering scheme and then compare it with the length of an authenticated report in the next phases of the review.


Performance metrics determine the performance of a particular scheme in the presence of constraints related to domain-oriented advantages and drawbacks. We have evaluated our en-route mechanism in terms of throughput and packet loss.

Packet loss

Mobility-related packet loss may occur at both the network layer and the MAC layer. When a packet arrives at the network layer, the routing protocol forwards the packet if a valid route to the destination is known; otherwise, the packet is buffered until a route is available. A packet is dropped in two cases: when the buffer is full at the time the packet needs to be buffered, and when the time the packet has been buffered exceeds the limit. Packet loss can be evaluated with the formula given below:

Packet Loss (in packets) = DataAgtSent - DataAgtRec

where AGT denotes the agent trace (used in the new trace file format).

Scenario: Packet Loss vs Number of Attacker Nodes: The same scenario is maintained, and packet loss is computed by varying the number of attackers. As shown in Figure 6, packet loss is very high when the attacker count increases. Attackers try to launch selective forwarding, report disruption and false report injection attacks, in which the availability of critical information is lost, leading to total energy drain of the resource-constrained sensor nodes or to false positive or false negative intimations at the base station. In this state the malicious node drops all the packets from a selected node, or selected packets from a node, leading to a huge packet loss in the network, as discussed in the threat and trust model of Section 2. With the en-route filtering mechanism, packet loss is reduced to 40%, which is achieved by the identification of attacker nodes through the collective endorsement implemented in the cluster heads.

Fig. 6. Packet Loss vs Number of Attacker Nodes
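The packet-loss formula can be evaluated directly from the simulator's trace file; a minimal sketch is given below. The trace filename is hypothetical, and the parsing assumes NS-2's new wireless trace format, where agent-level records carry the -Nl AGT field.

def agt_packet_loss(trace_path: str) -> int:
    sent = recv = 0
    with open(trace_path) as trace:
        for line in trace:
            if "-Nl AGT" not in line:        # keep only agent-level records
                continue
            if line.startswith("s "):        # send event
                sent += 1
            elif line.startswith("r "):      # receive event
                recv += 1
    return sent - recv                       # DataAgtSent - DataAgtRec

print(agt_packet_loss("hmac_filter.tr"))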

VII. Conclusion

A major challenge for a Wireless Sensor Network lies in the energy constraint at each node, which poses a fundamental limit on the network lifetime. Even though many en-route filtering schemes are available in the literature, they either fail to support the dynamic nature of sensor networks or cannot efficiently mitigate the adversary's activities. Hence en-route filtering is currently an area of much research among security professionals. Generally AODV performs better than many other on-demand protocols under high-mobility, large-network scenarios. When the network is large and highly mobile, the frequency of link failure is also high; due to this, the latency and control load of the network are increased.



Also, because of an attacker's single illegitimate MAC, there is a threat of dropping a complete legitimate share. In this work, we propose an HMAC'ed filtering scheme for WSN that utilizes the dissemination of authentication keys for filtering false data injection attacks and DoS attacks. In our scheme, each node uses its own authentication keys to authenticate the reports, and a legitimate report should be endorsed by t nodes. The authentication keys of each node form a hash chain and are updated in each round. The en-route scheme also yielded a better attacker detection and mitigation framework together with the disseminated key structure. We have thus analyzed the performance metrics of the en-route filtering scheme with the AODV protocol in terms of throughput and packet loss, and the results have been discussed. In future we intend to compare the performance of the en-route filtering scheme with security protocols such as SPINS.

References
[1] Yun Zhou, Yuguang Fang, and Yanchao Zhang, "Securing wireless sensor networks: a survey", IEEE Communications Surveys & Tutorials, Vol. 10, No. 3, pp. 6-28, 2008.
[2] Al-Sakib Khan Pathan, Hyung-Woo Lee, and Choong Seon Hong, "Security in Wireless Sensor Networks: Issues and Challenges", International Conference on Advanced Computing Technologies, Vol. 4, No. 1, pp. 1043-1045, 2006.
[3] Zoran S. Bojkovic, Bojan M. Bakmaz, and Miodrag R. Bakmaz, "Security Issues in Wireless Sensor Networks", International Journal of Communications, Vol. 2, No. 1, pp. 106-114, 2008.
[4] C. Karlof and D. Wagner, "Secure Routing in Wireless Sensor Networks: Attacks and Countermeasures", Proceedings of the First IEEE International Workshop on Sensor Network Protocols and Applications, Vol. 1, pp. 113-127, 2003.
[5] H. Fang, F. Ye, Y. Yuan, S. Lu, and W. Arbaugh, "Toward resilient security in wireless sensor networks", Proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing, Vol. 3, pp. 14-27, 2003.
[6] F. L. Lewis, "Wireless Sensor Networks". Available: http://arri.uta.edu/acs
[7] G. Padmavathi and D. Shanmugapriya, "A Survey of Attacks, Security Mechanisms and Challenges in Wireless Sensor Networks", International Journal of Computer Science and Information Security, Vol. 4, No. 1, pp. 1-9, 2009.
[8] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks", IEEE Communications Magazine, Vol. 40, No. 8, pp. 102-114, 2002.
[9] Hemanta Kumar Kalita and Avijit Kar, "Wireless Sensor Network Security Analysis", International Journal of Next-Generation Networks, Vol. 1, No. 1, pp. 1-10, 2009.
[10] Elaine Shi and Adrian Perrig, "Designing secure sensor networks", IEEE Wireless Communications, Vol. 2, pp. 38-43, 2004.
[11] Haowen Chan, Adrian Perrig, and Dawn Song, "Random Key Predistribution Schemes for Sensor Networks", Proceedings of the IEEE Symposium on Security and Privacy, Vol. 3, pp. 1-17, 2003.
[12] F. Ye, H. Luo, S. Lu, and L. Zhang, "Statistical en-route detection and filtering of injected false data in sensor networks", Proceedings of IEEE INFOCOM, Vol. 4, pp. 2446-2457, 2004.
[13] S. Zhu, S. Setia, S. Jajodia, and P. Ning, "An interleaved hop-by-hop authentication scheme for filtering of injected false data in sensor networks", Proceedings of the IEEE Symposium on Security and Privacy, Vol. 4, pp. 259-271, 2004.
[14] Kui Ren, Wenjing Lou, and Yanchao Zhang, "LEDS: Providing Location-aware End-to-end Data Security in Wireless Sensor Networks", Proceedings of the 25th IEEE International Conference on Computer Communications, pp. 1-12, 2006.
[15] Fasee Ullah, Muhammad Amin, and Hamid ul Ghaffar, "Simulating AODV and DSDV for Adynamic Wireless Sensor Networks", International Journal of Computer Science and Network Security, Vol. 10, No. 7, pp. 1-7, 2010.
[16] Nor Surayati Mohamad Usop, Azizol Abdullah, and Ahmad Faisal Amri Abidin, "Performance Evaluation of AODV, DSDV & DSR Routing Protocol in Grid Environment", International Journal of Computer Science and Network Security, Vol. 9, No. 7, pp. 261-268, 2009.
[17] Adrian Perrig, Robert Szewczyk, Victor Wen, David Culler, and J. D. Tygar, "SPINS: Security Protocols for Sensor Networks", Proceedings of MobiCom 2001, Vol. 8, No. 5, pp. 521-534, 2002.
[18] Gowrishankar S., Subir Kumar Sarkar, and T. G. Basavaraju, "Scenario Based Simulation Study of Ad hoc Routing Protocol's Behavior in Wireless Sensor Networks", International Conference on Future Computer and Communication, Vol. 5, No. 4, pp. 527-532, 2009.



An Efficient Neural Network Technique to Detect Collective Anomalies in E-Medicine

G. Thiyagarajan, C.M. Rasika, B. Sivasankari and S. Sophana Jennifer

Abstract — Anomaly detection methods can be very useful in recognising patterns and detecting attacks. Anomalies can arise for several reasons, such as the condition of the patient, errors in the instruments, or recording errors. The input instance can be treated as spatio-temporal data (space and time) for detecting anomalies in any dataset. In this paper, we propose a collective anomaly technique to detect anomalies in such data. In addition, auto-associative networks are used to reconstruct the input pattern and make it free from errors. The results give us hope that these techniques may provide the basis of intelligent monitoring systems that alert clinicians to the occurrence of unusual events or decisions.

Index Terms — Collective anomaly, IDS, Neural Network, Auto-associative network.

I.  Introduction

With the current medical procedures and the healthy lifestyles of many, the average lifetime expectancy is ever increasing. Technical advances, incorporated with wide and accurate knowledge of the human anatomy, have allowed healthcare professionals the ability to handle almost any scenario that they encounter in individuals at hospitals and emergency treatment facilities.

G. Thiyagarajan is Assistant Professor, Dept of Computer Science, P.B College of Engineering, Chennai, Tamilnadu, India (e-mail: sivashankaripitam@gmail.com). C.M. Rasika, B. Sivasankari and S. Sophana Jennifer are PG students, Dept of Computer Science and Engineering, P.B College of Engineering, Chennai, Tamilnadu, India.

As the average individual lifetime expectancy has increased, this has directly impacted our population, and as such a shortage of qualified healthcare professionals to treat the sick and needy has become an issue. Unlike the signature-based intrusion detection systems (IDS) in common use, anomaly-based IDSs have the potential to detect previously unseen, or zero-day, attacks. However, anomaly detection systems can be evaded through carefully crafted attacks, and they often produce a large number of false positives. To build a successful anomaly detection system, we must develop detection methods that are better at detecting attacks without misclassifying legitimate behaviour. The key to this trade-off lies in the nature of the generalization performed by a given anomaly detection method. If the method generalizes too much over the training examples, it will be easy for attackers to craft attacks that resemble normal behaviour; if it generalizes too little, previously unseen but legitimate behaviour will generate false alarms. Careful control of generalization is therefore central to solving both problems. Anomaly detection refers to the problem of finding patterns in data that do not match the expected behaviour. These unmatched patterns are referred to as anomalies, outliers, discordant observations, exceptions, abnormalities, surprises or peculiarities in different application domains.

II.  Background

Collective Anomalies: If a collection of related data instances is anomalous with respect to the entire data set, it is termed a collective anomaly. The individual data instances in a collective anomaly may not be anomalies by themselves, but their occurrence together as a group is



anomalous. Collective anomalies have been explored for sequence data, graph data and spatial data. It should be noticed that while point anomalies can exist in any data set, collective anomalies can occur only in data sets in which data instances are related. In contrast, the existence of contextual anomalies depends upon the availability of context attributes in the data. Any point anomaly or collective anomaly can also be a contextual anomaly if it is analyzed with respect to a context. Hence the point anomaly detection problem or the collective anomaly detection problem can be transformed into a contextual anomaly detection problem by incorporating the context information.


Figure 1 is an example of a human electrocardiogram output. The highlighted region represents an anomaly because the same low value exists for an abnormally long time (which corresponds to an Atrial Premature Contraction). Note that the low value by itself is not an anomaly.


It should be noted that this collection of events is an anomaly, but the individual events are not considered as anomalies when they occur in other locations in the sequence.

Fig. 1. Collective anomaly corresponding to an Atrial Premature Contraction in a human electrocardiogram output.

The techniques used for detecting collective anomalies are very different from point and contextual anomaly detection techniques. For a brief review of the research done in the field of collective anomaly detection, the reader is referred to an extended version of this survey. In this paper, we focus on collective anomaly detection in medical readings, and we propose a new approach based on neural network algorithms to detect abnormal values. First we use the collective anomaly method to detect abnormal records; when anomalies are detected, we apply a neural network. Our proposed approach is intended to provide reliability in medical equipment used for the continuous monitoring of patients, where we detect anomalies in a patient's health and differentiate between the individual entering a critical health state and false readings.

III.  Proposed Approach

The design and function of neural networks simulate some of the functionality of biological brains and neural systems. The advantages of neural networks are their adaptive-learning, self-organizing and fault-tolerance capabilities. Because of these outstanding abilities, neural networks are used for pattern recognition applications.

Neural network models in artificial intelligence are usually referred to as artificial neural networks (ANNs); these are essentially simple mathematical models defining a function f : X → Y or a distribution over X or both X and Y, but sometimes models are also intimately associated with a particular learning algorithm or learning rule. A common use of the phrase ANN model really means the definition of a class of such functions (where members of the class are obtained by varying parameters, connection weights, or specifics of the architecture such as the number of neurons or their connectivity). The word network in the term 'artificial neural network' refers to the interconnections between the neurons in the different layers of each system. A typical example is a system having three layers. The first layer has input neurons which send data via synapses to the second layer of neurons, and then via more synapses to the third layer of output neurons. More complex systems will have more layers of neurons, with some having increased layers of input neurons and output neurons. The synapses store parameters called "weights" that manipulate the data in the calculations. An ANN is typically defined by three types of attributes:

1. The interconnection pattern between the different layers of neurons;
2. The learning process used to update the weights of the interconnections;
3. The activation function which converts a neuron's weighted input to its output activation.
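To make these attributes concrete, the sketch below trains a tiny auto-associative network (an autoencoder, the reconstruction architecture the paper relies on) on synthetic "normal" readings and scores new inputs by reconstruction error. The layer sizes, training data and learning rate are all assumptions for illustration, not the paper's model.

import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(70, 5, size=(200, 8))        # synthetic "normal" readings
mu, sd = raw.mean(0), raw.std(0)
X = (raw - mu) / sd                           # standardize before training

W1 = rng.normal(0, 0.5, (8, 3)); b1 = np.zeros(3)   # 8 -> 3 -> 8 network
W2 = rng.normal(0, 0.5, (3, 8)); b2 = np.zeros(8)

def forward(x):
    h = np.tanh(x @ W1 + b1)                  # hidden encoding
    return h, h @ W2 + b2                     # reconstruction

lr = 0.05
for _ in range(3000):                         # plain gradient descent on MSE
    h, out = forward(X)
    err = (out - X) / len(X)
    dW2 = h.T @ err; db2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)          # tanh derivative
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

def anomaly_score(reading):
    """High reconstruction error = unlike the learned normal pattern."""
    x = (reading - mu) / sd
    _, out = forward(x)
    return float(((out - x) ** 2).mean())

print(anomaly_score(rng.normal(70, 5, 8)))    # small: looks normal
print(anomaly_score(np.full(8, 160.0)))       # large: collective deviation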



A. Filtering: Filtering is removing surplus information or data from the input. Depending on the application, the filter algorithm or method will change. For example, consider fingerprint identification. Each time we scan our fingerprints through a (non-ink) fingerprint device, the scanned output may be different. The difference may be due to a change in contrast or brightness or in the background of the image, and there could be some distortion. In order to process the input, we may need only the lines in the fingerprints and not the other parts or the background of the fingerprint. In order to filter out the surplus portion of the image and replace it with a white background, we need a filtering feature. After the image passes through the filtering mechanism, we get standard clean fingerprints with only lines, which in turn helps the feature extraction mechanism.

B. Feature extraction: Feature extraction is a process of studying and extracting useful information from the filtered input patterns. The derived information may be general features, which are valuable for easing further processing. For example, in image recognition, the extracted features will contain information about the gray shade, texture, shape or context of the image. This is the ultimate information used in image processing. The process involved in feature extraction and the extracted features are application dependent.

C. Classification: Classification is the final stage of pattern recognition. Classification is the stage where an automated system declares that the input object belongs to a particular category. In the clustering approach to classification, the patterns of the targeted classes are represented as vectors whose components are real numbers. By using clustering properties, we can easily classify an unknown pattern. If the target vectors are far apart in the geometric arrangement, it is easy to categorize the unknown patterns. If they lie nearby, or if there is any overlap in the cluster arrangement, we need more complicated algorithms to classify the unknown patterns. One simple algorithm based on the clustering concept is Minimum-Distance Classification, sketched below. This method evaluates the distance between the unknown pattern and a chosen set of known patterns, determines which known pattern is nearest to the unknown and, finally, places the unknown pattern under the known pattern to which it has minimal distance. This algorithm works well when the target patterns are far apart.
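Following the description above, here is a minimal sketch of Minimum-Distance Classification; the prototype vectors and labels are made-up values, purely for illustration.

import numpy as np

prototypes = {                      # one known pattern (vector) per class
    "normal":   np.array([72.0, 16.0, 98.0]),   # e.g. pulse, resp, SpO2
    "critical": np.array([150.0, 30.0, 85.0]),
}

def classify(unknown):
    """Assign the unknown pattern to the nearest known pattern."""
    dists = {label: float(np.linalg.norm(unknown - proto))
             for label, proto in prototypes.items()}
    return min(dists, key=dists.get), dists

label, dists = classify(np.array([80.0, 18.0, 97.0]))
print(label, dists)                 # -> "normal" when clusters are far apart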

IV.  Evaluation of Clustering Results

Evaluation of clustering results is commonly referred to as cluster validation. There have been several proposals for measures of similarity between two clustering schemes. Such a measure can be used to compare how well different data clustering algorithms perform on a set of data. These measures are usually tied to the type of criterion being considered in assessing the quality of a clustering method.

A.  Internal Evaluation

When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications. Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering. Therefore, internal evaluation measures are best suited to getting some insight into situations where one algorithm performs better than another, but this does not imply that one algorithm produces more valid results than another. Validity as measured by such an index depends on the claim that this kind of structure exists in the data set. An algorithm designed for certain kinds of models will not perform well if the data set contains a radically different set of models, or if the evaluation measures a radically different criterion. For example, k-means clustering can only find convex clusters, and many evaluation indexes assume convex clusters. On a data set with non-convex clusters, neither the use of k-means nor of an evaluation criterion that assumes convexity is sound. The following methods can be used to assess the quality of clustering algorithms based on internal criteria:

B.  Davies–Bouldin Index

The Davies–Bouldin index can be calculated by the following formula:


DB = \frac{1}{n} \sum_{i=1}^{n} \max_{j \neq i} \left( \frac{\sigma_i + \sigma_j}{d(c_i, c_j)} \right)

where n is the number of clusters, c_x is the centroid of cluster x, \sigma_x is the average distance of all elements in cluster x to centroid c_x, and d(c_i, c_j) is the distance between centroids c_i and c_j. Algorithms that produce clusters with low intra-cluster distances (high intra-cluster similarity) and high inter-cluster distances (low inter-cluster similarity) will have a low Davies–Bouldin index; the clustering algorithm that produces a collection of clusters with the smallest Davies–Bouldin index is considered the best algorithm based on this criterion.

Dunn index (J. C. Dunn 1974): The Dunn index aims to identify dense and well-separated clusters. It is defined as the ratio between the minimal inter-cluster distance and the maximal intra-cluster distance. For each cluster partition, the Dunn index can be calculated by the following formula:

D = \min_{1 \le i \le n} \left\{ \min_{1 \le j \le n,\, j \neq i} \left( \frac{d(i, j)}{\max_{1 \le k \le n} d(k)} \right) \right\}

where d(i, j) represents the distance between clusters i and j, and d(k) measures the intra-cluster distance of cluster k. The inter-cluster distance d(i, j) between two clusters may be any of a number of distance measures, such as the distance between the centroids of the clusters. Similarly, the intra-cluster distance d(k) may be measured in a variety of ways, such as the maximal distance between any pair of elements in cluster k. Since internal criteria seek clusters with high intra-cluster similarity and low inter-cluster similarity, algorithms that produce clusters with a high Dunn index are more desirable.

External evaluation: In external evaluation, clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, often created by human experts. Thus, the benchmark sets can be thought of as a gold standard for evaluation. These types of evaluation methods measure how close the clustering is to the predetermined benchmark classes. However, it has recently been discussed whether this is adequate for real data, or only for synthetic data sets

with a factual ground truth, since classes can contain internal structure and the attributes present may not allow separation of clusters or the classes may contain anomalies. Additionally, from a knowledge discovery point of view, the reproduction of known knowledge may not necessarily be the intended result. Some of the measures of quality of a cluster algorithm using external criterion include: Rand measure (William M. Rand 1971): The Rand index computes how similar the clusters (returned by the clustering algorithm) are to the benchmark classifications. One can also view the Rand index as a measure of the percentage of correct decisions made by the algorithm. It can be computed using the following formula: RI =

TP + TN TP + FP + FN + TN

where TP is the number of true positives, TN is the number of true negatives, FP is the number of false positives, and FN is the number of false negatives. One issue with the Rand index is that false positives and false negatives are equally weighted. This may be an undesirable characteristic for some clustering applications. The F-measure addresses this concern.

F-measure: The F-measure can be used to balance the contribution of false negatives by weighting recall through a parameter β ≥ 0. Let precision and recall be defined as follows:

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}$$

where P is the precision rate and R is the recall rate. We can calculate the F-measure using the following formula:

$$F_{\beta} = \frac{(\beta^{2} + 1) \cdot P \cdot R}{\beta^{2} \cdot P + R}$$

Notice that when β = 0, F0 = P. In other words, recall has no impact on the F-measure when β = 0; increasing β allocates an increasing amount of weight to recall in the final F-measure.


Pair-counting F-Measure is the F-Measure applied to the set of object pairs, where objects are paired with each other when they are part of the same cluster. This measure is able to compare clustering methods with different numbers of clusters.


Jaccard index: The Jaccard index is used to quantify the similarity between two data sets. The Jaccard index takes on a value between 0 and 1. An index of 1 means that the two datasets are identical, and an index of 0 indicates that the datasets have no common elements. The Jaccard index is defined by the following formula:

$$J(A, B) = \frac{|A \cap B|}{|A \cup B|} = \frac{TP}{TP + FP + FN}$$

This is simply the number of unique elements common to both sets divided by the total number of unique elements in both sets.

Fowlkes–Mallows index (E. B. Fowlkes & C. L. Mallows 1983): The Fowlkes–Mallows index computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes–Mallows index, the more similar the clusters and the benchmark classifications are. It can be computed using the following formula:

$$FM = \sqrt{\frac{TP}{TP + FP} \cdot \frac{TP}{TP + FN}}$$

where TP is the number of true positives, FP is the number of false positives, and FN is the number of false negatives. The FM index is the geometric mean of the precision P and recall R, while the F-measure is their harmonic mean. Moreover, precision and recall are also known as Wallace's indices B^I and B^II.

Confusion matrix: A confusion matrix can be used to quickly visualize the results of a classification (or clustering) algorithm. It shows how different a cluster is from the gold-standard cluster.

Mutual information is an information-theoretic measure of how much information is shared between a clustering and a ground-truth classification; it can detect a non-linear similarity between two clusterings. Adjusted mutual information is the corrected-for-chance variant of this that has a reduced bias for varying cluster numbers.
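The pair-counting quantities TP, FP, FN, and TN underlying these indices can be computed directly from two labelings. The sketch below is illustrative, assuming integer label arrays and the (β² + 1) form of the F-measure given above:

```python
import numpy as np
from itertools import combinations

def pair_confusion(labels_pred, labels_true):
    """Count pairs of points: TP (same cluster and same class), FP, FN, TN."""
    TP = FP = FN = TN = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_cluster = labels_pred[i] == labels_pred[j]
        same_class = labels_true[i] == labels_true[j]
        if same_cluster and same_class:
            TP += 1
        elif same_cluster:
            FP += 1
        elif same_class:
            FN += 1
        else:
            TN += 1
    return TP, FP, FN, TN

def external_scores(labels_pred, labels_true, beta=1.0):
    TP, FP, FN, TN = pair_confusion(labels_pred, labels_true)
    P, R = TP / (TP + FP), TP / (TP + FN)
    return {
        "rand": (TP + TN) / (TP + FP + FN + TN),
        "jaccard": TP / (TP + FP + FN),
        "fowlkes_mallows": np.sqrt(P * R),                    # geometric mean of P and R
        "f_beta": (beta**2 + 1) * P * R / (beta**2 * P + R),  # harmonic-mean family
    }
```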

C.  Clustering Axioms

Given that there is a myriad of clustering algorithms and objectives, it is helpful to reason about clustering independently of any particular algorithm, objective function, or generative data model. This can be achieved by defining a clustering function as one that satisfies a set of properties; this is often termed an axiomatic system. Functions that satisfy the basic axioms are called clustering functions.

A partitioning function acts on a set S of n ≥ 2 points, along with an integer k > 0 and pairwise distances among the points in S. The points are not assumed to belong to any specific larger set or space; the pairwise distances are the only data the partitioning function has about them. We may label the points in S using the numbers {1, 2, ..., n}. The pairwise distances define a distance function d : S × S → R which should have the properties of a semimetric: for any i, j ∈ S, we must have d(i, j) ≥ 0, d(i, j) = d(j, i), and d(i, j) = 0 if and only if i = j. In other words, the distances must be non-negative and symmetric, and two points have distance zero if and only if they are the same point.

Consistency: Fix k. Let d be a distance function, and d′ be an F(d, k)-transformation of d. Then F(d, k) = F(d′, k). In other words, suppose that we run the partitioning function F on d to get back a particular partitioning Γ. Now, with respect to Γ, if we shrink in-cluster distances or expand between-cluster distances and run F again, we should still get back the same result, namely Γ. The partitioning function F is forced to return a fixed number of clusters, k; if this were not the case, the above properties could never be satisfied by any function. In many popular clustering algorithms such as k-means, single-linkage, and spectral clustering, the number of clusters to be returned is determined beforehand, by the human user or other methods, and passed into the clustering function as a parameter.
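As a small illustration of the semimetric requirement, the following sketch checks the three properties on a pairwise-distance matrix; the matrix representation and tolerance are assumptions made for the example, not part of the axiomatic framework itself:

```python
import numpy as np

def is_semimetric(D, tol=1e-12):
    """Check non-negativity, symmetry, and d(i, j) = 0 iff i = j on a distance matrix D."""
    off_diag = D[~np.eye(len(D), dtype=bool)]
    return (np.all(D >= -tol)                        # d(i, j) >= 0
            and np.allclose(D, D.T)                  # d(i, j) = d(j, i)
            and np.all(np.abs(np.diag(D)) <= tol)    # d(i, i) = 0
            and np.all(off_diag > tol))              # d(i, j) > 0 for i != j
```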

V.  Conclusion

In this paper, we proposed a collective anomaly detection approach based on an autoassociative neural network to detect anomalies in patient healthcare data. The proposed approach performs both spatial and temporal analysis for anomaly detection. We have



evaluated our approach on a real medical data set with many (real and synthetic) anomalies. Our results demonstrate the ability of the proposed approach to achieve a low false alarm rate with high detection accuracy.

References

[1] Osman Salem, Alexey Guerassimov and Ahmed Mehaoua (2013), "Sensor Fault and Patient Anomaly Detection and Classification in Medical Wireless Sensor Networks", University of Paris Descartes – LIPADE; Division of ITCE, POSTECH, Korea.
[2] Margarita Sordo (2002), "Introduction to Neural Networks in Healthcare", OpenClinical knowledge for medical care.
[3] Kenneth L. Ingham and Anil Somayaji (2009), "A Methodology for Designing Accurate Anomaly Detection Systems".
[4] Varun Chandola, Arindam Banerjee, and Vipin Kumar (2009), "Anomaly Detection: A Survey", University of Minnesota.
[5] Saman A. Zonouz, Himanshu Khurana, William H. Sanders, and Timothy M. Yardley (2013), "RRE: A Game-Theoretic Intrusion Response and Recovery Engine".
[6] Animesh Patcha and Jung-Min Park, "An Overview of Anomaly Detection Techniques: Existing Solutions and Latest Technological Trends".
[7] I. H. Witten, E. Frank, and M. A. Hall (2011), "Data Mining: Practical Machine Learning Tools and Techniques (Third Edition)", Morgan Kaufmann Publishers Inc.
[8] A. S. Raghuvanshi, R. Tripathi, and S. Tiwari (2011), "Machine Learning Approach for Anomaly Detection in Wireless Sensor Data", International Journal of Advances in Engineering & Technology, vol. 1, no. 4, pp. 47–61.
[9] Poonam Dabas and Rashmi Chaudhary, "Survey of Network Intrusion Detection Using K-Mean Algorithm".
[10] Sushil Kumar Chaturvedi, Vineet Richariya, and Nirupama Tiwari, "Anomaly Detection in Network using Data Mining Techniques".
[11] Mooi Choo Chuah, "ECG Anomaly Detection via Time Series Analysis", Lehigh University.
[12] A. J. Barsky, E. J. Orav, and D. W. Bates (2006), "Distinctive patterns of medical care utilization in patients who somatize", Med Care, 44(9):803–811.
[13] L. Breiman, J. Friedman, R. Olshen, and C. Stone (1984), "Classification and Regression Trees", CRC Press, Boca Raton.
[14] Leo Breiman (2001), "Random forests", Machine Learning, 45(1):5–32.
[15] H. V. D. Bussche, G. Schon, T. Kolondo, H. Hansen, K. Wagscheider, G. Glaeske, and D. Koller (2011), "Patterns of ambulatory medical care utilization in elderly patients with special reference to chronic diseases and multimorbidity: results from a claims-data-based observational study in Germany", BMC Geriatrics, 11(1).
[16] V. Chandola, A. Banerjee, and V. Kumar (2007), "Anomaly detection: A survey", Technical Report TR 07-017, Dept. of Computer Engineering, Univ. Minnesota.



Deriving Intelligence from Data Through Text Mining

C.T. Sree Vidhya

Abstract — Clustering or classification of data is an important activity that partitions data points into similarity classes. Fuzzy clustering algorithms provide a fuzzy description of the discovered structure. The main advantage of fuzzy clustering is the ability to model uncertainty within the data. The problem with Fuzzy C-Means (FCM) is that the membership of a data point in a cluster depends directly on the membership values at the other cluster centers, and this sometimes produces undesirable results. Another important task in the data mining process is detecting outliers. The isolation of outliers is important both for improving the quality of the original data and for reducing the impact of outlying values in the process of knowledge discovery in databases. The drawback of detecting outliers through conventional methods, such as crisp clustering techniques, is sensitivity to noise and outliers. This paper proposes a method that overcomes these drawbacks. The proposed method initially performs an improved FCM algorithm which uses a simple expression to calculate the membership value. The improved FCM algorithm is then used to detect outliers. A diabetes database is used to test the result. Here diabetic persons are considered as customers (patients) from whom intelligence is derived. The patient records are in document format.

Index Terms — Text mining, Clustering, Fuzzy Clustering, Diabetes Analysis.

C.T. Sree Vidhya is in School of Computing Sciences, Hindustan University, Chennai, India, (e-mail: mailctsree@gmail.com)

I.  Introduction

A. Text Mining

Text mining or knowledge discovery from text (KDT) — first mentioned in Feldman et al. [1] — deals with the machine-supported analysis of text. It uses techniques from information retrieval, information extraction and natural language processing (NLP) and connects them with the algorithms and methods of KDD, data mining, machine learning and statistics [7]. Thus, one follows a procedure similar to the KDD process, except that not data in general, but text documents, are the focus of the analysis. Text mining essentially corresponds to information extraction — the extraction of facts from texts. Text mining can also be defined, analogously to data mining, as the application of algorithms and methods from the fields of machine learning and statistics to texts with the goal of finding useful patterns. For this purpose, it is necessary to pre-process the texts accordingly. Many authors use information extraction methods, natural language processing or some simple pre-processing steps in order to extract data from texts. Data mining algorithms can then be applied to the extracted data.
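For instance, a common pre-processing route is to turn the documents into numeric vectors before any clustering is applied. A minimal sketch using scikit-learn is shown below; the sample snippets are hypothetical stand-ins for the patient reports discussed later:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical report snippets standing in for the document collection
docs = ["frequent urination and excessive thirst",
        "blurred vision with sudden weight loss",
        "dry itchy skin and a slow-healing infection"]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)  # texts -> TF-IDF vectors
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```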

B. Cluster Analysis

Cluster analysis is a technique for breaking data down into related components in such a way that patterns and order become visible. It aims at sifting through large volumes of data in order to reveal useful information in the form of new relationships, patterns, or clusters, for decision making by a user. Clusters are natural groupings of data items based on similarity metrics or probability density models. Clustering algorithms map a new data item into one of several known clusters. In fact, cluster analysis has the virtue of strengthening the exposure of patterns and behaviour as



more and more data becomes available [12]. A cluster has a centre of gravity, which is basically the weighted average of the cluster. Membership of a data item in a cluster can be determined by measuring the distance from each cluster centre to the data point [13]; the data item is added to the cluster for which this distance is a minimum.
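This minimum-distance rule is straightforward to express in code. A minimal sketch, assuming numpy arrays of points and cluster centres and Euclidean distance:

```python
import numpy as np

def assign_to_clusters(X, centers):
    """Assign each data item to the cluster whose centre is nearest."""
    # distances[i, j] = Euclidean distance from point i to centre j
    distances = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return distances.argmin(axis=1)
```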

II.  Fuzzy Logic

A. Fuzzy Theory

The modelling of imprecise and qualitative knowledge, as well as the handling of uncertainty at various stages, is possible through the use of fuzzy sets. Fuzzy logic is capable of supporting, to a reasonable extent, human-type reasoning in a natural form by allowing partial membership for data items in fuzzy subsets. The integration of fuzzy logic with data mining techniques has become one of the key constituents of soft computing in handling the challenges posed by massive collections of natural data. Fuzzy logic is the logic of fuzzy sets. A fuzzy set has, potentially, an infinite range of truth values between one and zero. Propositions in fuzzy logic have a degree of truth, and membership in fuzzy sets can be fully inclusive, fully exclusive, or some degree in between [13]. The fuzzy set is distinct from a crisp set in that it allows the elements to have a degree of membership. The core of a fuzzy set is its membership function: a function which defines the relationship between a value in the set's domain and its degree of membership in the fuzzy set. The relationship is functional because it returns a single degree of membership for any value in the domain [11]:

$$\mu = f(s, x) \qquad (1)$$

Here, μ is the fuzzy membership value for the element, s is the fuzzy set, and x is the value from the underlying domain. Fuzzy sets provide a means of defining a series of overlapping concepts for a model variable through degrees of membership. The values from the complete universe of discourse for a variable can have memberships in more than one fuzzy set.
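As a concrete example of such a membership function, a triangular fuzzy set is one common choice; the shape parameters a, b, c below are illustrative assumptions:

```python
def triangular_membership(x, a, b, c):
    """Degree of membership of x in a triangular fuzzy set rising at a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)   # rising edge
    return (c - x) / (c - b)       # falling edge

# e.g. a fuzzy concept "middle-aged" over an age domain
print(triangular_membership(40, 25, 45, 65))   # 0.75
```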

B. Fuzzy Clustering

The central idea in fuzzy clustering is the non-unique partitioning of the data into a collection of clusters. The data points are assigned membership values for each of the clusters, and the fuzzy clustering algorithms allow the clusters to grow into their natural shapes [15]. In some cases the membership value may be zero, indicating that the data point is not a member of the cluster under consideration. Many crisp clustering techniques have difficulties in handling extreme outliers, but fuzzy clustering algorithms tend to give them very small membership degrees in the surrounding clusters [14]. The non-zero membership values, with a maximum of one, show the degree to which the data point represents a cluster. Thus fuzzy clustering provides a flexible and robust method for handling natural data with vagueness and uncertainty. In fuzzy clustering, each data point will have an associated degree of membership for each cluster. The membership value is in the range zero to one and indicates the strength of its association with that cluster.

III.  Related Work

A. Outlier Detection Approaches

There is no single universally applicable or generic outlier detection approach. Therefore, many approaches have been proposed to detect outliers. These approaches can be classified into four major categories based on the techniques used [18]: distribution-based, distance-based, density-based and clustering-based approaches.

Distribution-based approaches [19, 20, 21] develop statistical models (typically for the normal behaviour) from the given data and then apply a statistical test to determine if an object belongs to this model or not. Objects that have a low probability of belonging to the statistical model are declared outliers. However, distribution-based approaches cannot be applied in multidimensional scenarios because they are univariate in nature. In addition, prior knowledge of the data distribution is required, making distribution-based approaches difficult to use in practical applications [18].

In the distance-based approach [6, 7, 22, 23], outliers are detected as follows. Given a distance measure on a feature space, a point q in a data set is an outlier with respect to the parameters M and d if there are fewer than M points within the distance d from q, where the values of M and d are decided by the user.
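This definition translates directly into a brute-force check. A minimal sketch, assuming a numpy point matrix and Euclidean distance (the choice of M and d is left to the user, as noted above):

```python
import numpy as np

def distance_based_outliers(X, M, d):
    """Flag points with fewer than M neighbours within distance d."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # all pairwise distances
    neighbour_counts = (D <= d).sum(axis=1) - 1                # exclude the point itself
    return np.where(neighbour_counts < M)[0]
```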



The problem with this approach is that it is difficult to determine the values of M and d.

Density-based approaches [17, 25] compute the density of regions in the data and declare the objects in low-density regions to be outliers. In [17], the authors assign an outlier score, known as the Local Outlier Factor (LOF), to any given data point depending on its distance from its local neighbourhood. A similar work is reported in [25].

Clustering-based approaches [9, 18, 19, 20] consider clusters of small size to be clustered outliers. In these approaches, small clusters (i.e., clusters containing significantly fewer points than the other clusters) are considered outliers. The advantage of the clustering-based approaches is that they do not have to be supervised. Moreover, clustering-based techniques are capable of being used in an incremental mode (i.e., after learning the clusters, new points can be inserted into the system and tested for outliers). Cutsem and Gath [19, 18] present a method based on fuzzy clustering. In order to test the absence or presence of outliers, two hypotheses are used. However, the hypotheses do not account for the possibility of multiple clusters of outliers.

Jiang et al. [20] presented a two-phase method to detect outliers. In the first phase, the authors proposed a modified k-means algorithm to cluster the data; in the second phase, an Outlier-Finding Process (OFP) is proposed. The small clusters are selected and regarded as outliers, using minimum spanning trees.

In [9], clustering methods have been applied. The key idea is to use the size of the resulting clusters as an indicator of the presence of outliers. The author uses a hierarchical clustering technique. A similar approach was reported in [22].

Acuna and Rodriguez [21] performed the PAM algorithm followed by the Separation Technique (termed PAMST). The separation of a cluster A is defined as the smallest dissimilarity between two objects, one of which belongs to cluster A and the other of which does not. If the separation is large enough, then all objects that belong to that cluster are considered outliers. In order to detect the clustered outliers, one must vary the number k of clusters until clusters of small size with a large separation from other clusters are obtained.

In [22], the authors proposed a clustering-based approach to detect outliers. The k-means clustering algorithm is used. As mentioned in [13], k-means is sensitive to outliers, and hence may not give accurate results.

B. FCM Fuzzy Algorithm [10]

Fuzzy c-means clustering involves two processes: the calculation of cluster centres and the assignment of points to these centres using a form of Euclidean distance. This process is repeated until the cluster centres stabilize. The algorithm is similar to k-means clustering in many ways, but it assigns a membership value to the data items for the clusters within a range of 0 to 1. It thus incorporates the fuzzy-set concept of partial membership and forms overlapping clusters to support it. The algorithm needs a fuzzification parameter m in the range [1, n] which determines the degree of fuzziness in the clusters. When m reaches the value of 1, the algorithm works like a crisp partitioning algorithm, and for larger values of m the overlapping of the clusters tends to increase. The algorithm calculates the membership value μ with the formula

$$\mu_j(x_i) = \frac{\left(1/d_{ji}\right)^{\frac{1}{m-1}}}{\sum_{k=1}^{p} \left(1/d_{ki}\right)^{\frac{1}{m-1}}} \qquad (2)$$

where μj(xi) is the membership of xi in the j-th cluster, dji is the distance of xi in cluster cj, m is the fuzzification parameter, p is the number of specified clusters, and dki is the distance of xi in cluster ck.

The new cluster centres are calculated with these membership values using the expression

$$C_j = \frac{\sum_i \left[\mu_j(x_i)\right]^m x_i}{\sum_i \left[\mu_j(x_i)\right]^m} \qquad (3)$$

where Cj is the centre of the j-th cluster, xi is the i-th data point, μj is the function which returns the membership, and m is the fuzzification parameter. This is a special form of weighted average: we modify the degree of fuzziness of xi's current membership, multiply this by xi, and divide the product by the sum of the fuzzified memberships. The first loop of the algorithm calculates membership values for the data points in the clusters and the second loop recalculates



the cluster centres using these membership values. When the cluster centres stabilize (when there is no change) the algorithm ends (see Table 1).

Table 1. Fuzzy C-Means Algorithm

initialize p = number of clusters
initialize m = fuzzification parameter
initialize Cj (cluster centres)
repeat
    for i = 1 to n: update μj(xi) applying (2)
    for j = 1 to p: update Cj with (3) using the current μj(xi)
until the Cj estimates stabilize
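A runnable sketch of this loop, using the membership expression (2) and the centre update (3), is shown below. The vectorized numpy formulation, the small eps guard against zero distances (the fourth limitation discussed next), and the initialization by random sampling are implementation assumptions:

```python
import numpy as np

def fcm(X, p, m=2.0, max_iter=100, tol=1e-5, eps=1e-10, seed=0):
    """Fuzzy c-means: membership per eq. (2), centre update per eq. (3)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=p, replace=False)]
    for _ in range(max_iter):
        # d[j, i]: distance of point i from centre j (eps avoids division by zero)
        d = np.linalg.norm(centers[:, None, :] - X[None, :, :], axis=2) + eps
        u = (1.0 / d) ** (1.0 / (m - 1.0))
        u /= u.sum(axis=0, keepdims=True)      # eq. (2): memberships sum to 1 over clusters
        um = u ** m
        new_centers = (um @ X) / um.sum(axis=1, keepdims=True)   # eq. (3)
        if np.linalg.norm(new_centers - centers) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, u
```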

C. Limitations of the Algorithm

The fuzzy c-means approach to clustering suffers from several constraints that affect its performance [14].

1. The main drawback stems from the restriction that the sum of the membership values of a data point xi over all the clusters must be one, as in (4):

$$\sum_{j=1}^{p} \mu_j(x_i) = 1 \qquad (4)$$

This tends to give high membership values to outlier points, so the algorithm has difficulty in handling outlier points.

2. Secondly, the membership of a data point in a cluster depends directly on the membership values at the other cluster centres, and this sometimes produces undesirable results. In the fuzzy c-means method a point has partial membership in all the clusters.

3. The expression (3) for calculating the new cluster centres finds a special form of weighted average of all the data points. The third limitation of the algorithm is that, due to the influence (partial membership) of all the data members, the cluster centres tend to move towards the centre of all the data points [10].

4. The fourth constraint of the algorithm is its inability to calculate the membership value if the distance of a data point is zero.

IV.  The Modified FCM (MFCM)

In the c-means algorithm, the membership of a point in a cluster is calculated based on its memberships in the other clusters. Many limitations of the algorithm arise from this; in the new method, the membership of a point in a cluster depends only on its distance in that cluster. For calculating the membership values, we use a simple expression as given in (5):

$$\mu_j(x_i) = \frac{\max(d_j) - d_{ji}}{\max(d_j)} \qquad (5)$$

where μj(xi) is the membership of xi in the j-th cluster, max(dj) is the maximum distance in cluster cj, and dji is the distance of xi in cluster cj.

The membership function (5) will generate values closer to one for smaller distances (dji) and a membership value of zero for the maximum distance. If the distance of a data point is zero, then the function returns a membership value of one, and thus it overcomes the fourth constraint of the c-means algorithm. The membership values are calculated based only on the distance of a data member in the cluster and, due to this, the modified FCM [25] does not suffer from the first and second constraints of the c-means algorithm.
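The expression (5) vectorizes naturally. A minimal sketch, assuming a matrix of distances with one row per cluster:

```python
import numpy as np

def mfcm_membership(d):
    """Membership per eq. (5); d[j, i] is the distance of point i in cluster j."""
    dmax = d.max(axis=1, keepdims=True)    # max(d_j): the largest distance in each cluster
    return (dmax - d) / dmax               # 1 at zero distance, 0 at the maximum distance
```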

D.  Existing Outlier Detection Method

The basic structure of the method is as follows:

Step 1: Execute the FCM algorithm to produce a set of k clusters as well as the objective function.
Step 2: Determine small clusters and consider the points belonging to these clusters as outliers.
Step 3: For the rest of the points not determined in Step 2, remove each point temporarily and recalculate the objective function.

To detect the outliers in the rest of the clusters (if any), we (temporarily) remove a point from the data set and re-execute the c-means algorithm. If the removal of the point causes a noticeable decrease in the objective function value, the point is considered an outlier; otherwise, it is not.

Since the membership value varies strictly from 0 to 1, initializing the threshold value to 0.5 produces a better result in the cluster centre calculation.

V.  Proposed Method

The proposed method incorporates the idea of using the simple expression (5) to detect outliers.

A.  The Basic Steps of the Proposed Algorithm

1. Perform the modified FCM, where the membership value is calculated using the simple expression (5):

Initialize p = number of clusters
Initialize m (fuzzification parameter)
Initialize Cj (cluster centres)
Initialize α (threshold value)
Repeat
    For i = 1 to n: update μj(xi) applying (5)
    For k = 1 to p:
        Sum = 0
        Count = 0
        For i = 1 to n:
            If μ(xi) is maximum in Ck then
                If μ(xi) >= α
                    Sum = Sum + xi
                    Count = Count + 1
        Ck = Sum / Count
Until the Ck estimates stabilize

2. Determine small clusters and consider the points that belong to these clusters as outliers.

3. For the rest of the points: for each point pi, remove the point, recalculate the objective function applying (5), and if μ(xi) >= α, classify point pi as an outlier and return it back to the set.
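A runnable sketch of step 1 and the final outlier flagging is given below. It assumes Euclidean distances and flags points whose best membership stays below α; the small-cluster check of step 2 and the objective-function re-evaluation of step 3 are omitted for brevity:

```python
import numpy as np

def mfcm_outliers(X, p, alpha=0.5, max_iter=100, seed=0):
    """Modified FCM (eq. 5) with threshold alpha; returns (outlier indices, centres)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=p, replace=False)]
    for _ in range(max_iter):
        d = np.linalg.norm(centers[:, None, :] - X[None, :, :], axis=2)
        dmax = d.max(axis=1, keepdims=True)
        u = (dmax - d) / dmax                  # eq. (5)
        best = u.argmax(axis=0)                # cluster where each point is strongest
        new_centers = centers.copy()
        for k in range(p):
            members = X[(best == k) & (u[k] >= alpha)]   # only confident members count
            if len(members):
                new_centers[k] = members.mean(axis=0)
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # points whose best membership never reaches alpha are flagged as outliers
    return np.where(u.max(axis=0) < alpha)[0], centers
```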

VI.  Methodology

An expert system which employs fuzzy c-means for the diagnosis of diabetes was developed in an environment characterized by the Microsoft Windows XP Professional operating system, and the idea was implemented and executed using the Weka data mining tool. An approach for analyzing clusters to identify meaningful patterns for determining whether or not a patient suffers from diabetes is presented. The system provides a guide for the diagnosis of diabetes within a decision-making framework. The process of the medical diagnosis of diabetes starts when an

individual consults a physician (doctor) and presents a set of complaints (symptoms). The physician then requests further information from the patient. The data collected include the patient's previous state of health, living conditions and other medical conditions. A physical examination of the patient's condition is conducted and, in most cases, a medical observation along with medical tests is carried out on the patient prior to medical treatment. From the symptoms presented by the patient, the physician narrows down the possibilities of the illness that corresponds to the apparent symptoms and makes a list of the conditions that could account for what is wrong with the patient. These are usually ranked in order of possibility (low, moderate and high). The physician then conducts a physical examination of the patient, studies his or her medical records and asks further questions in an effort to rule out as many of the potential conditions as possible. When the list has been narrowed down to a single condition, it is called the differential diagnosis and provides the basis for a hypothesis of what is ailing the patient. Until the physician is certain of the condition present, further medical tests are performed or scheduled, such as medical imaging, scans and X-rays, in part to confirm or disprove the diagnosis or to update the patient's medical history. Upon completion of the diagnosis by the physician, a treatment plan is proposed, which includes therapy and follow-up (further meetings and tests to monitor the ailment and the progress of the treatment, if needed). A review of the diagnosis may be conducted again if the patient fails to respond to a treatment that would normally work. The physician may carry out a precise diagnosis, which requires a complete physical evaluation, to determine whether the patient has diabetes. The examining physician accounts for the possibility of diabetes through an interview, a physical examination and laboratory tests. A thorough diagnostic evaluation may include a complete history of the following: a. ancestral medical history; b. when the symptoms started. Once the medical examination is done, the patient may suffer from one of the following diabetic types [13]: Type 1 diabetes: determined usually by age, i.e., age below 25. Type 2 diabetes: determined by insulin level and age above 25.



Gestational diabetes: determined in pregnant females.

Once the process of medical examination is done, the patient's medical report is generated in either a structured or an unstructured format. These reports can be analyzed and studied for further insights, which lead to more patterns of information or diagnosis methods.

VII.  Results and Discussion

In order to find the effectiveness of the new algorithm, we applied it to a small data set to demonstrate the functioning of the algorithm in detail. The algorithm was also tested with real data collected from a Diabetes Care Centre. To design the Diabetes Risk Assessment System for the diagnosis of diabetes, we designed a system which consists of a set of parameters needed for diagnosis (here, we use 10 basic and major parameters), as presented in Table 1.

Table 1. Parameters for Diabetes Diagnosis

S.No | Symptoms
1 | Often Thirst
2 | Excessive Hunger
3 | Frequent Urination
4 | Sudden Weight Loss
5 | Blurred Vision
6 | Numbness
7 | Slow Healing of Yeast Infection
8 | Hard-to-Heal Infection
9 | Dry or Itchy Skin
10 | Age

Table 2. Classes (Organ Affected)

S.No | Risk of Organ Failure
1 | Heart
2 | Vision
3 | Kidney
4 | Nerves
5 | Gums or Teeth

The patients who suffer from any of the symptoms listed in Table 1 have every chance of suffering one of the organ failures listed in Table 2.

A. Diabetes Datasets

The database used for this work is a diabetes database. The data sets were obtained from a diabetes health care organization in Chennai called the SS Diabetes Diagnostic Centre. In this section, we examine the performance of the proposed algorithm and compare it with the FCM algorithm on the diabetes database which was available at the Diabetes Care Centre.

In the dataset constructed for this domain, the age feature simply represents the age of the patient. Every other feature was given a degree in the range 0 to 3. Here, 0 indicates that the feature was not present, 3 indicates the largest amount possible, and 1 and 2 indicate the relative intermediate values.

The names and ID numbers of the patients were recently removed from the database.

Number of Instances: 75
Number of Attributes: 11

The different forms of the symptoms of diabetes constitute the parameters of the knowledge base. The fuzzy set of parameters is represented by P, which is defined as {P1, P2, ..., Pn}, where Pj represents the j-th parameter and n represents the number of parameters (here n = 10). The fuzzy c-means algorithm provides the rules for partitioning the patients into a number of homogeneous classes with respect to a suitable similarity measure. In this paper, the patients were classified into the five classes mentioned in Table 2.

Table 3. Outlier Membership

Methods | Final Cluster Centers | Outlier Membership
C-Means | C1: 2.79, 47.65 | 0.37
C-Means | C2: 3.29, 204.4 | 0.63
New Method | C1: 1.52, 166.20 | 0
New Method | C2: 5.49, 172.11 | 0

The new algorithm is capable of giving very low membership values to the outlier points. From Table 3, it is clear that the new fuzzy clustering method we propose is better than the conventional c-means algorithm in handling outlier points and in the calculation of the new cluster centres. Finally, after detecting and removing the outliers, FCM is performed once again to derive the possibility of persons being prone to a future risk of organ failure, as shown in Table 4.

Table 4. Evaluation Result

Classes | Clustered Instances
Heart | 17 (23%)
Vision | 26 (35%)
Kidney | 14 (19%)
Nerves | 15 (20%)
Gums & Teeth | 3 (4%)

VIII.  Conclusion

A good clustering algorithm produces high-quality clusters with low inter-cluster similarity and high intra-cluster similarity. Many conventional clustering algorithms, like the k-means and fuzzy c-means algorithms, achieve this on crisp and highly structured data. But they have difficulties in handling unstructured data, which often contain outlier data points. The proposed new fuzzy clustering algorithm combines the positive aspects of both crisp and fuzzy clustering algorithms. It is more efficient than both the k-means and fuzzy c-means algorithms in handling natural data with outlier points. It achieves this by assigning very low membership values to the outlier points. However, the algorithm has limitations in exploring highly structured crisp data which is free from outlier points. The efficiency of the algorithm has to be further tested on a comparatively larger data set.

References

[1] Ahmed, S. R. (2004), "Applications of data mining in retail business", Information Technology: Coding and Computing, 2, 455–459.
[2] Carrier, C. G., & Povel, O. (2003), "Characterising data mining software", Intelligent Data Analysis, 7, 181–192.
[3] Kracklauer, A. H., Mills, D. Q., & Seifert, D. (2004), "Customer management as the origin of collaborative customer relationship management", Collaborative Customer Relationship Management – Taking CRM to the Next Level, 3–6.
[4] Mitra, S., Pal, S. K., & Mitra, P. (2002), "Data mining in soft computing framework: A survey", IEEE Transactions on Neural Networks, 13, 3–14.
[5] Parvatiyar, A., & Sheth, J. N. (2001), "Customer relationship management: Emerging practice, process, and discipline", Journal of Economic & Social Research, 3, 1–34.
[6] M.-S. Chen, J. Han, and P. S. Yu (1996), "Data mining: an overview from a database perspective", IEEE Transactions on Knowledge and Data Engineering.
[7] T. Hastie, R. Tibshirani, and J. Friedman (2001), "The Elements of Statistical Learning", Springer.
[8] R. Maitra (2002), "A statistical perspective on data mining", J. Ind. Soc. Prob. Statist.
[9] M. B. and D. J. Hand (eds.) (1999), "Intelligent Data Analysis", Springer-Verlag New York, Inc.
[10] U. M. Fayyad, G. Piatetsky-Shapiro, and P. Smyth (1996), "Knowledge discovery and data mining: Towards a unifying framework", Knowledge Discovery and Data Mining, pp. 82–88.
[11] Richard, L. S., and Ronald, M. E. (2008), "Lessons from Theory & Research on Clinician-Patient Communication", in Karen G., Barbara K. R., K. Viswanath (eds.), Health Behavior and Health Education: Theory, Research, and Practice, 4th edition, (11), pp. 236–269.
[12] Pavel Berkhin, "Survey of Clustering Data Mining Techniques".
[13] U. Fayyad and R. Uthurusamy (1996), "Data mining and knowledge discovery in databases", Commun. ACM, vol. 39, pp. 24–27.
[14] E. Cox (2005), "Fuzzy Modeling and Genetic Algorithms for Data Mining and Exploration", Elsevier.
[15] Maria Halkidi, "Quality Assessment and Uncertainty Handling in Data Mining Process".
[16] R. Cruse, C. Borgelt, "Fuzzy Data Analysis: Challenges and Perspective".
[17] J. C. Bezdek (1973), "Fuzzy Mathematics in Pattern Classification", Ph.D. thesis, Center for Applied Mathematics, Cornell University, Ithaca, N.Y.
[18] Gath, I. and A. Geva (1989), "Fuzzy Clustering for the Estimation of the Parameters of the Components of Mixtures of Normal Distributions", Pattern Recognition Letters, 9, pp. 77–86.
[19] Cutsem, B. and I. Gath (1993), "Detection of Outliers and Robust Estimation using Fuzzy Clustering", Computational Statistics & Data Analyses, 15, pp. 47–61.
[20] Jiang, M., S. Tseng and C. Su (2001), "Two-phase Clustering Process for Outlier Detection", Pattern Recognition Letters, 22: 691–700.
[21] Acuna, E. and Rodriguez, C. (2004), "A Meta Analysis Study of Outlier Detection Methods in Classification", technical paper, Department of Mathematics, University of Puerto Rico at Mayaguez; in proceedings IPSI, Venice.
[22] Almeida, J., L. Barbosa, A. Pais and S. Formosinho (2007), "Improving Hierarchical Cluster Analysis: A New Method with Outlier Detection and Automatic Clustering", Chemometrics and Intelligent Laboratory Systems, 87: 208–217.
[23] Yoon, K., O. Kwon and D. Bae (2007), "An Approach to Outlier Detection of Software Measurement Data using the K-means Clustering Method", First International Symposium on Empirical Software Engineering and Measurement (ESEM 2007), Madrid.
[24] Imianvan, A. A. and Obi, J. C. (2011), "Fuzzy Cluster Means Expert System for the Diagnosis of Tuberculosis", Vol. 11, Version 1.0, April, Global Journals Inc. (USA).
[25] Thomas Binu, Raju (2009), "A Novel Fuzzy Clustering Method for Outlier Detection in Data Mining", International Journal of Recent Trends in Engineering, Vol. 1, No. 2.



Web Service Assortment through Genetic Algorithm and XML

Deeptha R and Rajeswari Mukesh

Abstract — Owing to the rapid development of Web tools, web-based applications today use diverse programming languages and platforms. Web service technologies were introduced to ease the integration of web applications on heterogeneous platforms. Nevertheless, little work has been done on issues related to the quality of composite services. The quality of Web services has received much attention as it relates to the service discovery process. This study uses a selection model along with the concept of quality of service (QoS) in order to improve the quality-of-service performance of current Web services in the discovery process. It also uses the selection model as the basis for selecting Web services, leading to simulations of the aggregate QoS performance when the services are executed in sequence. However, the computation of the optimal service composition requires time exponential in the number of services; because of this, a genetic algorithm is used to quickly discover the best-fitting service composition. Lastly, we score and rank each service composition based on the service requesters' preferences towards QoS. The results of the experiment show that considering workflow QoS in selecting a service composition improves the actual QoS performance. At the same time, using the genetic algorithm for service composition delivers an improvement in the resolution time.

Index Terms — Web services, Genetic algorithm, Workflow, XML, QoS, Service selection

Deeptha R and Rajeswari Mukesh are in School of Computing Sciences, Hindustan University, Chennai, India. (e-mail: r.deeptha@gmail.com, rajeswarim@hindustanuniv.ac.in)

I.  Introduction

As new generations of Web applications and technologies arise, different types of web service providers arise in relation to these new technologies. To allow these applications or services to operate across heterogeneous operating systems, numerous Web services specifications were developed. These specifications include WSDL, XML, SOAP, and UDDI, the data exchange standards linking to Web services that allow one to find Web services based on one's needs. The more web services become obtainable, the more difficult it is to find the most suitable service for a specific request. Although Web services deliver a universal data exchange platform and offer methods that characterize the service discovery interfaces, a service requester may not be able to find appropriate Web services simply by using search keywords. Even if a service is found that caters to the functional requirements, it is unclear whether the quality of that service can satisfy the requester's needs. Though the existing UDDI structure does not yet consider the quality of service (QoS) when searching for services, there is a need to consider QoS [1]. Though previous studies have largely focused on the quality of a single service, much work has previously been done in the field of Web services; real-world requests frequently need to incorporate services into a workflow [2]. This study seeks to provide a path for service requesters to find the service arrangement that delivers the expected QoS. Once a service is composed of several sub-tasks, each sub-task will affect the overall quality of the composite service. Though many models have been devised for Web service selection, making a composite Web service not only correct and reliable but also optimal in QoS remains a significant challenge.



II.  Related Work

A.  Web Service Selection

Some web service tools are available that select reliable web services. Fig. 1 shows the various service selection approaches presently available for web service selection. When selecting an implementation for one web service, a particular implementation for another web service in the composite web service framework should also be selected [3]. For example, when structuring a house-broker web service, if we select a particular broker web service that only accepts payments made by MasterCard, we first need to select a payment web service that accepts MasterCard. This kind of constraint is called a dependency constraint. Furthermore, in web service selection there might be certain web service operations that conflict with each other. When selecting an implementation for one web service, we must not select a particular implementation for another web service in the composite web service framework. This kind of constraint is called a conflict constraint [4]. In the web service selection system, both dependency constraints and conflict constraints must be considered.

[Fig. 1. Web Service Selection Hierarchy. The figure organizes selection approaches into WSDL-based (structure, text, keyword, string, signature) and ontology-based semantics (tree, graph), with WSMO, OWL-S and WSDL-S as semantic variants.]

Numerous optimal web service selection problems have been intensively studied and different methods have been suggested in past studies [5], [6], [7], [8]. These optimal web service selection and composition problems with constraints remain open. From the computational point of view, the web service selection problem is usually a constrained, combinatorial optimization problem. Thus genetic algorithms may be effective and efficient in solving such problems.

Surveying the current works, we classify the existing approaches and research into three types:
●● Active workflow provision engagement
●● QoS-attentive service selection
●● Transactional-attentive selection

The approaches belonging to the first class, as in [9][10][11][12][13], mostly address the problem occurring at the major step in design time: they automate the production of the workflow. These existing studies do not use the perfect architecture and constraints. The importance of integrating transactional and QoS-attentive service selection has been stated in [14], where a selection algorithm taking into account transactional and QoS aspects is proposed. It is a remarkable work on integrating transactional and QoS-attentive selection. However, it still asks for a pre-defined, handcrafted workflow, and it only supports locally optimal QoS-attentive selection because it is done in a transactional manner. According to this classification, most approaches have only one of the characteristics, namely dynamic workflow, transactional, or QoS attentive. In other words, each approach only automates a part of the design-time process. Such a detailed design process contains a sequential path, and not an optimal selection of web services. Therefore, our objective is to provide a one-step, fully automated approach for Web service composition at design time.

III.  Web Service Assortment Through GA and XML

A.  Quality of Service IR Model

QoS can be used to express the non-functional requirements of network community research. Some studies have defined QoS in distributed systems, e.g., how one can express the desired QoS for a system, as well as how requests can be delivered to a resource manager to satisfy QoS requirements. QoS originated as a research topic in the fields of networking, system middleware and real-time computing. At present, related research on the quality of Web services has contributed greatly to improving that quality [15]. Fig. 2 shows the GA-based implementation of the house broker architecture. Although the broker mechanism can use the service's basic description to easily obtain the required services, this only gives access



to information recorded during service registration and does not provide any indication of the state or quality of the service. Hence, in real implementations of the service, circumstances occasionally arise where the service is unable to function or the quality fails to satisfy users' requirements. As mentioned in [16], several QoS problems are encountered in Web service facilities, such as how to determine whether services conform to the performance requirements, and whether they can provide a high degree of consistency for carrying out the key tasks of a system. The discussion there focuses on QoS issues and proposes a new Web service discovery framework and a discovery model. In our proposed house broker architecture, web services are executed according to the best optimal solution of the services given by the user [17]. Once the house broker service is logged in, it evaluates the possible web services and chooses the optimal solution.

B.  Genetic Algorithm

[Fig. 2. GA based Service Selection for House Broker. The block diagram shows a service requester issuing a request with QoS constraints to a service provider, which applies a GA over XML service descriptions to pick a web service; a QoS monitoring loop feeds back until service satisfaction is reached.]

The genetic algorithm (GA) is a search technique developed by Holland [1970] and used to find exact or approximate solutions to optimization and search problems [18]. Genetic algorithms are a particular class of evolutionary algorithms that use techniques inspired by evolutionary biology, incorporating inheritance, mutation, selection, and crossover [19]. Control parameters of genetic algorithms include the population size, the crossover and mutation rates, the chromosome encoding, and the fitness function, in addition to the termination condition [20]. Genetic algorithms can be employed in the search for the optimal composition [21]. Though binary genetic algorithms can find the optimal solution in this case, too much time is spent in complicated encoding and decoding procedures, there are too many limitations, and a large number of bits is required to represent the solutions; as a result, encoding and decoding will slow down the searching speed of the system. On the other hand, real-valued genetic algorithms do not have this drawback. In addition, most real-world optimization problems need real-valued parameters [22]. Using real-valued genetic algorithms to solve problems not only makes it more convenient to manage the parameters, but also eliminates the complicated computation of encoding and decoding, and further enhances the correctness of the system. Our implementation applies real-valued coding schemes to the dynamic coding problems in various workflow states and features an enhanced genetic algorithm to select optimal web service composition tactics from many complex strategies on the basis of global QoS constraints [23]. A superior fitness function and a mutation policy were designed in the work. The outcomes revealed that the enhanced genetic algorithm can efficiently derive a composite service plan that fulfils the global QoS provisions. Also, the convergence of the genetic algorithm was improved, presenting a quickly convergent population-diversity treatment for web service selection with global QoS limitations. In that study, an improved initial-population policy and an evolution policy were designed and implemented on a variety of populations and a virtual-matrix coding scheme. This GA implementation will choose the optimal broker service according to the QoS.
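To make the mechanism concrete, below is a minimal GA sketch for composing a workflow from candidate services. The QoS utility table, the population size, and the operators are illustrative assumptions made for the example, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical QoS table: qos[i, j] = aggregated QoS utility of candidate j for sub-task i
qos = rng.random((5, 8))          # 5 workflow sub-tasks, 8 candidate services each

def fitness(chromosome):
    """Workflow QoS of one composition (one candidate index per sub-task)."""
    return qos[np.arange(len(chromosome)), chromosome].sum()

def ga_select(pop_size=30, generations=100, mutation_rate=0.1):
    tasks, candidates = qos.shape
    pop = rng.integers(0, candidates, size=(pop_size, tasks))
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, tasks)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            if rng.random() < mutation_rate:                 # point mutation
                child[rng.integers(tasks)] = rng.integers(candidates)
            children.append(child)
        pop = np.array(children)
    best = max(pop, key=fitness)
    return best, fitness(best)

composition, score = ga_select()
```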

IV.  Conclusion

Service discovery is a key aspect in the SOA research community. In this work, we have proposed a practical and adaptive Web service discovery framework built on ontology comparison. When the framework is built, the web service requestor, after accessing the desired



web service, can find a list of comparable Web services in order of similarity. To match QoS more closely to the requirements of service requesters, this study applies a genetic algorithm to optimize the simulated derivation of workflow QoS. Based on the weights supplied by service requesters, we advance the discovery process to composite services. Our tests show that the actual QoS performance of composite services is better when considering the workflow than when not considering it in the initial stage of service selection. Using the weights can also move the selection closer to the QoS preferences of service requesters. The tests also show that the genetic algorithm can reduce the time required to obtain the optimal service arrangement. It might initially be supposed that when there is only one workflow sub-task, it would not matter which selection plan is adopted. However, simulation shows that even when the service quantity is one, when a recursive process is applied more than once, different selection policies result in the selection of different tasks. As an outcome, the consideration of complete workflow QoS is important in the primary phase of service selection.

V.  Future Work

The proposed evaluation model has shown that the performance of the new genetic algorithm may not be as stable as that of the testing of the web service, and that it considers only the front end of the system. Future work will focus on improving the performance of the new genetic algorithm with database connectivity and testing functionalities.

References

[1] B. Benatallah, L. Z. Zeng (2004), "QoS-Aware Middleware for Web Services Composition", IEEE Transactions on Software Engineering, 30(5), pp. 311–327.
[2] W. Gaaloul, S. Bhiri, and M. Rouached (2010), "Event-Based Design and Runtime Verification of Composite Service Transactional Behavior", IEEE Transactions on Services Computing, vol. 3, pp. 32–45.
[3] L. Aversano, M. D. Penta, and K. Taneja (2006), "A genetic programming approach to support the design of service compositions", International Journal of Computer Systems Science & Engineering, vol. 21, pp. 247–254.
[4] E. Silva, P. L. Ferreira, and M. van Sinderen (2009), "Supporting Dynamic Service Composition at Runtime based on End-user Requirements", Proc. of the 1st International Workshop on User-Generated Services, Stockholm, Sweden.
[5] F. Lécué, A. Léger (2006), "A formal model for Web service composition", Proc. of the 13th International Conference on Concurrent Engineering, pp. 37–46.
[6] K. Fujii, T. Suda (2009), "Semantics-based context-aware dynamic service composition", ACM Transactions on Autonomous and Adaptive Systems, vol. 4, pp. 1–31.
[7] K. Fujii and T. Suda (2006), "Semantics-based dynamic Web Service composition", International Journal of Cooperative Information Systems, pp. 293–324.
[8] S. Lajmi, C. Ghedira, and K. Ghedira (2009), "CBR Method for Web Service Composition", Advanced Internet Based Systems and Applications, vol. 4879, E. Damiani (ed.), Springer Berlin/Heidelberg, pp. 314–326.
[9] Junli Wang, Zhijun Ding, Changjun Jiang (2006), "GAOM: Genetic Algorithm based Ontology Matching", Proc. of the IEEE Asia-Pacific Conference on Services Computing, pp. 617–620.
[10] Fuyong Yuan, Jian Liu, Chunxia Yin, Yulian Zhang (2008), "A Novel Methodology for Web Services Discovery in Gnutella-like Networks", Third International IEEE Conference on Signal Image Technologies and Internet Based System, pp. 231–238.
[11] Eyhab Al-Masri and Qusay H. Mahmoud (2007), "WSCE: A Crawler Engine for Large-Scale Discovery of Web Services", IEEE International Conference on Web Services, pp. 1104–1111.
[12] Uddam Chukmol, Aïcha-Nabila Benharkat, Youssef Amghar (2008), "Enhancing Web Service Discovery by using Collaborative Tagging System", IEEE 4th International Conference on Next Generation Web Services Practices, pp. 54–59.
[13] Colin Atkinson, Philipp Bostan, Oliver Hummel, Dietmar Stoll (2007), "A Practical Approach to Web Service Discovery and Retrieval", IEEE International Conference on Web Services, pp. 241–248.
[14] Shen Derong, Yu Ge, C. Yu, Kou Yue, N. Tiezheng (2005), "An Effective Web Services Discovery Strategy for Web Services Composition", Proc. of the 5th International Conference on Computer and Information Technology, pp. 257–263.
[15] Mohamed Gharzouli, Mahmoud Boufaida (2009), "A Generic P2P Collaborative Strategy for Discovering and Composing Semantic Web Services", Fourth International Conference on Internet and Web Applications and Services, pp. 449–454.
[16] Chen Wu, Elizabeth Chang (2008), "Searching services on the Web: A public Web services discovery approach", Third International IEEE Conference on Signal Image Technologies and Internet Based System, pp. 321–328.
[17] J. Wang, Z. Ding, C. Jiang (2006), "GAOM: Genetic Algorithm based Ontology Matching", Proc. of the IEEE Asia-Pacific Conference on Services Computing, pp. 617–620.
[18] Fuyong Yuan, Jian Liu, Chunxia Yin, Y. Zhang (2008), "A Novel Methodology for Web Services Discovery in Gnutella-like Networks", Third International IEEE Conference on Signal Image Technologies and Internet Based System, pp. 231–238.
[19] Eyhab Al-Masri, Qusay H. Mahmoud (2007), "WSCE: A Crawler Engine for Large-Scale Discovery of Web Services", IEEE International Conference on Web Services, pp. 1104–1111.
[20] Chengwen Zhang, Y. Ma (2009), "Dynamic Genetic Algorithm for Search in Web Service Compositions Based on Global QoS Evaluations", Eighth IEEE International Conference on Embedded Computing.
[21] Mohamed Gharzouli, M. Boufaida (2009), "A Generic P2P Collaborative Strategy for Discovering and Composing Semantic Web Services", 4th International Conference on Internet and Web Applications and Services, pp. 449–454.



Improving the Efficiency of Impulse Noise Estimation

S.V. Priya and R. Seshasayanan

Abstract — Impulse noise estimation is as important as the filtering process, as the efficiency of filtering depends on the estimation techniques employed. In this paper, we propose certain enhancements to improve the efficiency of impulse noise estimation techniques. As the orientation of noise plays a major role in the estimation process, we utilize directional sub-windows, or kernels, to find the directivity of the noise distribution. Based on the outputs delivered by the individual kernels, the decision-making system decides if the pixel is corrupted by noise. The confidence level of the estimation process is also stored, along with the noise removal, for further processing. Thus more flexibility is given to the noise estimator through the directional kernels to improve the efficiency of the estimation process. We also consider the MAD and the S-Estimate as the two major estimators before a firm decision is made. The MAD and S-Estimates are calculated for all the kernels, and the decision is made based on the outputs of each estimator. Experiments reveal that the noise estimation efficiency can reach as high as 90% detection even for highly corrupted images.

Index Terms — Directional windows, Impulse noise detection, ROAD, MAD.

I.  Introduction

The image statistic proposed by Yiqiu Dong et al. [1] is efficient at detecting random-valued impulse noise even if the image is corrupted by 60% noise, and a two-stage algorithm was used to denoise the images. A universal

S.V. Priya is a Research Scholar, Faculty of Electronics Engineering, Sathyabama University, Chennai, India (e-mail: srisvpriya@gmail.com). R. Seshasayanan is Associate Professor, Dept. of ECE, Anna University, Chennai, India.

noise filter introduced by Roman Garnett et al. [2] used a local statistic based on the rank-ordered absolute difference (ROAD) and was used for denoising mixed noise containing both random-valued impulse noise and Gaussian noise. The directional-difference-based noise detector proposed by Xuming Zhang et al. used four directional windows with a predefined threshold value and compared the absolute means of those four directional windows with the pixel of interest to classify the pixel [3, 13]. The iterative application of a non-linear function has been used to classify pixels; the applied function progressively improves the separation of gray levels, thereby increasing the difference between corrupted and uncorrupted pixels [4]. A triangle-based linear interpolation algorithm, with tuning parameters optimized using the differential evolution algorithm, has been used to classify the pixels and to enhance details while filtering [5]. The sorted quadrant median vector (SQMV) scheme proposed by Chih-Hsing Lin et al. [7] utilizes edge and texture information to classify the corrupting noise as impulse noise, Gaussian noise, or noise-free. The advanced boundary discriminative noise detection algorithm compares the differences between the pixel of interest and the pixels with high and low intensity in the window considered [7], and hence the current pixel can be classified. The pixels are divided into four groups using the robust outlyingness ratio (ROR) by Bo Xiong et al. [8], and different decision rules are used to find the corrupted pixels using an iterative framework. The decision-tree-based impulse noise detector used by Chih-Yuan Lien et al. [9], along with an edge-preserving filter, reconstructs the intensity values of noisy pixels, and the corresponding VLSI architecture runs at a speed of 200 MHz using TSMC 0.18 µm technology. A difference-type noise detector proposed by S.-Q. Yuan et al. [10] reduces misclassification and proves to be efficient. A novel operator has been proposed that acts as a hybrid filter obtained by appropriately combining a median filter,



an edge detector, and a neuro-fuzzy network; the most distinctive feature of this operator over most others is that it offers excellent line, edge, detail, and texture preservation while, at the same time, effectively removing noise from the input image [14]. A filtering scheme based on contrast enhancement within the filtering window has been proposed, which filters the image iteratively until a stopping criterion is met [15]. A filtering scheme based on threshold Boolean filtering, operating on the binary slices of an image, is proposed in [16]. The modified boundary discriminative noise detection (BDND) filters form a powerful class that effectively filters images corrupted by impulse noise [17]. To make an accurate decision, an iterative switching median is used along with two robust and reliable decision criteria [18]. A switching bilateral filter (SBF) with a texture and noise detector for universal noise removal is proposed in [19]; its sorted quadrant median vector (SQMV) scheme includes important features such as edge and texture information.

II.  Review of the State of the Art Impulse Noise Detectors

A.  Statistics based Noise Detectors

Various statistics-based noise detectors [4, 13] have emerged with robust noise detection performance. These detectors play a vital role in locating the noisy pixels in a given image, so that the detected noisy pixels can be replaced with predicted pixel values by suitable filters. The most widely used recent noise detectors are ROAD, ROLD, MAD and the S-Estimate. A detailed analysis of these techniques is available in the literature.

III.  Impact of Directional Sub-Windows on Noise Detection

We use a 5×5 main window for detecting noise pixels even in highly corrupted images. Any of the thirteen sub-windows can be chosen from the 5×5 main window based on the spatial distribution of the noise pixels. By decomposing the main window into various sub-windows, the corrupted pixels on the edges can be detected. Thus, including directionality in detecting the corrupted pixels certainly improves the detector efficiency.

The sub-windows are denoted by W1(i, j), W2(i, j), ..., W13(i, j); each is centered around the pixel (i, j), and they are given as follows:

W1(i, j) = {(i-2, j+2), (i-1, j+1), (i-1, j+2), (i, j+1), (i, j+2), (i+1, j+1), (i+1, j+2), (i+2, j+2)}
W2(i, j) = {(i+1, j-1), (i+1, j+1), (i+1, j+2), (i+2, j-2), (i+2, j-1), (i+2, j), (i+2, j+1), (i+2, j+2)}
W3(i, j) = {(i-2, j-2), (i-1, j-2), (i-1, j-1), (i, j-2), (i, j-1), (i+1, j-2), (i+1, j-1), (i+2, j-2)}
W4(i, j) = {(i-2, j-2), (i-2, j-1), (i-2, j), (i-2, j+1), (i-2, j+2), (i-1, j-1), (i-1, j), (i-1, j+1)}
W5(i, j) = {(i+1, j-1), (i+1, j), (i+1, j+1), (i+2, j-1), (i+2, j), (i+2, j+1)}
W6(i, j) = {(i-1, j-1), (i-1, j), (i-1, j+1), (i-2, j-1), (i-2, j), (i-2, j+1)}
W7(i, j) = {(i-1, j+1), (i-1, j+2), (i, j+1), (i, j+2), (i+1, j+1), (i+1, j+2)}
W8(i, j) = {(i-1, j-1), (i-1, j), (i-1, j+1), (i, j-1), (i, j+1), (i+1, j-1), (i+1, j), (i+1, j+1)}
W9(i, j) = {(i-1, j-2), (i-1, j-1), (i, j-2), (i, j-1), (i+1, j-2), (i+1, j-1)}
W10(i, j) = {(i-2, j+1), (i-2, j+2), (i-1, j), (i-1, j+1), (i-1, j+2), (i, j+1)}
W11(i, j) = {(i, j-1), (i+1, j-2), (i+1, j-1), (i+1, j), (i+2, j-2), (i+2, j-1)}
W12(i, j) = {(i, j+1), (i+1, j), (i+1, j+1), (i+1, j+2), (i+2, j+1), (i+2, j+2)}
W13(i, j) = {(i-2, j-2), (i-2, j-1), (i-1, j-2), (i-1, j-1), (i-1, j), (i, j-1)}



Fig. 1. Sub-Windows with various shapes

The coordinates of the sub-windows are shown in fig. 1 and are expressed as equations above. From an exhaustive simulation performed on various images, we found that these thirteen sub-windows are suitable for the 5×5 window considered. For increased noise density, the size of the window is increased while preserving the shape of the sub-windows. A decision-based algorithm tracks the output of each of the sub-windows based on the hierarchical ROAD (HR) value, i.e., ROAD based on the cumulative rank (CR).
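To make the role of the directional kernels concrete, the following Python sketch (our own illustration, not the authors' implementation) scores one sub-window with a ROAD-style statistic and combines the kernels into a decision; the threshold value, the default CR and the min-over-kernels rule are assumptions made for illustration.

import numpy as np

# W8 from the definitions above: the eight immediate neighbours of (i, j).
W8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def road(values, center, cr):
    # Sum of the cr smallest absolute differences between the centre
    # pixel and the pixels of one directional sub-window.
    diffs = np.sort(np.abs(values.astype(float) - float(center)))
    return diffs[:cr].sum()

def is_noisy(img, i, j, sub_windows, cr=5, threshold=40.0):
    # threshold is a hypothetical tuning parameter, not from the paper.
    center = img[i, j]
    scores = [road(np.array([img[i + di, j + dj] for di, dj in w]), center, cr)
              for w in sub_windows]
    # If no direction fits the centre pixel (all scores large), flag it.
    return min(scores) > threshold

Taking the minimum over the kernels means a pixel lying on an edge still matches at least one directional sub-window and is therefore not flagged, which is the intuition behind the directional decomposition.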

IV.  Iterative and Adaptive Estimation Using the S-Estimate

To estimate the Rician noise in a Rician-distributed image, we follow an iterative and adaptive procedure which involves calculating the S-estimate while dynamically changing the window size from 5×5 to 21×21, based on the number of noise pixels in the given local region. The S-estimate is calculated by taking a 5×5 window with various kernel directions, and based on the threshold value (T), the centre pixel in the original image is flagged noisy. Based on the S-estimates computed for all the directional sub-windows, the pixel can be flagged as corrupted or not.

V.  Algorithm for Adaptive S-Estimate Based Estimation of Rician Noise

The S-estimate calculated for a pixel of interest is not robust enough if the window size considered is constant; the performance of the estimation can be improved by choosing the size of the kernel appropriately. The window size can be varied depending on the noise densities and can be dynamically updated for each centre pixel considered. The whole process is carried out for the complete image, covering all the pixels segmented from the original image which have the foreground details. The noise detection for the various noise densities proceeds as follows (a sketch of the procedure is given after the steps):

1.  x(i, j) ∈ (Tmin, Tmax), where Tmin and Tmax are the threshold values obtained from the histogram of the given image.

2.  S(k) = med_i { med_j | t_i - t_j | }, calculated over the window centered at (x, y), where x, y = -(2n+1) to +(2n+1) and k = 1, 2, 3, ..., (2n+1)^2.

3.  The S-estimate S(k) is synthesized for window sizes of different lengths. The optimum window size for the local region is fixed based on the S-estimate and the threshold values; thus dynamically changing the window size yields the best estimation for various noise levels.
4.  Incremental S-index values = sort the estimated S values [S(1), S(2), ..., S((2n+1)^2)].
5.  max(i, j) = max(S(k)). From the maximum obtained S(k), the threshold value, and the value of the centre pixel, the pixel can be flagged as corrupted or uncorrupted by Rician noise. This procedure can be iterated until the stopping criterion has been met or for a predefined number of iterations.
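A Python sketch of steps 2-5 follows (our own illustration); the window schedule mirrors the 5×5 to 21×21 range stated in section IV, while the final decision rule, comparing the centre pixel's deviation from the local median against T times the S-estimate, is a simplified assumption.

import numpy as np

def s_estimate(window):
    # S(k) = med_i { med_j | t_i - t_j | } over all pixels of the window.
    t = window.astype(float).ravel()
    inner = [np.median(np.abs(ti - t)) for ti in t]
    return np.median(inner)

def flag_rician(img, i, j, T, sizes=(5, 9, 13, 17, 21)):
    # Grow the window from 5x5 towards 21x21 and stop at the first
    # usable (non-zero) scale estimate.
    for s in sizes:
        r = s // 2
        win = img[i - r:i + r + 1, j - r:j + r + 1]
        S = s_estimate(win)
        if S > 0:
            return abs(float(img[i, j]) - np.median(win)) > T * S
    return False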

VI.  Comparison of Noise Detection Performance

The quality of a noise detector can be assessed by its accuracy in detecting corrupted pixels; it should also not misinterpret uncorrupted pixels as corrupted. Hence we assess the quality of the noise detector using two fundamental metrics [3]:
•  Nf denotes the number of noise-free pixels wrongly classified as corrupted.
•  Nm denotes the number of corrupted pixels misclassified as noise-free.


58  HINDUSTAN JOURNAL, VOL. 6, 2013

Using the Nf and Nm values for a given image, the quality assessment of the detector can be performed. It is easy to detect a corrupted pixel when the image is corrupted only by salt-and-pepper noise.
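Given a ground-truth noise mask and the detector output, the two metrics reduce to simple counts; a minimal sketch (our own, with hypothetical boolean mask inputs):

import numpy as np

def detector_quality(true_noise, detected):
    # Nf: noise-free pixels wrongly classified as corrupted.
    # Nm: corrupted pixels misclassified as noise-free.
    nf = int(np.sum(~true_noise & detected))
    nm = int(np.sum(true_noise & ~detected))
    return nf, nm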

Fig. 2. Dependency between rank of ROAD and the amount of noise added

If the image is corrupted by random-valued impulse noise, the detection process is tedious, as the randomness of the pixel values increases the complexity of detection. There are a few well-known techniques, such as the SD-ROM and ACWM filters, with good detector performance, but they fail to maintain both the Nf and Nm values simultaneously. Fig. 2 shows the plot between the applied noise percentage and the number of noise pixels detected for various ranks. The rank value indicates the least absolute difference values considered while calculating ROAD. We call this DROAD, for Dynamic ROAD, since it dynamically selects the number of absolute differences that should be considered when declaring a pixel corrupted. The plot in Fig. 3 clearly shows that the detection strictly depends on the rank value chosen for various noise percentages. From Fig. 2, the rank value of 5 has peak detection for 55% noise, which is again confirmed in Fig. 3.

Fig. 3. Number of corrupted pixels detected for various ranks

Table 1. Noise detected (%) for various Cumulative Ranks (CR)

Noise level   CR=1    CR=2    CR=3    CR=4    CR=5    CR=6, 7
5%            22.85   39.65   64.15   85.24   97.31   99.10
10%           22.80   39.06   62.32   82.49   95.08   97.20
15%           22.46   38.40   60.25   79.63   93.13   95.51
20%           22.11   37.64   58.34   76.95   91.36   94.18
25%           21.35   36.40   55.93   73.92   89.44   92.82
30%           20.51   34.63   53.20   70.87   87.52   91.29
35%           19.57   33.04   50.64   68.00   85.62   89.77
40%           18.61   31.19   47.92   65.06   83.55   87.87
45%           17.70   29.20   45.18   62.33   81.49   86.06
50%           16.79   27.15   42.48   59.66   79.27   84.01
55%           16.11   25.68   40.39   57.69   77.44   82.12
60%           15.51   24.08   38.22   55.75   75.34   79.97
65%           14.93   22.85   36.61   54.15   73.30   77.74
70%           14.41   21.64   34.95   52.41   70.99   75.16
75%           14.26   21.12   34.33   51.79   69.19   73.00
80%           13.98   20.68   34.03   51.10   67.15   70.56

The plot between various rank values and the number of corrupted pixels detected by the detector is depicted in Fig. 4. From the simulation conducted, rank values of 3 and 5 are suitable for better detector performance. To further increase the efficiency of the detector, we compute the hierarchical dynamic ROAD (HDROAD), where the decision is based on the cumulative rank (CR) value. CR=1 indicates that only the ROAD value corresponding to rank 1 is considered by the detector; CR=7 indicates that the ROAD values pertaining to ranks 1 to 7 are considered. From the simulation performed, the detection performance increases as the CR value increases and saturates around CR=5, as shown in Fig. 4. This technique is called HDROAD, where the dynamicity is in terms of window size, shape and the rank of ROAD. Table 1 shows the noise detected for various CR values. Table 2 compares the estimation efficiency of HDROAD and the S-Estimate.

Fig. 4. Detector performance for various Cumulative Ranks (CR)


Table 2. Comparison of HDROAD and S-Estimate based Estimation Techniques

Noise level   HDROAD with best CR   S-Estimate with dynamic kernel
5%            99.10                 99.33
10%           97.20                 98.02
15%           95.51                 96.21
20%           94.18                 95.30
25%           92.82                 93.27
30%           91.29                 92.31
35%           89.77                 90.56
40%           87.87                 89.03
45%           86.06                 87.29
50%           84.01                 85.32
55%           82.12                 83.75
60%           79.97                 81.92
65%           77.74                 78.99
70%           75.16                 77.07
75%           73.00                 75.76
80%           70.56                 72.43

VII.  Conclusion

The noise estimation algorithm based on HDROAD with the best CR and the estimation based on the S-Estimate were compared. The S-Estimate is applied to all the directional sub-windows, and the best resulting value, and hence the best decision, is carried forward. The S-Estimate applied over various directional kernels certainly improves the efficiency of noise estimation. The improvement in noise estimation efficiency gives more freedom to improve the efficiency of the noise removal filters. The calculation of the S-Estimate is computationally heavy and needs to be optimized to make it suitable for fast estimation in real-time scenarios.

References

[1] Yiqiu Dong; Chan, R.H.; Shufang Xu, "A Detection Statistic for Random-Valued Impulse Noise", IEEE Transactions on Image Processing, Vol. 16, No. 4, pp. 1112-1120, 2007.
[2] Garnett, R.; Huegerich, T.; Chui, C.; Wenjie He, "A universal noise removal algorithm with an impulse detector", IEEE Transactions on Image Processing, Vol. 14, No. 11, pp. 1747-1754, 2005.
[3] Xuming Zhang; Youlun Xiong, "Impulse Noise Removal Using Directional Difference Based Noise Detector and Adaptive Weighted Mean Filter", IEEE Signal Processing Letters, Vol. 16, No. 4, pp. 295-298, April 2009.
[4] Ghanekar, U.; Singh, A.K.; Pandey, R., "A Contrast Enhancement-Based Filter for Removal of Random Valued Impulse Noise", IEEE Signal Processing Letters, Vol. 17, No. 1, pp. 47-50, 2010.
[5] Civicioglu, P., "Removal of random-valued impulsive noise from corrupted images", IEEE Transactions on Consumer Electronics, Vol. 55, No. 4, pp. 2097-2104, 2009.
[6] Chih-Hsing Lin; Jia-Shiuan Tsai; Ching-Te Chiu, "Switching Bilateral Filter With a Texture/Noise Detector for Universal Noise Removal", IEEE Transactions on Image Processing, Vol. 19, No. 9, pp. 2307-2320, 2010.
[7] Tripathi, A.K.; Ghanekar, U.; Mukhopadhyay, S., "Switching median filter: advanced boundary discriminative noise detection algorithm", IET Image Processing, Vol. 5, No. 7, pp. 598-610, 2011.
[8] Bo Xiong; Zhouping Yin, "A Universal Denoising Framework With a New Impulse Detector and Nonlocal Means", IEEE Transactions on Image Processing, Vol. 21, No. 4, pp. 1663-1675, 2012.
[9] Chih-Yuan Lien; Chien-Chuan Huang; Pei-Yin Chen; Yi-Fan Lin, "An Efficient Denoising Architecture for Removal of Impulse Noise in Images", IEEE Transactions on Computers, Vol. 62, No. 4, pp. 631-643, 2013.
[10] Yuan, S.-Q.; Tan, Y.-H., "Difference-type noise detector for adaptive median filter", Electronics Letters, Vol. 42, No. 8, pp. 454-455, 2006.
[11] Hamed Vahdat-Nejad, Behrouz Tork-Ladani, Kamran Zamanifar, Nasser Nematbakhsh, "A Novel Algorithm for Impulse Noise Classification in Digital Images", International Review on Computers and Software, Vol. 4, No. 6, pp. 627-632, 2009.
[12] P.G. Kuppusamy, R. Rani Hemamalini, "A VLSI Based Framework for Iterative and Adaptive Based Image Filter for Impulse Noise Removal", International Review on Computers and Software, Vol. 8, No. 1, pp. 235-242, 2013.



[13] S.V. Priya, R. Seshasayanan, "A Robust Noise Detector for High Density Noise Removal", International Review on Computers and Software, Vol. 8, No. 11, pp. 2727-2732, 2013.
[14] Yuksel, M.E., "A hybrid neuro-fuzzy filter for edge preserving restoration of images corrupted by impulse noise", IEEE Transactions on Image Processing, Vol. 15, No. 4, pp. 928-936, 2006.
[15] Ghanekar, U.; Singh, A.K.; Pandey, R., "A Contrast Enhancement-Based Filter for Removal of Random Valued Impulse Noise", IEEE Signal Processing Letters, Vol. 17, No. 1, pp. 47-50, Jan. 2010.
[16] Aizenberg, I.; Butakoff, C.; Paliy, D., "Impulsive noise removal using threshold Boolean filtering based on the impulse detecting functions", IEEE Signal Processing Letters, Vol. 12, No. 1, pp. 63-66, 2005.
[17] Jafar, I.F.; AlNa'mneh, R.A.; Darabkh, K.A., "Efficient Improvements on the BDND Filtering Algorithm for the Removal of High-Density Impulse Noise", IEEE Transactions on Image Processing, Vol. 22, No. 3, pp. 1223-1232, 2013.
[18] Fei Duan; Yu-Jin Zhang, "A Highly Effective Impulse Noise Detection Algorithm for Switching Median Filters", IEEE Signal Processing Letters, Vol. 17, No. 7, pp. 647-650, 2010.
[19] Chih-Hsing Lin; Jia-Shiuan Tsai; Ching-Te Chiu, "Switching Bilateral Filter With a Texture/Noise Detector for Universal Noise Removal", IEEE Transactions on Image Processing, Vol. 19, No. 9, pp. 2307-2320, 2010.



Review of Cyclic Redundancy Checking Algorithm Prakash V R and Kumaraguru Diderot P

Abstract — Cyclic redundancy check (CRC) is a traditional error detection method used in networks. This paper describes the CRC algorithm using the long division method and the representation of binary data streams in polynomial form. CRC using modulo-2 arithmetic is also explained. The algorithm is most useful in areas where memory is limited.

Index terms — Cyclic Redundancy Check, Algorithm, Polynomials.

Prakash V R and Kumaraguru Diderot P are in School of Electrical Sciences, Hindustan University, Chennai, India (e-mail: pkguru@hindustanuniv.ac.in, vrprakash@hindustanuniv.ac.in).

I.  Introduction

CRC [1] is a very powerful and easily implemented technique for obtaining data reliability. The CRC technique is used to verify the integrity of blocks of data called frames. Using this technique, the transmitter appends an extra n-bit sequence, called the Frame Check Sequence (FCS), to every frame. The FCS holds redundant information about the frame that helps the receiver detect errors in the frame. CRC is one of the most commonly used techniques for error detection in data communications. The CRC was invented by W. Wesley Peterson in 1961; the 32-bit polynomial used in the CRC function of Ethernet and many other standards is the work of several researchers and was published in 1975. Cyclic redundancy codes (also known sometimes as cyclic redundancy checks) have a long history of use for error detection in computing. Books [2, 3] are among the commonly cited standard reference works for CRCs; a treatment more accessible to non-specialists can be found in [4]. A CRC can be thought of as a (non-secure) digest function for a data word that can

be used to detect data corruption. Mathematically, a CRC can be described as treating a binary data word as a polynomial over GF(2) (i.e., with each polynomial coefficient being zero or one) and performing polynomial division by a generator polynomial [5] G(x). The generator polynomial will be called a CRC polynomial for short. CRC polynomials are also known as feedback polynomials, in reference to the feedback taps of hardware-based shift register implementations. The remainder of that division [6] provides an error detection value that is sent as a Frame Check Sequence (FCS) within a network message or stored as a data integrity check. Whether implemented in hardware or software, the CRC computation takes the form of a bitwise convolution of the data word against a binary version of the CRC polynomial. Error detection [7] is performed by comparing the FCS computed on a piece of retrieved or received data against the FCS value originally computed and either sent or stored with the original data. An error is declared to have occurred if the stored and computed FCS values are not equal. However, as with all digital signature schemes, there is a small but finite probability that a data corruption inverting a sufficient number of bits in just the right pattern will occur and lead to an undetectable error. The minimum number of bit inversions required to achieve such undetected errors (i.e., the HD value) is a central issue in the design of CRC polynomials. The essence of implementing a good CRC-based error detection scheme is picking the right polynomial. The prime factorization of the generator polynomial brings with it certain potential characteristics, in particular a trade-off between the maximum number of detectable errors and the data word length for which the polynomial is effective. Many polynomials are good for short words but poor for long words, and conversely; relatively few polynomials are excellent for medium-length data words while still being good for relatively long data words.



Unfortunately, prime factorization of a polynomial is not sufficient to determine the achieved HD value for any particular message length. A polynomial with a promising factorization might be vulnerable to some combination of bit errors, even for short message lengths. Thus, factorization characteristics suggest potential capabilities, but specific evaluation is required of any polynomial before it is considered suitable for use in a CRC function.

II.  The CRC Generation Process

CRC algorithms augment bit streams with functions of the content of the streams; in this way errors become easier to detect. To avoid self-failures, the functions used by CRC algorithms need to be as 'close' to 1:1 as possible. CRC algorithms treat each bit stream as a binary polynomial B(x) and calculate the remainder R(x) of the division of B(x) by a standard 'generator' polynomial G(x). The binary word corresponding to R(x) is transmitted together with the bit stream associated with B(x). The length of R(x) in bits is equal to the length of G(x) minus one. At the receiver side, CRC algorithms verify that R(x) is the correct remainder. Long division is performed using modulo-2 arithmetic. Additions and subtractions in modulo-2 arithmetic are 'carry-less'; in this way, additions and subtractions are equal to the exclusive OR (XOR) logical operation. Table 1 shows how additions and subtractions are performed in modulo-2 arithmetic.

Table 1: Modulo-2 Arithmetic

0+0 = 0-0 = 0
0+1 = 0-1 = 1
1+0 = 1-0 = 1
1+1 = 1-1 = 0

Figure 1: Long Division using Modulo-2 Arithmetic

Figure 1 shows a long division example. In the example, the divisor is equal to '11011' whereas the dividend is equal to '1000111011000'. The long division process begins by placing the 5 bits of the divisor below the 5 most significant bits of the dividend. The next step in the long division process is to find how many times the divisor '11011' 'goes' into the 5 most significant bits of the dividend, '10001'. In ordinary arithmetic, 11011 goes zero times into 10001 because the second number is smaller than the first. In modulo-2 arithmetic, however, the number 11011 goes exactly one time into 10001. To decide how many times a binary number goes into another in modulo-2 arithmetic, a check is made on the most significant bits of the two numbers: if both are equal to '1' and the numbers have the same length, then the first number goes exactly one time into the second number, otherwise zero times. Next, the divisor 11011 is subtracted from the most significant bits of the dividend 10001 by performing an XOR logical operation. The next bit of the dividend, which is '1', is then marked and appended to the remainder '1010'. The process is repeated until all the bits of the dividend are marked. The remainder that results from such a long division process is often called the CRC or CRC 'checksum' (although a CRC is not literally a checksum).

Figure 2: Accelerating the Long Division Using Table Lookups [8][9]
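The worked example translates directly into code; the Python sketch below (our own illustration) performs the same modulo-2 long division and returns the remainder for the divisor and dividend of Figure 1.

def crc_remainder(dividend, divisor):
    # Modulo-2 long division on bit strings: XOR the divisor under
    # every leading 1 of the working value; the remainder is the
    # last len(divisor) - 1 bits.
    n = len(divisor) - 1
    work = list(map(int, dividend))
    div = list(map(int, divisor))
    for i in range(len(work) - n):
        if work[i] == 1:                 # the divisor 'goes' exactly once
            for j, d in enumerate(div):
                work[i + j] ^= d         # carry-less subtraction = XOR
    return ''.join(map(str, work[-n:]))

print(crc_remainder('1000111011000', '11011'))   # 4-bit remainder, as computed by this sketch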

III.  Polynomial Arithmetic

While the division scheme described in the previous section is very similar to the checksumming schemes called CRC schemes, the CRC schemes are in fact a bit weirder, and we need to delve into some strange number systems to understand them. The word you will hear all the time when dealing with CRC algorithms [10] is 'polynomial'. A given CRC algorithm is said to use a particular polynomial, and CRC algorithms in general are said to operate using polynomial arithmetic. Instead of the divisor, dividend (message), quotient, and remainder being viewed as positive integers, they are viewed as polynomials with binary coefficients. This is done by treating each



number as a bit-string whose bits are the coefficients of a polynomial. For example, the ordinary number 23 (decimal), which is 17 (hex) and 10111 in binary, corresponds to the polynomial: 1*x^4 + 0*x^3 + 1*x^2 + 1*x^1 + 1*x^0, or, more simply: x^4 + x^2 + x^1 + x^0. Using this technique, the message and the divisor can be represented as polynomials, and we can do all our arithmetic just as before, except that now it's all cluttered up with Xs.

IV.  Binary Arithmetic With No Carries

Adding two numbers in CRC arithmetic is the same as adding numbers in ordinary binary arithmetic except that there is no carry. This means that each pair of corresponding bits determines the corresponding output bit without reference to any other bit positions. For example:

  10011011
+ 11001010
----------
  01010001

There are only four cases for each bit position:
0+0=0
0+1=1
1+0=1
1+1=0 (no carry)

Subtraction is identical:

  10011011
- 11001010
----------
  01010001

with
0-0=0
0-1=1 (wraparound)
1-0=1
1-1=0

In fact, both addition and subtraction in CRC arithmetic are equivalent to the XOR operation, and the XOR operation is its own inverse. This effectively reduces the operations of the first level of power (addition, subtraction) to a single operation that is its own inverse, which is a very convenient property of the arithmetic. By collapsing addition and subtraction, the arithmetic discards any notion of magnitude beyond the power of its highest one bit. While it seems clear that 1010 is greater than 10, it is no longer the case that 1010 can be considered greater than 1001. To see this, note that you can get from 1010 to 1001 by both adding and subtracting the same quantity:

1001 = 1010 + 0011
1001 = 1010 - 0011

This makes nonsense of any notion of order. Having defined addition, we can move to multiplication and division. Multiplication is absolutely straightforward, being the sum of the first number shifted in accordance with the second number:

    1101
  x 1011
  ------
    1101
   1101.
  0000..
 1101...
 -------
 1111111    (the sum uses CRC addition)

Division is a little messier, as we need to know when one number 'goes into' another. To do this, we invoke the weak definition of magnitude defined earlier: X is greater than or equal to Y iff the position of the highest 1 bit of X is the same as or greater than the position of the highest 1 bit of Y. In dealing with CRC multiplication and division, it is worth getting a feel for the concepts of MULTIPLE and DIVISIBLE.
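This carry-less arithmetic is easy to internalize by implementing it; the Python sketch below (our own illustration) multiplies by XORing shifted copies, and the assertion reproduces the worked multiplication above.

def gf2_mul(a, b):
    # The product is the XOR of shifted copies of a, one copy for
    # every set bit of b, since CRC addition is XOR.
    result, shift = 0, 0
    while b:
        if b & 1:
            result ^= a << shift
        b >>= 1
        shift += 1
    return result

assert gf2_mul(0b1101, 0b1011) == 0b1111111   # 1101 x 1011 = 1111111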



If a number A is a multiple of B, then what this means in CRC arithmetic is that it is possible to construct A from zero by XORing in various shifts of B. For example, if A is 0111010110 and B is 11, we can construct A from four shifts of B as follows:

  0111010110
= 0110000000
+ 0001100000
+ 0000110000
+ 0000000110

However, if A is 0111010111, it is not possible to construct it out of various shifts of B, so it is said to be not divisible by B in CRC arithmetic. Thus we see that CRC arithmetic is primarily about XORing particular values at various shifting offsets. The division yields a quotient, which we throw away, and a remainder, which is the calculated checksum. This ends the calculation. Usually, the checksum is then appended to the message and the result is transmitted; in this case the transmission would be: 11010110111110. At the other end, the receiver can do one of two things:
a.  Separate the message and checksum, calculate the checksum for the message (after appending W zeros), and compare the two checksums.
b.  Checksum the whole lot (without appending zeros) and see if it comes out as zero!

These two options are equivalent. However, in the next section, we will be assuming option b, because it is marginally mathematically cleaner.

A summary of the operation of this class of CRC algorithms:

1.  Choose a width W and a polynomial G (of width W).
2.  Append W zero bits to the message. Call this M'. (The augmented message is the message followed by W zero bits.)
3.  Divide M' by G using CRC arithmetic. The remainder is the checksum.

This might look a bit messy, but all we are really doing is 'subtracting' various powers (i.e., shifts) of the polynomial from the message until there is nothing left but the remainder.

V.  CRC Implementation

To implement a CRC algorithm, all we have to do is CRC division. There are two reasons why we cannot simply use the divide instruction of whatever machine we are on. The first is that we have to do the divide in CRC arithmetic. The second is that the dividend might be ten megabytes long, and today's processors do not have registers that big. So to implement CRC division, we have to feed the message through a division register. At this point, we have to be absolutely precise about the message data. In all the following examples the message will be considered to be a stream of bytes (each of 8 bits), with bit 7 of each byte being the most significant bit (MSB). The bit stream formed from these bytes is the bit stream with the MSB (bit 7) of the first byte first, going down to bit 0 of the first byte, and then the MSB of the second byte, and so on. With this in mind, we can sketch an implementation of the CRC division. For the purposes of example, consider a polynomial with W=4 and the polynomial 10111; to perform the division, we need to use a 4-bit register.

VI.  CRC Software Implementation

Following are the steps for implementing a CRC in software. The steps for CRC computation are followed at the transmitter side and those for CRC checking are followed at the receiver side.

Steps of CRC computation at the transmitter:
●● To compute an n-bit binary CRC, line the bits representing the input up in a row, and position the (n+1)-bit pattern representing the CRC's divisor (called a 'generator polynomial') underneath the left-hand end of the row.
●● Start with the message to be encoded: 10001110 = x^7 + x^3 + x^2 + x^1.
●● This is first padded with zeroes corresponding to the bit length n of the CRC. Here is the first calculation for computing a 16-bit CRC:



10001110 0000000000000000   <--- input right padded by 16 bits
10001000000100001           <--- divisor (17 bits) = x^16 + x^12 + x^5 + 1
------------------------
000001100001000010000000    <--- result

Three steps are repeated:
●● The divisor is XOR'ed into the input (in other words, the input bit above each 1-bit in the divisor is toggled).
●● The divisor is then shifted one bit to the right.
●● The process is repeated until the divisor reaches the right-hand end of the input row.

Here is the entire calculation:

100011100000000000000000    <--- input right padded by 16 bits
10001000000100001           <--- divisor
000001100001000010000000    <--- result
10001000000100001           <--- divisor
0100101000000000100         <--- result
10001000000100001           <--- divisor
------------------------
0111000001000110            <--- remainder (16 bits)

Since the leftmost divisor bit zeroed every input bit it touched, when this process ends the only bits in the input row that can be nonzero are the n bits at the right-hand end of the row. These n bits are the remainder of the division step, and will also be the value of the CRC function, called the checksum. The validity of a received message can easily be verified by performing the above calculation again, this time with the check value added instead of zeroes. The remainder should equal zero if there are no detectable errors.

Algorithm of the Software Implementation

The program initializes the register named 'poly' to the binary value of the CRC-CCITT generator polynomial, appends 16 zero bits to the message and checks the MSB. If the value of the MSB is '0', the data is copied as it is into temporary register r1; otherwise the data is XOR'ed with the generator polynomial and the result is stored in r1. The data is then left-shifted by one bit and the same operation is repeated. At last, the result of the CRC computation, called the checksum, is assigned to the crc register.

reg [7:0]  message;
reg [16:0] poly;       // stores the generator polynomial
reg [23:0] data_in1;
reg [16:0] r1;         // stores the intermediate result
reg [15:0] crc;        // stores the CRC computation result, i.e. the checksum
integer i;             // loop counter

always @ (data_in1, message)
begin
    poly = 17'b10001000000100001;                 // generator initialization (placed here so the block is self-contained)
    data_in1 = {message, 16'b0000000000000000};   // append 16 zero bits to the message
    if (data_in1[23] == 0)
        r1 = data_in1[23:7];
    else
        r1 = data_in1[23:7] ^ poly[16:0];
    for (i = 6; i >= 0; i = i - 1)
    begin
        r1 = {r1[15:0], data_in1[i]};             // shift in the next message bit
        if (r1[16] == 0)
            r1 = r1;
        else
            r1 = r1 ^ poly;                       // subtract (XOR) the generator
    end
    crc = r1[15:0];                               // checksum assignment described in the text above
end
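As a software cross-check of the listing above (our own sketch, not part of the original paper), the same division can be written in a few lines of Python; it reproduces the 16-bit remainder 0111000001000110 from the worked example.

def crc_ccitt(message, width):
    # Long division by x^16 + x^12 + x^5 + 1 over GF(2): append 16
    # zero bits, then XOR the 17-bit generator under every leading 1.
    poly = 0b10001000000100001          # the same generator as 'poly' above
    data = message << 16                # message followed by 16 zero bits
    for shift in range(width - 1, -1, -1):
        if data & (1 << (shift + 16)):  # MSB of the current 17-bit slice
            data ^= poly << shift
    return data & 0xFFFF                # the 16-bit checksum

print(format(crc_ccitt(0b10001110, 8), '016b'))   # -> 0111000001000110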

VII.  Conclusion

CRC is basically used to detect errors caused by noise in digital data transmission. The technique is also sometimes applied to data storage devices, such as disk drives. CRCs have also been used to verify the integrity of files in a system in order to prevent



tampering, and have been suggested as a possible algorithm for manipulation detection codes.

References

[1] "Hardware Design and VLSI Implementation of a Byte-wise CRC Generator Chip", IEEE Transactions on Consumer Electronics, Vol. 41, No. 1, pp. 195-200, February 1995.
[2] Peterson, W. & E. Weldon, Error-Correcting Codes, MIT Press, Second Edition, 1972.
[3] Lin, Shu & D. Costello, Error Control Coding, Prentice-Hall, 1983.
[4] Wells, R., Applied Coding and Information Theory for Engineers, Prentice-Hall, 1999.
[5] T.B. Pei and C. Zukowski, "High-Speed Parallel CRC Circuits in VLSI", IEEE Transactions on Communications, Vol. 40, No. 4, 1992.
[6] M. Braun, J. Friedlich, J. Lembert, and Grun, "Parallel CRC computation in FPGAs", FPL'96 Workshop on Field Programmable Logic and Applications, Darmstadt, Germany, Sep. 1996.
[7] P. Hlavka, V. Rehak, A. Smrcka, P. Simecek, D. Safranek, and T. Vojnar, "Formal Verification of the CRC Algorithm Properties", supported by the CESNET activity "Programmable hardware", 2011.
[8]-[10] "An approach for a Standard Polynomial for Cyclic Redundancy Check", International Journal of Computer Applications (0975-8887), December 2011.
[11] Lattice Semiconductor Corporation, Reference Design RD1105, "Cyclic Redundancy Checks in USB", April 2011.



Optimization of Temporally Ordered Routing Algorithm (TORA) in Ad-hoc Network D. Helen and D. Arivazhagan

Abstract — The Temporally Ordered Routing Algorithm (TORA) is a distributed routing protocol for ad-hoc networks. TORA is able to operate effectively whenever topological variations occur. TORA is a reactive protocol: routes are formed between source and destination on demand. TORA uses a link reversal algorithm and a Directed Acyclic Graph (DAG) to sustain paths to the destination. Topological diversity may affect network parameters such as network size, bandwidth and network connection; if the network size increases, the usable share of the available bandwidth decreases, which manifests as transmission delay. TORA can be run over Medium Access Control (MAC) to provide multi-hop routing. In this paper, the use of TORA along with MAC is proposed to overcome the bandwidth limitation. Bandwidth usage can be increased by introducing multiple channels with different rates, so that the network can be utilized most effectively.

Index terms — Distributed, Topology, Reactive, Multi-hop, Bandwidth, Channel.

D. Helen is Research Scholar, Department of Information Technology, AMET University, Chennai (e-mail: helensaran15@gmail.com). D. Arivazhagan is HOD, Department of Information Technology, AMET University, Chennai (e-mail: it_manager@ametindia.com).

I.  Introduction

Ad-hoc networks are formed without any predefined architecture [14, 15]. An ad-hoc network is dynamic, distributed [7] and provides multi-hop routing between

the host and the network. TORA [1] is an ad-hoc routing protocol designed to recreate routes whenever the topology changes. TORA guarantees that the routes created are loop-free, and provides multiple routes between source and destination [2]. TORA also performs efficiently at the point of link failures and can propagate data packets around the point of failure. TORA works reliably in larger networks when topology modifications occur. The network shape may vary whenever topological changes occur, which may degrade network parameters such as network size, bandwidth and connection. If the network size expands, the available bandwidth per host is reduced. In this paper, bandwidth utilization is enhanced by using multiple channels in the ad-hoc network. This is achieved by running TORA over Medium Access Control (MAC). The MAC is standardized by IEEE 802.11 [15]. The MAC protocol is used to share the medium among several hosts. The paper proposes using multiple channels under the MAC protocol, which increases throughput without delaying packet delivery.

II.  Protocol Overview

The Temporally Ordered Routing Algorithm (TORA) is an efficient routing protocol. The goal of the protocol is to utilize the bandwidth most effectively in the network when topological fluctuations happen. The process of the protocol can be described as follows. The origin host wants to establish a path to the target. Routes are re-established whenever a topological variation occurs. TORA performs effectively during topology alteration and network partition. The idea of this paper is to increase bandwidth utilization by introducing multiple channels with different rates in the ad-hoc network. TORA can operate for every destination



during routing. TORA uses three control packets: Query (QRY), Update (UPD) and Clear (CLR). QRY packets are used by the originating host to search for the destination host; this search is accomplished via flooding [10]. Update (UPD) packets are needed to build and maintain the routes. During route construction and maintenance, every host maintains a metric determined by the height of the host, and links are directed from the origin to the target according to the heights of neighboring hosts. CLR packets are used to erase routes; detaching of a route happens when the host is no longer needed or during network partition. When a route is eliminated at a host, its height is set to null. Thus the bandwidth overheads may be reduced by introducing multiple channels in the network. The network information is updated upon topological change and link failure: invalid links are removed by the update and clear packets.

III.  Process of TORA

TORA fundamentally performs the following functions to form the network:
Route Formation: A route must be created from the originating host towards the destination; routes are created with directed links from source to target.
Route Maintenance: Routes are recreated whenever topological changes occur or during network partition.
Route Elimination: Routes are deleted whenever links are no longer needed, and all invalid routes are detached from the network.
TORA builds the network as a Directed Acyclic Graph (DAG) rooted at the destination. Each path in the DAG is directed depending on the heights of neighboring hosts. During a topological change, routes are renewed instantly at the destination. A link to an adjacent host with an unknown or null height is considered an undirected link and is not used for flooding. The route creation technique converts an undirected network into a destination-rooted DAG by assigning directions to the links. A graph G(H, E) is a directed graph with host set H and edge set E, and each edge in the graph has an associated direction. A directed graph without any cycle is known as a DAG, as shown in fig. 1. The link reversal algorithm works on the DAG: if a host has no downlink, the algorithm changes the direction of its links.

Fig. 1. Overview of the Directed Acyclic Graph (DAG).

For each destination, a quintuple is associated with every host. A new reference level is defined every time a host loses its last downlink due to a network partition. The quintuple is defined as H = (τi, oidi, ri, δi, i), where
τi - time of the network partition
oidi - ID of the source that defined the reference level
ri - a single bit, where 0 indicates an exclusive level and 1 indicates a replicated level
δi - common reference level
i - host ID
The quintuple describes the height of the host. The heights of two hosts may be the same depending on their reference levels. Assume that 'i' is the original host and 'j' is the neighbor host. If the height of 'j' is greater than that of 'i', then the link is an uplink from 'i' to 'j'; otherwise it is a downlink. Whenever host 'i' acquires a fresh neighbor 'k', host 'i' accommodates the new host by determining the height of the new host and preserving the link status in an array.
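A minimal Python sketch of the height comparison (our own illustration; the field order follows the quintuple above, and tuples compare lexicographically, so ties fall through to the host ID):

from typing import NamedTuple

class Height(NamedTuple):
    tau: float     # time of the network partition
    oid: int       # ID of the source that defined the reference level
    r: int         # 0 = exclusive level, 1 = replicated level
    delta: int     # common reference level
    host_id: int   # unique host ID, breaks any remaining tie

def is_uplink(h_i: Height, h_j: Height) -> bool:
    # The link from i to j is an uplink when neighbour j is higher.
    return h_j > h_i

h_a = Height(0.0, 0, 0, 0, 4)   # zero reference level at host 4
h_b = Height(3.0, 7, 0, 0, 2)   # level defined after a partition seen by host 7
print(is_uplink(h_a, h_b))      # True: from host a, the link to b points uphill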

IV.  Link Reversal Algorithm in TORA

In situations where a host has no downstream link in the DAG, the link reversal algorithm takes effect: it changes the direction of links whenever a host loses all of its downstream links. The link reversal is done in one of two ways [13]: 1) Full Reversal Method: any host, other than the destination, that has no outgoing link reverses the direction of all of its links. 2) Partial Reversal Method: each host maintains a list of its neighbor hosts; the reversal method


HELEN AND ARIVAZHAGAN: OPTIMIZATION OF TEMPORALLY ORDERED ROUTING ALGORITHM  69

changes the direction only for links to hosts that do not belong to that list. Fig. 2 explains the link creation and maintenance according to TORA.

Fig. 2. (a) Route creation (showing link direction assignment) (b) Route maintenance in TORA.
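To make the full reversal method concrete, here is a small Python sketch (our own illustration, assuming a simple adjacency-set representation of the DAG):

def full_reversal(out_links, in_links, host, destination):
    # out_links / in_links: dicts mapping each host to the set of
    # neighbours its links point to / arrive from. A host other than
    # the destination that has lost its last outgoing (downstream)
    # link reverses the direction of ALL of its links.
    if host != destination and not out_links[host]:
        for nbr in set(in_links[host]):
            in_links[host].discard(nbr)
            out_links[host].add(nbr)
            out_links[nbr].discard(host)
            in_links[nbr].add(host)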

V.  Applicability of TORA

Whenever topological deviation occurs, it may affect the following parameters of the network:
●● Network size.
●● Bandwidth.
●● Network connection.
TORA is able to work effectively in a larger network. If the network structure changes due to the arrival of a new node, the size of the network increases, the usable share of the available bandwidth decreases, and the network connections are altered. One of the unique characteristics of an ad-hoc network is its limited bandwidth: if the network size increases or the network connections are extended, the available bandwidth per host decreases, which may lead to delays in packet delivery. TORA can be applied over Medium Access Control (MAC) to run in a multi-hop environment. To overcome the delay in packet delivery, this paper proposes multiple channels with different bandwidths; using multiple channels allows maximum utilization of the bandwidth in the network. The utilization can be measured as

Utilization = (pkt_size × successful_pkt) / (time × no_channel)

where pkt_size is the total number of bits in a packet, successful_pkt is the number of packets received by the destination, time is the average time to deliver a packet, and no_channel is the total number of channels in the network.
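A quick numeric sketch of the utilization measure (our own, with illustrative values; we read the flattened formula as the fraction (pkt_size × successful_pkt) / (time × no_channel)):

pkt_size = 512 * 8        # bits per packet (assumed example value)
successful_pkt = 1200     # packets received by the destination (assumed)
time = 2.5                # average time to deliver a packet, in seconds (assumed)
no_channel = 3            # total number of channels in the network (assumed)

utilization = (pkt_size * successful_pkt) / (time * no_channel)
print(utilization)        # bits delivered per second, per channel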

VI.  Simulation Results

The throughput of TORA depends on the number of packets spread through the network. The throughput per host is compared between TORA and LMR (Lightweight Mobile Routing) [12]. Fig. 3 shows that TORA performs more effectively than LMR in terms of network utilization. The throughput is increased by using TORA along with a multiple-channel MAC protocol; thus the average delay between source and destination can be overcome by the increase in network utilization. The output in fig. 3 illustrates that TORA is better than LMR in the sense that bandwidth utilization is increased by using TORA along with the MAC protocol.

Fig. 3. Bandwidth Utilization: TORA vs LMR

VII.  Conclusion

This paper discussed TORA, a highly adaptive distributed routing algorithm well matched to ad-hoc networks. The protocol is designed to recreate the network whenever links get fragmented. TORA combines a link reversal algorithm with a DAG to sustain routes to the destination. The paper has proposed the use of TORA along with MAC to increase network utilization and overcome the limited usage of bandwidth.

References

[1] Royer, E.; Toh, C.K., "A review of current routing protocols for ad hoc mobile wireless networks", IEEE Personal Communications, pp. 46-55, 1999.



[2] J. Jaffe and F. Moss, "A responsive distributed routing algorithm for computer networks", IEEE Transactions on Communications, COM-30, No. 7, 1982.
[3] E. Gafni and D. Bertsekas, "Distributed algorithms for generating loop-free routes in networks with frequently changing topology", IEEE Transactions on Communications, COM-29, No. 1, 1981.
[4] S. Murthy and J.J. Garcia-Luna-Aceves, "An Efficient Routing Protocol for Wireless Networks", ACM Journal of Mobile Networks and Applications, Special Issue on Routing in Mobile Communication Networks, Vol. 1, No. 2, pp. 183-197, 1996.
[5] Park, V.; Corson, M.S., "A performance comparison of TORA and Ideal Link-State routing", USA.
[6] C. Perkins and P. Bhagwat, "Highly dynamic destination sequenced distance vector routing (DSDV) for mobile computers", ACM SIGCOMM.
[7] C.E. Perkins, E.M. Royer, and S.R. Das, "Ad Hoc On-Demand Distance Vector (AODV) Routing", Internet Draft, draft-ietf-manet-aodv-10.txt, work in progress, 2002.
[8] Anuj K. Gupta, Harsh Sadawarti and Anil K. Verma, "Performance analysis of AODV, DSR & TORA Routing Protocols", International Journal of Engineering and Technology, Vol. 2, No. 2, 2010.
[9] Suresh Kumar and Jogendra Kumar, "Comparative Analysis of Proactive and Reactive Routing Protocols in Mobile Ad-Hoc Networks (MANET)", Journal of Information and Operations Management, Vol. 3, Issue 1, 2012.
[10] D. Bertsekas and R. Gallager, Data Networks, Prentice-Hall, 1982.
[11] V. Park and M.S. Corson, "A Highly Adaptive Distributed Routing Algorithm for Mobile Wireless Networks", Proc. of IEEE INFOCOM '97, Kobe, Japan.
[12] M.S. Corson and A. Ephremides, "A distributed routing algorithm for mobile wireless networks", Wireless Networks 1, 1995.
[13] Charles E. Perkins, Ad Hoc Networking, Addison-Wesley Professional, 2000.
[14] Park, V.; Corson, M.S., "Temporally Ordered Routing Algorithm (TORA), Version 1, Functional Specification", Internet Draft, 2001.
[15] IEEE Computer Society, IEEE 802.11 Standard, IEEE Standard for Information Technology.



Evaluation of Evaporative Heat Transfer Characteristics of CO2/Propane Refrigerant Mixtures in a Smooth Horizontal Tube Using CFD

Jeya Pratha.S and Mahendran.S

ABSTRACT — This study presents an evaluation of the evaporative heat transfer characteristics of CO2/propane refrigerant mixtures in a smooth horizontal tube using CFD (Computational Fluid Dynamics). First, a theoretical design has been made to understand the operating conditions and flow regimes of CO2/propane refrigerants at various heat fluxes, inlet temperatures and several mass compositions. A copper tube with an outer diameter of 5 mm and a length of 1.44 m was selected as the test section; the inner diameter of the test tube is 4.0 mm. The heat transfer characteristics have been theoretically designed considering heat fluxes from 15 to 60 kW/m2, inlet temperatures from -5 to 10 °C, and several compositions (75/25, 50/50, 25/75 wt%). Among the CO2/propane refrigerant mixtures, the heat transfer characteristics are much better than those of any other composition when the composition is 25/75 (wt%). Finally, a suitable 3-D model has been created with the specified dimensions using CFX software for analyzing the heat transfer characteristics of the refrigerant mixtures.

Index terms — Evaporative heat transfer, Computational fluid dynamics, Refrigerant mixture, Global warming.

Jeya Pratha.S and Mahendran.S are in School of Mechanical Sciences, Hindustan University, Chennai, India (e-mail: sjayap@hindustanuniv.ac.in).

I.  Introduction

Conventional refrigerants such as CFC, HCFC, and HFC with good chemical and thermophysical characteristics

have caused the environmental issues of stratospheric ozone depletion and global warming. For this reason, the use of conventional refrigerants such as CFC, HCFC and HFC has been restricted to protect the environment, and much attention is being paid to investigating new refrigerants for the air-conditioning and refrigeration industry. There has been a strong need to develop alternative refrigerants with lower direct global warming potential (DGWP) and lower indirect global warming potential (IDGWP). As a result, among the many natural refrigerants such as ammonia, hydrocarbons, carbon dioxide, water and air, we chose carbon dioxide, which has gained remarkable attention because of its zero ozone depletion potential (ODP) and far smaller global warming potential (GWP). Moreover, carbon dioxide has excellent characteristics such as non-flammability and non-toxicity. However, it has a few disadvantages, such as high operating pressure and low efficiency. Therefore, we propose mixtures of CO2 and propane (R290), which may be a promising refrigerant since they can simultaneously reduce the problems related to the high operating pressure of CO2 and the flammability of the hydrocarbon refrigerant; propane (R290) is flammable but can be an excellent refrigerant in low-charge refrigeration units due to its higher energy efficiency and good environmental compatibility. Global warming refers to the rising average temperature of Earth's atmosphere and oceans and its projected continuation. In the last 100 years, Earth's average surface temperature increased by about 0.8 °C (1.4 °F), with about two thirds of the increase occurring over just the last three decades. An increase in global temperature will cause sea levels to rise and will change the amount and pattern of precipitation, and a probable


72  HINDUSTAN JOURNAL, VOL. 6, 2013

expansion of subtropical deserts. Proposed responses to global warming include mitigation to reduce emissions, adaptation to the effects of global warming, and geoengineering to remove greenhouse gases from the atmosphere or reflect incoming solar radiation back to space. The Kyoto Protocol is the only legally binding emissions agreement and only limits emissions through the year 2012. Nonetheless, in the 2010 Cancun Agreements, member nations agreed that urgent action is needed to limit global warming to no more than 2.0 °C (3.6 °F) above pre-industrial levels. Current scientific evidence, however, suggests that 2 °C is the threshold between 'dangerous' and 'extremely dangerous' climate change, that this much warming is possible during the lifetimes of people living today, and that steep reductions in global emissions must be made by 2020 in order to have a two-out-of-three chance of avoiding global warming in excess of 2 °C. All refrigerants in use today can contribute to global warming as greenhouse gases, and this is why there are regulations in place to limit their use. These refrigerants will eventually be phased out and replaced with more palatable alternatives. The chemicals with the highest global warming potential are hydrochlorofluorocarbons, such as those found in refrigeration and cooling systems; their values range from 120 to 12,240 over their atmospheric lifetime. When these numbers are broken down, it takes only one molecule of refrigerant gas to cause harm to the ozone layer. The refrigerant R-113 (trichlorotrifluoroethane) has one of the highest global warming potential values at 4800, while the refrigerant R-114 (dichlorotetrafluoroethane) has one of the lowest values at 3.9. The alternative refrigerants being developed have no impact on global warming and are being used in the production of all types of new refrigeration and air conditioning systems.

II.  Literature Survey

Jin Min Cho et al [1] presented a study of the evaporative heat transfer characteristics of CO2/propane refrigerant mixtures in horizontal and vertical smooth and micro-fin tubes. The effects of mass flux, heat flux, inlet temperature and several compositions were investigated and analyzed. A copper tube with an outer diameter of 5 mm and a length of 1.44 m was selected as the test section; the average inner diameters of the test tubes are 4.0 mm and 4.13 mm, respectively. The tests were conducted at mass fluxes from 212 to 656 kg/m2s, heat fluxes from 15 to 60 kW/m2, inlet temperatures

from -10 to 30 °C, and for several compositions (75/25, 50/50, 25/75 wt%). Among the CO2/propane refrigerant mixtures, the heat transfer characteristics are much better than those of any other composition when the composition is 75/25 (wt%). Finally, the heat transfer coefficients of CO2/propane refrigerant mixtures in the vertical tube are 5-10% higher than those in horizontal tubes. In Min Soo Kim et al [2], heat transfer characteristics show different tendencies according to tube orientation, i.e., horizontal, vertical and inclined positions. In that study, the evaporative heat transfer characteristics and pressure drop of CO2 and CO2/propane mixtures flowing upward were investigated in inclined smooth and micro-fin tubes. Smooth and micro-fin tubes with an outer diameter of 5 mm and a length of 1.44 m, at an inclination angle of 45°, were chosen as test tubes; the average inner diameters of the test tubes are 4.0 mm (smooth tube) and 4.13 mm (micro-fin tube). The tests were conducted at mass fluxes from 212 to 656 kg/m2s, saturation temperatures from -5 to 10 °C and heat fluxes from 15 to 60 kW/m2 for CO2. In addition, for CO2/propane mixtures, the tests were carried out at inlet temperatures from -10 to 30 °C for several compositions (75/25, 50/50, 25/75 wt%) with the same mass fluxes and heat fluxes applied for CO2. Heat transfer coefficients in the inclined tube are approximately 1.8-3 times higher than those in the horizontal tube, and the average pressure drop of the inclined tube lies between those of the horizontal and vertical tubes. Mathur, G.D. et al [3] noted that carbon dioxide, among natural refrigerants, has gained considerable attention as an alternative refrigerant due to its excellent thermophysical properties. In-tube evaporation heat transfer characteristics of carbon dioxide were experimentally investigated and analyzed as a function of evaporating temperature, mass flux, heat flux and tube geometry. Heat transfer coefficient data during the evaporation process of carbon dioxide were measured for 5 m long smooth and micro-fin tubes with outer diameters of 5 and 9.52 mm. The tests were conducted at mass fluxes from 212 to 656 kg/m2s, saturation temperatures from 0 to 20 °C and heat fluxes from 6 to 20 kW/m2. The difference in heat transfer characteristics between smooth and micro-fin tubes and the effects of mass flux, heat flux, and evaporation temperature on the enhancement factor (EF) and penalty factor (PF) were presented. Average evaporation heat transfer coefficients for a micro-fin tube were approximately 150 to 200% (9.52 mm OD tube) and 170 to 210% (5 mm OD tube) higher than those for the smooth tube at the same test conditions.



The effect of pressure drop, expressed by a measured penalty factor of 1.2 to 1.35, was smaller than that of the heat transfer enhancement. Carbon dioxide, among natural refrigerants, has gained considerable attention as an alternative refrigerant due to its excellent thermophysical properties. In the study by S.H. Yoon et al [4], a transcritical refrigeration cycle using carbon dioxide was considered, and the evaporation process was investigated by experiment and analysis. The paper presents the measured heat transfer coefficients and pressure drop during the evaporation of carbon dioxide in a horizontal smooth tube. The test section was made of a seamless stainless steel tube with an inner diameter of 7.53 mm and a length of 5 m. Heat was provided to the test section by a direct heating method. Experiments were conducted at saturation temperatures of -4 to 20 °C, heat fluxes of 12 to 20 kW/m2 and mass fluxes of 200 to 530 kg/m2s. A comparison of different heat transfer correlations applicable to the evaporation of carbon dioxide was made, and based on the experiments a useful correlation for the evaporation heat transfer was developed. The ideal refrigeration or heat pump cycle for a given purpose is defined in G. Lorentzen [5] by the boundary conditions of the application and is completely independent of the refrigerant used. The real cycle should approach the theoretical ideal as closely as practically possible; the thermodynamic and heat transfer properties of the refrigerant are important in this respect. Natural substances such as ammonia, propane and carbon dioxide are often better than the present halocarbons in this regard. By using simple safety methods it is possible to use these three natural fluids for practically all conventional refrigeration and heat pump systems.

III.  Work Progressed

In our work, in order to find the heat transfer characteristics of the proposed refrigerant mixtures, an experimental apparatus has been chosen from [1]. A schematic diagram of the experimental apparatus and test section used to investigate the evaporation heat transfer characteristics inside a tube is shown in fig. 1; the test loop consists of a magnetic gear pump, a mass flow meter (estimated error of 0.23%), a pre-heater and the test section. A test section of OD 5 mm and inner diameter 4 mm over a length of 1.4 m is shown in figs. 2 and 3. The inlet quality of the test section was adjusted by the electric power input supplied to the pre-heater. In order to avoid the thermal entry length effect on heat transfer, and to achieve high exit qualities at a low heat flux condition, a long test section was established. The magnetic pump circulates subcooled liquid from the liquid receiver to the pre-heater. The subcooling heat exchanger and pre-heater are installed to adjust the inlet quality of the refrigerant to a desired value. The mass flow meter is installed before the pre-heater to measure the flow rate of the refrigerant in the liquid phase. After exiting the mass flow meter, the refrigerant enters the test section and is evaporated while flowing through the tube. The subcooled liquid is heated by the test tube, which is heated by the supplied electricity with a low-voltage, high-current power supply (Joule heating). After leaving the test section, the CO2/propane refrigerant mixture is cooled down in another counter-flow heat exchanger and the vapor generated in the test section is condensed. A pump draws the liquid from the condenser to complete the cycle. With reference to this working model, a theoretical design has been made to evaluate the various heat transfer coefficients.

IV.  Experimental Setup

Fig. 1. Experimental setup

Fig. 2. Test Section - Cross section



Fig. 3. Test section

Fig. 4. Cross-sectional view of the model

V.  Theoretical Design for Evaluating the Heat Transfer Coefficient

To find the heat transfer coefficient of the proposed mixtures at various heat fluxes, inlet temperatures and mass compositions, a test section [1] of OD 5 mm and ID 4 mm over a length of 1.4 m (Fig. 1) has been chosen. The heat transfer coefficient is evaluated using the expression

Q = h × As × (Tf − Ts)

where
Q – heat transfer rate in kW
As – surface area of the test section in m²
Tf – inlet fluid temperature in K
Ts – saturation temperature of the refrigerant mixture, at the corresponding mass composition, in the liquid phase, in K.

Note: The saturation temperatures of the refrigerant mixtures are taken from REFPROP for the corresponding mass composition and pressure.
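To make the evaluation concrete, a minimal Python sketch of this expression is given below. The tube geometry (ID 4 mm, length 1.4 m) is taken from the test section described above; the heat input and the two temperatures are illustrative assumptions, not measured values from this work.

# Minimal sketch of h = Q / (As * (Tf - Ts)) for the test section above.
# The heat input and temperatures below are assumed, illustrative values.
import math

def heat_transfer_coefficient(q_w, d_inner_m, length_m, t_fluid_k, t_sat_k):
    """Average heat transfer coefficient in W/(m^2 K) over the tube's inner surface."""
    area = math.pi * d_inner_m * length_m        # inner surface area As, m^2
    return q_w / (area * (t_fluid_k - t_sat_k))  # h = Q / (As * (Tf - Ts))

h = heat_transfer_coefficient(q_w=500.0,         # assumed heat input, W
                              d_inner_m=4.0e-3,  # test-section ID, m
                              length_m=1.4,      # test-section length, m
                              t_fluid_k=278.15,  # assumed inlet fluid temperature, K
                              t_sat_k=268.15)    # assumed saturation temperature, K
print(f"h = {h:.1f} W/m2K")

For the values assumed here the sketch prints h of roughly 2.8 kW/m²K; with a measured Q, Tf and the REFPROP saturation temperature substituted in, it reproduces the evaluation described above.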

VI.  Model for Analysis

The 2-D model created for analyzing the heat transfer characteristics at various heat fluxes, inlet temperatures and mass compositions has an inner diameter of 4 mm and an overall length of 1.4 m, as shown in Figs. 4 and 5. The model is made of copper with a thermal conductivity of 406.7 W/m K, and a free tetrahedral mesh is applied to obtain better results (Fig. 6).

Fig. 5. Isometric view of the model



Fig. 6. Meshed model

VII.  Experimental Results (Figs. 7–10)

In the work reported in this paper, the evaporative heat transfer characteristics of CO2/propane refrigerant mixtures were investigated for various mass fluxes, heat fluxes, inlet temperatures and compositions. In addition, the evaporative heat transfer coefficient for horizontal flow was investigated. When the composition of CO2/propane is 75/25 (wt%), the heat transfer coefficient is the lowest. Nevertheless, when a 75/25 CO2/propane mixture is used in a refrigeration or air-conditioning system, the reduced operating pressure is one of its great benefits and may enhance the COP of the system; CO2/propane refrigerant mixtures can therefore be promising refrigerants, reducing the operating pressure relative to pure CO2. When the composition of CO2 in the mixture is higher, the heat transfer coefficient is comparatively lower than for the other compositions, because of the relatively strong contribution of CO2 during heat transfer; conversely, as the composition of propane increases, the heat transfer coefficient increases. The graphs below show the heat transfer characteristics for various mass compositions, heat fluxes and inlet temperatures.

For inlet temperature = −5 °C

Fig. 7. Graph showing variation of heat transfer coefficient with inlet temperature −5 °C

For inlet temperature = 0 °C

Fig. 8. Graph showing variation of heat transfer coefficient with inlet temperature 0 °C



For inlet temperature = 5 °C

Fig. 9. Graph showing variation of heat transfer coefficient with inlet temperature 5 °C

For inlet temperature = 10 °C

Fig. 10. Graph showing variation of heat transfer coefficient with inlet temperature 10 °C

VIII.  Conclusion

This paper has presented the heat transfer coefficients during the evaporation process of carbon dioxide in a horizontal smooth tube. Among the natural refrigerants, carbon dioxide has gained considerable attention as an alternative refrigerant owing to its excellent thermophysical properties. The in-tube evaporation heat transfer characteristics of carbon dioxide were investigated and analyzed as a function of evaporating temperature, heat flux and tube geometry. The heat transfer coefficient tends to decrease at the beginning of the evaporation process for pure CO2 and for the mixture with a high composition of CO2 (75/25 CO2/propane), because of partial dry-out along the circumference, the surface tension and viscosity of carbon dioxide being much smaller than those of conventional refrigerants; for the other mixtures and for pure propane, the heat transfer coefficient tends to decrease in the low mass quality region and then to increase again owing to convective boiling. When the composition of CO2/propane is 75/25 (wt%), the heat transfer coefficient is the lowest, and for 25/75 it is the highest. In future work, using the designed values of the heat transfer coefficient for the various parameters, a heat transfer analysis is to be carried out with CFX/FLUENT software on the 2-D model created, in order to validate them. With a better understanding of the problems encountered in solving the heat transfer analysis on the 2-D model, a 3-D model is then to be designed to validate the results obtained theoretically. This requires a good knowledge of the problems associated with phase change in flow through cylinders, and with the surface tension and friction associated with it.

References

[1] Jin Min Cho, Yong Jin Kim, Min Soo Kim (2010), “Experimental studies on the characteristics of evaporative heat transfer and pressure drop of CO2/propane mixtures in horizontal and vertical smooth and micro-fin tubes”, International Journal of Refrigeration 33, 170–179.

[2] Jin Min Cho, Yong Jin Kim, Min Soo Kim (2010), “Experimental studies on the evaporative heat transfer and pressure drop of CO2 and CO2/propane mixtures flowing upward in smooth and micro-fin tubes with outer diameter of 5 mm for an inclination angle of 45°”, International Journal of Refrigeration 33, 922–931.

[3] Jin Min Cho, Min Soo Kim (2007), “Experimental studies on the evaporative heat transfer and pressure drop of CO2 in smooth and micro-fin tubes of the diameters of 5 and 9.52 mm”, International Journal of Refrigeration 30, 986–994.

[4] S.H. Yoon, E.S. Cho, Y.W. Hwang, M.S. Kim, K.D. Min, Y.C. Kim (2004), “Characteristics of evaporative heat transfer and pressure drop of carbon dioxide and correlation development”, International Journal of Refrigeration 27, 111–119.



[5] Jung, D.S., Lee, H.S., Bae, D.S., Ha, J.C. (2005), “Nucleate boiling heat transfer coefficients of flammable refrigerants on various enhanced tubes”, International Journal of Refrigeration 28, 451–455.

[6] Kim, J.H. (2005), “Studies on the vapor-liquid equilibria of carbon dioxide/propane mixture and their performance in an air-conditioning system”, Ph.D. thesis, School of Mechanical and Aerospace Engineering, Seoul National University, Korea.

[7] Kim, Y.J., Cho, J.M., Kim, M.S. (2008), “Experimental study on the evaporative heat transfer and pressure drop of CO2 flowing upward in vertical smooth and micro-fin tubes with the diameter of 5 mm”, International Journal of Refrigeration 31, 771–779.

[8] Lee, H.S., Phan, T.T., Yoon, J.I. (2006), “Characteristics of hydrocarbon refrigerants on evaporating heat transfer and pressure drop”, International Journal of Air-Cond. Ref. 14, 102–109.

[9] Mathur, G.D. (1998), “Heat transfer coefficient for propane (R-290), isobutene (R600), and 50/50 mixture of propane and isobutene”, ASHRAE Trans.: Symp., 1159–1172.

[10] Kim, Y.J., Cho, J.M., Kim, M.S. (2008), “Experimental study on the evaporative heat transfer and pressure drop of CO2 flowing upward in vertical smooth and micro-fin tubes with the diameter of 5 mm”, International Journal of Refrigeration 31, 771–779.

[11] Y.C. Kim, K.J. Seo, J.T. Chung (2002), “Evaporation heat transfer characteristics of R-410A in 7 and 9.52 mm smooth/micro-fin tubes”, International Journal of Refrigeration 25, 716–730.

[12] E.W. Lemmon, M.O. McLinden, M.L. Huber (2002), “Reference Fluid Thermodynamic and Transport Properties (REFPROP)”, NIST Standard Reference Database 23, Version 7.0, Gaithersburg (MD, USA): National Institute of Standards and Technology.

[13] G. Lorentzen (1994), “Revival of carbon dioxide as a refrigerant”, International Journal of Refrigeration 17 (5), 292–301.

[14] D.S. Jung, M. McLinden, R. Radermacher, D. Didion (1989), “A study of flow boiling heat transfer with refrigerant mixtures”, International Journal of Heat and Mass Transfer 32, 1751–1764.



Design and Fabrication of Ultimate Chilling System

T. S. Ravikumar and S. Saravanan

Abstract — This paper demonstrates the working of an ultimate chilling system which can be used in the pharmaceutical industry and in R&D centres, e.g. to preserve medicines. The ultimate chilling system is a three-stage cascade refrigeration system. The main purpose of this system is to produce a temperature below −60 °C and to maintain that temperature constant in a conditioned space. The system has one main system and two subsystems, each with a different refrigerant (R134A, R404A and R23). The main system has R23 as refrigerant, subsystem 1 has R134A and subsystem 2 has R404A. The subsystem 1 evaporator acts as a condenser for subsystem 2, and the subsystem 2 evaporator in turn acts as a condenser for the main system. Using these three systems, temperatures below −60 °C can be achieved. The coefficient of performance calculations are also presented.

Index terms — Chilling system, cascade refrigeration system, refrigerant

I.  Introduction

A cascade refrigeration system can be considered equivalent to two independent vapor-compression systems linked together in such a way that the evaporator of the high-temperature system becomes the condenser of the low-temperature system. The working media of the two systems, however, are separated from each other. This allows the use of different refrigerants working at different temperature ranges to achieve the desired effect, which would otherwise need to be achieved by a single refrigerant working over a much wider operating pressure range. Cascade refrigeration systems are most commonly used in low-temperature applications such as the pharmaceutical industry (to preserve medicines) and research and development. Such a system achieves the refrigeration effect at a slower rate, and a conventional cascade refrigeration system can reach only about −50 °C. To rectify this drawback, the ultimate chilling system uses three systems: one main system and two subsystems. It achieves the refrigeration effect at a faster rate, can reach temperatures below −60 °C, and can maintain the temperature for any requirement. R23 refrigerant is used in the main system; R404A and R134A refrigerants are used in the subsystems.

T. S. Ravikumar and S. Saravanan are in the School of Mechanical Sciences, Hindustan University, Chennai, India (email: mech@hindustanuniv.ac.in).

Fig. 1. Block diagram of ultimate chilling system.

II.  Components of Ultimate Chilling System The components of different subsystems are as follows:

A.  Subsystem 1: Refrigerant: R134A, reciprocating compressor, air-cooled condenser, capillary tube & tube-in-tube heat exchanger 1.



B.  Subsystem 2: Refrigerant: R404A, reciprocating compressor, air-cooled condenser, capillary tube & tube-in-tube heat exchanger 2.

C.  Main System: Refrigerant: R23, reciprocating compressor, air-cooled condenser, capillary tube, evaporator coil & fan.

III.  Working Principles of Ultimate Chilling System

When the system is switched on, timers are used to switch on subsystem 2 and the main system in sequence: subsystem 1 is switched on first; after a two-minute delay, subsystem 2 is switched on; and after a further delay (four minutes from the start), the main system is switched on. The subsystem 1 (R134A) compressor starts running and compresses the low temperature, low pressure gas refrigerant from the suction line to a high temperature, high pressure gas. The refrigerant is then sent to the air-cooled condenser, where the gas condenses to liquid. The liquid refrigerant is passed through a capillary tube to tube-in-tube heat exchanger 1; in the capillary tube the liquid refrigerant expands to a low temperature and a low pressure. Tube-in-tube heat exchanger 1 acts as an evaporator for subsystem 1 and as a secondary condenser for subsystem 2: in it, the low temperature, low pressure liquid refrigerant from subsystem 1 (R134A) condenses the refrigerant from subsystem 2 (R404A). The low temperature, low pressure gas refrigerant from the heat exchanger is then sent to the suction line of the compressor, and the cycle continues in subsystem 1. The subsystem 2 (R404A) compressor likewise compresses the low temperature, low pressure gas refrigerant from the suction line to a high temperature, high pressure gas, which is sent to the air-cooled condenser and condenses to liquid. The liquid refrigerant is passed to tube-in-tube heat exchanger 1, where further condensation takes place; the low temperature liquid is then sent through a capillary tube, expanding to a low temperature and pressure, to tube-in-tube heat exchanger 2. Tube-in-tube heat exchanger 2 acts as an evaporator for subsystem 2 and as a secondary condenser

of the main system: in tube-in-tube heat exchanger 2, the low temperature, low pressure liquid refrigerant from subsystem 2 (R404A) condenses the refrigerant from the main system (R23). The low temperature, low pressure gas refrigerant from the heat exchanger is then sent to the suction line of the compressor, and the cycle continues in subsystem 2. In the main system (R23), the compressor compresses the low temperature, low pressure gas refrigerant from the suction line to a high temperature, high pressure gas, which is sent to the air-cooled condenser and condenses to liquid. The liquid refrigerant is passed to tube-in-tube heat exchanger 2, where further condensation takes place, and is then passed through a capillary tube to an evaporator coil, expanding to a low temperature and pressure. The refrigerant absorbs heat in the evaporator coil, and the low temperature, low pressure gas from the evaporator is sent to the suction line of the compressor. The cycle continues in the main system.
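The chain of tube-in-tube heat exchangers implies a simple energy bookkeeping: each stage must reject its evaporator load plus its compressor work to the stage above. The Python sketch below illustrates this, treating each tube-in-tube heat exchanger as the sole condenser of the stage below (an idealization of the arrangement described above); the 1 kW conditioned-space load is an assumed figure, and the stage COPs are the values computed later in Section V.

# Heat rejected up the cascade: Q_cond = Q_evap * (1 + 1/COP) per stage.
# The 1 kW space load is an assumed, illustrative figure.

def condenser_load(q_evap_kw, cop):
    """Heat a stage must reject = refrigeration load + compressor work."""
    return q_evap_kw * (1.0 + 1.0 / cop)

q_main = 1.0                              # assumed load on the R23 evaporator, kW
q_hx2 = condenser_load(q_main, 5.30)      # R23 rejects into heat exchanger 2
q_hx1 = condenser_load(q_hx2, 4.83)       # R404A rejects into heat exchanger 1
q_out = condenser_load(q_hx1, 3.02)       # R134A rejects to the air-cooled condenser
print(f"HX2: {q_hx2:.2f} kW, HX1: {q_hx1:.2f} kW, to ambient: {q_out:.2f} kW")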

Fig. 2. Schematic diagram of ultimate chilling system.

IV.  Design Calculation The following outlines the design calculations:

A.  Evaporator Calculation (Before Fabrication)

Available data:
1. Evaporator load = 3.516 kW
2. Shell and coil evaporator
3. Internal diameter of the copper pipe = 7.9375 × 10^-3 m
4. Outer diameter of the copper pipe = 9.525 × 10^-3 m
5. Inlet evaporator temperature = −65 °C (208 K)
6. Outlet evaporator temperature = −35 °C (238 K)
7. Mass flow rate of R23 = 1799 kg/sec
8. Velocity = 32.546 m/sec

Table 1. From R23 property table

Temperature (°C)   Thermal conductivity K (W/m K)   Prandtl number Pr
−65                0.073                            4
−35                0.073                            3.5

Solution:

Inside flow:
Re = μdi/ν = (32.546 × 7.9375 × 10^-3) / (0.211 × 10^-6) = 1.168 × 10^6
Nusselt number = 0.023 Re^0.8 Pr^n
Nusselt number = 0.023 × (1.168 × 10^6)^0.8 × 4^0.3 = 2490.577
Nusselt number = hi di / K:
2490.577 = hi × 7.9375 × 10^-3 / 0.073, so hi = 22905.46 W/m²K

Outside flow:
Re = μdo/ν = (32.546 × 9.525 × 10^-3) / (0.198 × 10^-6) = 1.56 × 10^6
Nusselt number = 0.023 Re^0.8 Pr^n
Nusselt number = 0.023 × (1.56 × 10^6)^0.8 × 3.5^0.3 = 3016.115
Nusselt number = ho do / K:
3016.115 = ho × 9.525 × 10^-3 / 0.073, so ho = 23115.59 W/m²K

To find U:
U = 1 / (1/hi + 1/ho) = 11505.022 W/m²K

To find Q:
Q = m Cp ΔT = 1799 × 0.950 × (238 − 208) = 51271.5 W

To find the area:
Q = U A ΔT
51271.5 = 11505.022 × A × (238 − 208), so A = 0.1485 m²

To find the length of the evaporator:
A = π do L
0.1485 = π × 9.525 × 10^-3 × L
L = 4.964 ≈ 5 m

After fabrication, the total length of the evaporator coil = 5 m.
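The sizing chain above can be checked end to end with a short script. The Python sketch below retraces the same steps (Dittus-Boelter correlation, overall coefficient, area, coil length); the kinematic viscosities 0.211 × 10^-6 and 0.198 × 10^-6 m²/s are assumptions where the printed exponents are ambiguous, and small differences from the printed intermediate figures come from rounding in the source.

# Sketch of the evaporator sizing above: Nu = 0.023 Re^0.8 Pr^n, then U, A, L.
import math

V = 32.546                      # refrigerant velocity, m/s
DI, DO = 7.9375e-3, 9.525e-3    # pipe inner / outer diameter, m
K = 0.073                       # thermal conductivity of R23, W/m K

def h_dittus_boelter(velocity, diameter, nu, pr, k, n=0.3):
    """Convective coefficient from the Dittus-Boelter form Nu = 0.023 Re^0.8 Pr^n."""
    re = velocity * diameter / nu
    nusselt = 0.023 * re**0.8 * pr**n
    return nusselt * k / diameter

h_i = h_dittus_boelter(V, DI, 0.211e-6, 4.0, K)   # inside flow, Pr at -65 C
h_o = h_dittus_boelter(V, DO, 0.198e-6, 3.5, K)   # outside flow, Pr at -35 C
U = 1.0 / (1.0 / h_i + 1.0 / h_o)                 # overall coefficient, W/m2 K

m_dot, cp, dT = 1799.0, 0.950, 238.0 - 208.0      # values as used above
Q = m_dot * cp * dT                               # heat duty in the paper's units
A = Q / (U * dT)                                  # required surface area, m2
L = A / (math.pi * DO)                            # evaporator coil length, m
print(f"hi={h_i:.0f}, ho={h_o:.0f}, U={U:.0f} W/m2K, A={A:.4f} m2, L={L:.2f} m")

Running the sketch gives a coil length close to the 5 m adopted after fabrication.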

V.  Coefficient of Performance Calculation for Ultimate Chilling System

Fig. 3. Pressure(P)-Enthalpy(H)diagram

Fig. 4. Temperature (T)-Entropy(S) diagram



A.  Main System (R23):
1-2 = isentropic compression in the compressor
2-3 = constant pressure heat rejection in the condenser
3-4 = isentropic expansion in the expansion device
4-1 = constant pressure heat addition in the evaporator

B.  Subsystem 2 (R404A):
5-6 = isentropic compression in the compressor
6-7 = constant pressure heat rejection in the condenser
7-8 = isentropic expansion in the expansion device
8-5 = constant pressure heat addition in the evaporator

C.  Subsystem 1 (R134A):
9-10 = isentropic compression in the compressor
10-11 = constant pressure heat rejection in the condenser
11-12 = isentropic expansion in the expansion device
12-9 = constant pressure heat addition in the evaporator

Available data:

Table 2. From refrigerant temperature and pressure chart

Refrigerant   Temperature (K)   Pressure (bar)   Enthalpy (kJ/kg)   Enthalpy of liquid (kJ/kg)
R23           238               8.5              340.7              -
              293               30.7             365.6              208.6
R404A         253               3.071            355.16             -
              303               14.28            378.14             244.03
R134A         263               2.006            244.52             -
              323               13.85            276.01             149.41

Expressions:

Inside flow:
Re = μdi/ν    (1)
Nusselt number = hi di / K    (2)
Nusselt number = 0.023 Re^0.8 Pr^n    (3)

Outside flow:
Re = μdo/ν    (4)
Nusselt number = ho do / K    (5)
Nusselt number = 0.023 Re^0.8 Pr^n    (6)

U = 1 / (1/hi + 1/ho)  (W/m²K)    (7)
Q = m Cp ΔT  (W)    (8)
Q = U A ΔT  (W)    (9)
COP_R23 = (h1 − h4) / (h2 − h1)    (10)
COP_R404A = (h5 − h8) / (h6 − h5)    (11)
COP_R134A = (h9 − h12) / (h10 − h9)    (12)
COP2 = (COP_R23 × COP_R404A) / (COP_R23 + COP_R404A + 1)    (13)
COP_ULTIMATE CHILLING SYSTEM = (COP2 × COP_R134A) / (COP2 + COP_R134A + 1)    (14)

Solution:
COP_R23 = (h1 − h4)/(h2 − h1) = (340.7 − 208.6)/(365.6 − 340.7) = 5.30
COP_R404A = (h5 − h8)/(h6 − h5) = (355.16 − 244.03)/(378.14 − 355.16) = 4.83
COP_R134A = (h9 − h12)/(h10 − h9) = (244.52 − 149.41)/(276.01 − 244.52) = 3.02
COP2 = (5.30 × 4.83)/(5.30 + 4.83 + 1) = 2.347
COP_ULTIMATE CHILLING SYSTEM = (2.347 × 3.02)/(2.347 + 3.02 + 1) = 1.11

VI.  Abbreviations
Re = Reynolds number (non-dimensional)
μ = velocity (m/s)
ν = kinematic viscosity (m²/s)
di = internal diameter of copper pipe (m)
do = outer diameter of copper pipe (m)
K = thermal conductivity (W/m K)
hi = inner heat transfer coefficient (W/m²K)
ho = outer heat transfer coefficient (W/m²K)
Pr = Prandtl number (non-dimensional)
Q = heat transfer (W)
m = mass flow rate (kg/sec)
Cp = specific heat (kJ/kg K)
ΔT = temperature difference (K)
U = overall heat transfer coefficient (W/m²K)
A = area (m²)
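As a numerical cross-check of expressions (10)-(14), the Python sketch below evaluates the stage and cascade COPs from the Table 2 enthalpies. Evaluating (13) directly gives about 2.30 for the R23/R404A pair (the solution above carries 2.347), and an overall COP close to the reported 1.11.

# Cross-check of the COP expressions using the Table 2 enthalpies (kJ/kg).

def stage_cop(h_evap_in, h_evap_out, h_comp_out):
    """Vapour-compression COP = refrigerating effect / compressor work."""
    return (h_evap_out - h_evap_in) / (h_comp_out - h_evap_out)

def cascade_cop(cop_low, cop_high):
    """Combined COP of two cascaded stages, as in eqs. (13) and (14)."""
    return (cop_low * cop_high) / (cop_low + cop_high + 1.0)

cop_r23 = stage_cop(208.6, 340.7, 365.6)        # (340.7-208.6)/(365.6-340.7) = 5.30
cop_r404a = stage_cop(244.03, 355.16, 378.14)   # = 4.83
cop_r134a = stage_cop(149.41, 244.52, 276.01)   # = 3.02

cop2 = cascade_cop(cop_r23, cop_r404a)          # R23 and R404A stages combined
cop_total = cascade_cop(cop2, cop_r134a)        # R134A stage added
print(f"COP_R23={cop_r23:.2f}  COP_R404A={cop_r404a:.2f}  "
      f"COP_R134A={cop_r134a:.2f}  COP2={cop2:.2f}  overall={cop_total:.2f}")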

VII.  Conclusions

This paper has described the design and fabrication of an ultimate chilling system. The results obtained from the project reported in this paper are: conditioned space temperature = −60 °C; coefficient of performance = 1.11.

References

[1] ASHRAE (1990), “Refrigeration Systems and Applications Handbook”, ASHRAE Inc., Atlanta.

[2] Arora (2002), “Refrigeration and Air Conditioning”, 2nd edition, Tata McGraw Hill, New Delhi.

[3] S.C. Arora and Domkundwar, “A Course in Refrigeration & Air-Conditioning”.

[4] Devanshu Pyasi (2010), “Performance analysis of 404a/508b cascade refrigeration cycle for low temperature”, International Journal of Engineering Science and Technology (IJEST), Vol. 2, No. 8, pp. 302–306.

[5] Hossein Amooie (2012), “Performance analysis of CO2/NH3 cascade refrigeration system using ANNs”, Journal of Advanced Computer Science and Technology, Vol. 5, No. 4, pp. 222–225.

[6] Kapadia (2011), “Comparative assessment of a cascade refrigeration cycle with different refrigerant pairs”, International Conference on Current Trends in Technology, Vol. 7, No. 5, pp. 106–116.

[7] Murat Hosoz (2005), “Performance comparison of single-stage and cascade refrigeration systems using R134a as the working fluid”, Turkish Journal of Engineering and Environmental Science, Vol. 2, No. 15, pp. 27–32.

[8] Parekh A.D., Tailor P.R. (2012), “Thermodynamic analysis of R507A-R23 cascade refrigeration system”, International Journal of Aerospace and Mechanical Engineering, Vol. 2, No. 72, pp. 342–345.

[9] Stoecker W.F. (1998), “Industrial Refrigeration Handbook”, McGraw Hill, New York.

[10] Tailor (2012), “Thermodynamic analysis of R507A-R23 cascade refrigeration system”, International Journal of Aerospace and Mechanical Engineering, Vol. 9, No. 24, pp. 22–27.



Review of Electrical Discharge Machining Process

K. Viswanathan, P. Sengottuvel and J. Arun

Abstract — Machining of hard materials like hardened steel, super alloys, carbides and ceramics with intricate shapes and profiles poses challenges when carried out with conventional machining processes. Non-conventional machining processes are used to machine materials of different characteristics (conductive, non-conductive, composites) of any size and shape, with high precision and surface quality [1]. Electrical Discharge Machining (EDM) is one of the most effective non-conventional processes, best suited for the machining of hard materials. EDM is a thermal material removal process and achieves a high metal removal rate, better surface finish and greater dimensional accuracy with less tool wear. EDM finds wide application in the aerospace, automobile, defence, medical and other industries [9]. In this paper the different EDM processes and the critical parameters that affect them are described.

Index terms — Electrical Discharge Machine, Material Removal Rate, Tool Wear Rate, Surface Roughness.

I.  Introduction

Conventional machining processes such as turning, milling and drilling are ineffective in machining advanced materials like composites, ceramics and super alloys, resulting in poor MRR, excessive TWR and increased SR [9]. The advanced materials have superior properties such as high strength, high bending stiffness, good damping capacity, low thermal expansion and better fatigue characteristics, which make them potential materials for industrial application [8]. Manufacturing industries face the challenges posed by these advanced materials, which are hard to machine and require high precision and surface quality at increased machining cost. Non-conventional machining processes overcome these hurdles and offer a fitting solution with advanced methodology. EDM, one of the non-conventional machining processes, has firmly established its use in the production of forming tools and dies and in the machining of advanced materials [7].

K. Viswanathan is Associate Professor, School of Mechanical Sciences, Hindustan University, Chennai, India. P. Sengottuvel is Professor, Mechatronics Engineering, Bharath University, Chennai, India (email: sengottuvel@yahoo.com). J. Arun is PG Student, Sri Shanmuga College of Engineering & Technology, Tiruchengodu, Tamilnadu, India.

II.  History of EDM Process

The Russian scientists Boris and Natalya Lazarenko investigated (1943) the wear caused by sparking between tungsten electrical contacts; the phenomenon was then put to deliberate use, with controlled sparking employed as an erosion method [2]. In 1947, American scientists developed a process to remove broken drills and taps from aluminium castings by sparking. The process initially started at 60 sparks per second and was further developed up to 1000 sparks per second. The first EDM machine was produced in 1950, and die-sinking machines became reliable and produced surfaces of controlled quality. In 1976 the first CNC EDM machine was produced. Research on EDM process control emerged in 1980 [3]. During the 1990s, the use of fuzzy control, neural networks, response surface methodology, central composite designs and Taguchi optimisation led to further developments [1].

III.  EDM Working Principle

EDM is a process that removes material from the workpiece by recurring current discharges between two electrodes, the tool and the workpiece. The two electrodes are separated by a dielectric liquid and subjected to an electric voltage, as shown in Figure 1 [3]. The tool electrode is moved downwards towards the work material until the spark gap is small enough that the intensity of the electric field between the electrodes becomes greater than the strength of the dielectric, allowing current to flow between the electrodes [4]. The impressed voltage is great enough to ionize the dielectric. Short-duration discharges (measured in microseconds) are generated in the liquid dielectric gap which separates the tool and the workpiece, and material is removed from both tool and workpiece, in the form of debris, by the erosive effect of the electrical discharges. EDM involves no direct contact between the electrode and the workpiece [9], which eliminates mechanical stresses, chatter and vibration problems during machining [11]. EDM is a thermal material removal process with three phases: the ignition phase, the discharge phase and the end of pulse [2]. The thermal energy generates a channel of plasma between the cathode and anode at a temperature of 8000 to 12,000 °C [6]. With the pulsating direct current supply occurring at a rate of 15,000–30,000 Hz, the plasma channel breaks down [5]. This causes a sudden reduction in temperature, allowing the circulating dielectric fluid to implode the plasma channel and flush the molten material from the pole surfaces in the form of microscopic debris. EDM processes are extensively used in prototype production in the aerospace, automotive, electronics and medical industries [1]. EDM processes are also used in coinage die making, small-hole drilling and the machining of intricate shapes and profiles [9].

Fig. 1. Schematic diagram of EDM [Bhavesh A. Patel et al, 2013]

IV.  Types of EDM

Electrical discharge machines are classified, based on their working principles, as Sinker EDM (die sinking), Wire EDM, Dry EDM and Rotary disc electrode EDM [4].

A.  Die Sinking

In the die-sinking EDM process, the tool electrode is the replica of the profile to be machined in the work material [12]. This process enables the manufacture of accurate and complex-shaped cavities. Die-sinking EDM consists of an electrode and workpiece submerged in a dielectric fluid. Figure 2 shows the schematic diagram of die-sinking EDM; the electrode and workpiece are connected to a suitable power supply, which generates an electric potential between the two parts [8]. As the electrode approaches the workpiece, dielectric breakdown occurs in the fluid, forming a plasma channel, and a spark jumps. As the base metal is eroded, the spark gap increases; hence the electrode is lowered automatically so that the process continues [12].

Fig. 2. Schematic diagram of Die-sinking EDM [Kozak J., Rajurkar K.P. 2000]

Die sinking provides close tolerances, high surface finish and a finished product with little or no burring. It can also cut exotic and hard materials with little or no polishing after the process. Die sinking cuts very thin or delicate materials without damage [12].

B.  Wire EDM

EDM wire cutting uses a metallic wire to cut a programmed contour in a workpiece. A thin wire is fed from a spool and held between upper and lower guides, which are made of diamond. The guides are CNC-controlled and can move independently in multiple axes, which gives the ability to cut intricate shapes, such as a circle at the bottom and a square at the top [13]. Cutting forces are low, and hence little residual stress is caused. Extrusion dies and blanking punches are very often machined by wire cutting. The wire is usually made of brass or stratified copper, is between 0.1 and 0.3 mm in diameter, and the accuracy of machining is up to one µm [3]. A part can be wire-cut either in one cut, or roughed and then skimmed. In a one-cut job the wire ideally passes through a solid part and drops a slug or scrap piece when it is done. In a skim cut, the wire is passed back over the roughed surface with a lower power setting and a low-pressure flush. A skim pass can remove as much as 0.005 mm of material or as little as 0.0003 mm. In roughing (i.e. the first cut) the water is forced into the cut at high pressure to provide plenty of cooling and to eliminate eroded particles as fast as possible, whereas in skimming the water flows gently over the burn so as not to deflect the wire [8].

C.  Dry EDM

Dry EDM uses a gas or gas-liquid mixture as the two phases of the dielectric fluid, which allows the concentration of the liquid and the properties of the dielectric to be tailored to the desired performance responses. In dry EDM, the tool electrode is formed as a thin-walled pipe [10]. A high MRR can be obtained when cutting high-strength engineering materials with high-pressure oxygen gas or air supplied through the pipe. The role of the gas is to remove the debris from the gap and to cool the inter-electrode gap. The technique was developed to reduce the pollution caused by liquid dielectrics, which produce vapour during machining, and the cost of managing the waste. Helium and argon can also be used as dielectric media to drill holes using copper electrodes [14]. Introducing oxygen gas into the discharge gap increases the material removal rate when water is used as the dielectric medium.

D.  Rotary Disc Electrode EDM

Machining using a rotary disc electrode has been developed in recent years. Studies of micro-electro-mechanical systems (MEMS) have resulted in the manufacture of small products such as micro-pumps, micro-engines and micro-robots that have been successfully used in industrial applications [11]. The technique of precision machining for such small devices has become increasingly important. Rotary disc electrode electrical discharge machining is one of the variant processes, in which unwanted material is removed in the form of debris by a series of recurring electrical discharges, created by electric pulse generators in microseconds, between a rotary tool called the disc and the workpiece, in the presence of a dielectric fluid such as kerosene or distilled water. Experimental study reveals that in rotary EDM the material removal rate is improved [7].

V.  Important Parameters of EDM

The various important parameters of EDM are:

●● Spark on-time (pulse on-time or Ton) is the duration in μs for which the current is allowed to flow per cycle. Material removal is directly proportional to the amount of energy applied during this on-time period. This energy is controlled by the peak current and the length of the on-time [1].
●● Spark off-time (pause time or Toff) is the duration in μs between sparks. This time allows the molten material to solidify and to be washed out of the arc gap. This parameter affects the speed and the stability of the cut; too short an off-time causes unstable sparks [9].
●● Arc gap (or gap) is the distance between the electrode and the workpiece during the EDM process. It may also be called the spark gap, and is maintained by a servo system [11].
●● Discharge current (current), measured in amperes, is directly proportional to the material removal rate [7].
●● Duty cycle (τ) is the percentage of the on-time relative to the total cycle time. It is calculated by dividing the on-time by the total cycle time [6]; a worked sketch is given after this list.
●● Voltage (V) is the potential, measured in volts, which influences the material removal rate.
●● Overcut is the clearance per side between the electrode and the workpiece after the machining operation.
●● MRR, TWR and SF are the main process outputs of EDM. Overall low process efficiency and high TWR are the challenges in the EDM process; coating the tool electrode with novel materials and tool-wear compensation, in addition to the conventional method of using multiple electrodes, are suggested [9].
●● Influence of parameters: the EDM process uses an RC-type generator, which can produce pulses from a few tens of nanoseconds to a few microseconds. Experiments have been conducted on a specimen of 50 mm diameter and 6 mm thickness using copper electrodes with four geometries: circle, square, rectangle and triangle. The power supply can vary the voltage from 45 V to 120 V. The input parameters are capacitance, discharge voltage and electrode material, and the responses are MRR and SF; S/N ratios are used to analyze the responses for a given set of input parameters [4]. Experimental results for SF and MRR vs. current intensity show that as the difference between pulse on-time and pulse off-time increases, both outputs worsen (MRR decreases and SR increases); as the two come closer to each other, both output parameters improve [3].
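Since the duty cycle is plain arithmetic, a small Python sketch is given below as an illustration; the pulse settings are assumed values, and the per-pulse energy estimate E = V × I × Ton is a common first-order approximation rather than a result from the studies cited.

# Duty cycle and approximate discharge energy for one EDM pulse.
# All numerical settings below are illustrative assumptions.

def duty_cycle(t_on_us, t_off_us):
    """Duty cycle (fraction) = Ton / (Ton + Toff)."""
    return t_on_us / (t_on_us + t_off_us)

def pulse_energy_joules(voltage_v, current_a, t_on_us):
    """First-order discharge energy per pulse, E = V * I * Ton."""
    return voltage_v * current_a * t_on_us * 1e-6   # us -> s

t_on, t_off = 100.0, 50.0                          # microseconds (assumed)
tau = duty_cycle(t_on, t_off)
e_pulse = pulse_energy_joules(45.0, 12.0, t_on)    # 45 V, 12 A (assumed)
print(f"duty cycle = {tau:.0%}, energy per pulse = {e_pulse * 1e3:.1f} mJ")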

VI.  Conclusion

EDM has emerged as one of the most cost-effective and high-precision machining processes of recent years. Its capacity to machine hard and difficult-to-machine parts has made EDM one of the most important machining processes. A review of the research trends of the last 50 years in the EDM process, its applications and the influence of its critical parameters has been presented. EDM plays a significant role in the medical, optical, jewellery, automotive and aeronautic industries. Such applications require the machining of high strength temperature resistant (HSTR) materials, which demands strong research and development and prompts EDM machine tool manufacturers to improve the machining characteristics. Hence, further research is required to explore effective means of improving the performance of the EDM process.

References

[1] Sengottuvel, P., Satishkumar, S. and Dinakaran, D. (2013), “Optimization of Multiple Characteristics of EDM Parameters Based on Desirability Approach and Fuzzy Modeling”, Procedia Engineering, Vol. 64, pp. 1069–1078.

[2] Anand Pandey and Shankar Singh (2010), “Current research trends in variants of Electrical Discharge Machining”, International Journal of Engineering Science and Technology, Vol. 2(6), pp. 2174–2181.

[3] Amorim F.L. and Weingaertner W.L. (2004), “Die-sinking electrical discharge machining of a high-strength copper-based alloy for injection moulds”, Journal of the Brazilian Society of Mechanical Sciences and Engineering, Vol. 26(2).

[4] Bhavesh A. Patel and Patel D.S. (2013), “Influence of electrode geometry and process parameters on surface quality and MRR in EDM using Artificial Neural Network”, Vol. 3(1), pp. 1645–1647.

[5] Bojorquez, B., Marloth, R.T. and Es-Said, O.S. (2002), “Formation of a crater in the work piece on an electrical discharge machine”, Engineering Failure Analysis, Vol. 9, pp. 93–97.

[6] McGeough, J.A. (1988), “Electro discharge machining”, in Advanced Methods of Machining, Chapman & Hall, London, p. 130.

[7] Hassan El-Hofy (2005), Advanced Machining Processes, McGraw-Hill, pp. 36–37.

[8] Kozak J. and Rajurkar K.P. (2001), “Selected Problems of Hybrid Machining Processes”, Advances in Manufacturing Science and Technology, Vol. 109, pp. 360–366.

[9] Sengottuvel, P., Satishkumar, S. and Dinakaran, D. (2012), “Multi Objective Optimization of Process Parameters During Electrical Discharge Machining of Inconel 718 Using Desirability Approach”, International Journal of Applied Mechanics and Materials, Vol. 159, pp. 176–180.

[10] Manish Vishwakarma, Vishal Parashar and Khare V.K. (2012), “Advancement in Electric Discharge Machining on metal matrix composite materials in recent: A Review”, International Journal of Scientific and Research Publications, Vol. 2(3), pp. 2250–3153.



[11] Maradia U., Boccadoro M., Stirnimann J., Beltrami I., Kuster and Wegener K. (2012), “Die-sink EDM in meso-micro machining”, Proc. of 5th Conference on High Performance Cutting, Institute of Machine Tools and Manufacturing, ETH Zurich, Zurich 8092, Switzerland.

[12] Singh S., Maheshwari S. and Pandey P.C. (2004), “Investigations into the electric discharge machining of hardened tool steel using different electrode materials”, Journal of Materials Processing Technology, Vol. 149, pp. 272–277.

[13] Sengottuvel, P., Satishkumar, S. and Dinakaran, D. (2011), “Optimization of Electrical Discharge Machining Parameters for Inconel 718 Using Grey Relational Analysis, SME ID no: JME111119”, Journal of Manufacturing Engineering, Vol. 6, pp. 255–259.



Community Colleges to SEmpower the Youth to Transcend Social Barriers

Aby Sam and Akkara Sherine

Abstract — India is faced with a problem of unskilled and unqualified school drop-outs who complain of the non-availability of jobs. Though the country claims to have a large number of educational institutions, one of the greatest concerns is the low Gross Enrolment Ratio (GER) of 12.5%, which is far less than the global average. Adolescents aged between 10 and 19 years form 21.4 percent of India's total population (National Youth Policy 2000). The need of the hour is a skilled adolescent populace, for which Hindustan Community College (HCC) has taken the initiative to work for the welfare of the people and prevent drop-out students from becoming street vendors, labourers and anti-social elements, a threat to the social fabric of the country. The author has coined the term “SEmpower”, wherein “S” refers to the skills needed to equip or ‘Empower’ the youth through skills-based and need-based education. The paper presents the case study and the success story of the students enrolled in HCC over a period of three years: the statistics show that 91.6% have been placed and 8.4% have opted for higher education.

Index terms — People-skills, skill-based and need-based education, SEmpower: “S”-skills to Empower.

I.  Introduction

Hindustan Group of Institutions has taken an initiative to provide need-based and skill-based education to adolescents who have been deprived of higher education due to financial and social constraints. Hindustan Community College (HCC) was established in the year 2010, in association with the Indian Centre for Research and Development of Community Education (ICRDCE), Chennai, Tamil Nadu Open University (TNOU) and industries. HCC offers a number of skill development courses of one year duration, aimed at empowering rural and economically backward students from the neighbouring villages who have dropped out of the formal education system or who could not pursue higher education due to various economic constraints. The target group enrolled in the programme were school drop-outs working as construction workers, casual labourers, house maids and cleaners.

Aby Sam is Director, Hindustan University, Chennai, India (e-mail: abysam@hindustanuniv.ac.in). Akkara Sherine is in the Department of Applied Sciences, Hindustan University, Chennai, India (e-mail: sherinej@hindustanuniv.ac.in).

The main aims of the establishment of Hindustan Community College are:

●● To provide skill-based education, an apt livelihood, and enhanced education and eligibility for employment for the poor, marginalized and disadvantaged sections of society without any disparity.
●● To offer appropriate, need-driven programmes keeping abreast of the latest trends in the field of education, emphasizing the skills-set requirements of employers.
●● To open new routes for the career shaping of students, focus on holistic development, prepare the students to face the real world, develop people skills and help them transcend social barriers.

Need analysis formed an important step in finding the employment needs of the local area before the establishment of the community college. The courses offered comprised Desktop Publishing, Computer Application and Data Entry, Hardware Servicing, Bar-bending and Steel Reinforcement, Health Assistance, Ophthalmic Technical and Beautician courses. The syllabus also included training in Communication Skills, Computing Skills and Work Skills, and hands-on experience for employment in collaboration with more than ten industries.

II.  The Concept of Community College

The Community College is an alternative system of education aimed at the empowerment of the disadvantaged and the underprivileged (the urban poor, rural poor, tribal poor and women) through appropriate skills development leading to gainful employment, in collaboration with the local industry and the community, so as to achieve skills for employment and self-employability for these sections of society. It is an innovative educational alternative, rooted in the community, providing holistic education and eligibility for employment to the disadvantaged. The Community College promotes job-oriented, work-related, skill-based and life-coping education. The key words of the Community College system are: access; flexibility in curriculum and teaching methodology; cost effectiveness and equal opportunity; collaboration with the industrial, commercial and service sectors of the local area; responsiveness to the social needs and issues of the local community; internship and job placement within the local area; promotion of self-employment and small business development; and declaration of competence and eligibility for employment.

Fig. 1. Students enrolled in Universities

Fig. 2. Students enrolled in Colleges

III.  Economic Scenario in India

India is the largest democracy in the world, with a population of 1.3 billion, and the 10th largest economy in the world by GDP. It is the third largest economy by purchasing power parity. India is the youngest nation in the world, with 54% of the population under the age of 25 years; the average age by 2020 will be 29 years, as compared to 40 in the US, 34 in China, 47 in Europe and 46 in Japan. There is a total workforce of 459 million people. The existing challenge is the lack of penetration of the nation's economic and infrastructural progress into the hinterland (rural areas), a case of unbalanced development.

IV.  Education Scenario in India

India boasts the third largest education system in the world. In the past 65 years (post-independence), the Indian education system has created a large pool of men and women with robust scientific and technological capabilities, sensitive humanist and philosophical thought, and profound capability. The statistics represented in the bar charts (Fig. 1, Fig. 2 and Fig. 3) indicate the growth of the education system in the country, in terms of universities, colleges and the strength of students enrolled, during the period 1950–2013.

Fig. 3. Strength of students in 1950 & 2013

As against 30 universities, 700 colleges and 400,000 students in 1950, there are now 19–20 million students in 600 universities and 35,000 colleges. Yet the current GER is 12–12.5%, which is far below the global average. The Govt. of India has set a highly impressive and aggressive target of achieving 30% GER by 2020, which means an enrolment of 40 million students at the tertiary level as against 18–20 million now. The most important question that arises here is the fate of the youth.



V.  Statistics of School Drop-outs at Various Levels

According to surveys in 2001 and 2008, the drop-out percentages of students at the primary, middle school and secondary levels are as indicated in the charts below.

This research paper focuses on the fate of all those youth who drop out of the formal education system between the ages of 11 and 17 years, and on the remedy, responsibility and initiatives focusing on underprivileged youth. These children in the age group of 11–17 years are left to face life and its challenges without appropriate skills and knowledge, and finally become part of a cheap labour force to be exploited. A few may even end up in the hands of anti-social elements engaged in illicit liquor trade, smuggling, theft, underworld activities etc., which become a threat to the social and economic fabric of the nation.

VI.  Concept of Community College

Fig. 4. Drop-out rates in school--2008

Fig. 5. Drop-out rates in school--2001

The survey of 2008, represented in Fig. 4, indicates that 160 million students enrol in primary school, of whom only 10 to 12% reach the tertiary level (higher education); 90% drop out at various stages for various reasons, such as affordability, poor economic conditions and other priorities. In the state of Tamil Nadu, according to statistics from the National Informatics Centre (2001) represented in Fig. 5, the drop-out rate was 14.5% at the primary, 35.6% at the middle school, 57.5% at the high school and 82.3% at the higher secondary level. The statistics indicate that roughly 50% drop out at each stage of schooling.

VI.  Concept of Community College

The community college is founded on the motto “Including the Excluded, Giving the Best to the Least”. It is an alternative system of education aimed at the empowerment of the disadvantaged and the underprivileged (the urban poor, rural poor, tribal poor and women), leading to gainful employment through appropriate skill development in collaboration with the local industries and the community, or to self-employment. The community college is an innovative educational alternative that is rooted in the community. In India, exclusion from the mainstream of society has mainly been social exclusion based on caste, religion etc., now brought to a minimal scale through the various efforts of the State and Central Governments, NGOs etc.; economic exclusion is based on poverty, unemployment and social insecurity. The second keyword, giving “the best to the least”, means extending the benefits of social and economic reforms, technological and scientific advancement and industrial growth to people living in rural areas and difficult terrains. India has the largest share of youth population, which needs to be channelled into diverse and multi-level occupational areas. Only 5% of the Indian labour workforce between the ages of 20 and 24 years have obtained the skills-set expected by employers. Skill development has become a national agenda in India and is receiving a major policy thrust; India has set an ambitious target of creating a 500 million strong, globally employable workforce by 2022. Community colleges in India serve the purpose of fulfilling the needs of school drop-outs, enabling them to acquire the necessary skills for a livelihood and a formal qualification for social status and social recognition. The term “SEmpower” refers to empowering and equipping the youth with the requisite skills based on the needs of the industry. The sole aim of the Hindustan Community College is to SEmpower the youth and help them overcome social barriers. Luckerson Victor, in the article “Career Strategies: Can Community Colleges Put Americans Back to Work?”, explains the role of community colleges, stating that “community colleges have long played a key role as an entry way to better career opportunities for adults in the workforce” [1]. Research shows that the regular courses and curricula in most colleges are insufficient; specific skills-set training is the need of the hour if adolescents are to meet the requirements of employers. The term ‘people skills’ or ‘soft skills’, a complex concept, has several synonyms. It is called “World Skills” in the US and Australia; “Employability skills” in Canada and Australia; “Core skills” or “Common skills” in the UK; “Key skills” in the UK, Australia and Germany; “Critical enabling skills” in Australia; “Transferrable skills” in France; “Trans-disciplinary goals” in Switzerland; “Process independent qualifications” in Denmark; and “Basic skills,” “Necessary skills,” and “Workplace know-how” in the US. Robert W. Glenn and Katheryn C. Keene of the Issues Management Group summarized the findings of the “Smyth County Workforce Development Demand Profile 2003” thus: there was a virtually unanimous, across-the-board profile of the skills and characteristics needed to make a good employee; while often referred to as “soft skills”, as in virtually all other studies, these interviews clearly show that such skills are as important to an employer looking to hire as traditional hard skills, regardless of industry or job type; and the most common traits, mentioned by virtually every employer, were a positive work ethic, attitude and the desire to learn and be trained [2]. People skills are the skills most sought after by employers. Explicit training in these skills ensures better performance in the world of work and also helps individuals tackle problems in real-life situations. The introduction of people skills training as part of the curriculum at Hindustan Community College has enabled the youth to succeed in education, job training, independent living and community participation, and ultimately helps them in the workplace.

VII.  Which Skills are Needed to Succeed?

The National Collaborative on Workforce and Disability (NCWD) for Youth, in the article “Helping Youth Develop Soft Skills for Job Success: Tips for Parents and Families”, explains the importance of these skills: in the 1990s, several initiatives attempted to classify the types of skills needed to succeed in the workplace and adult life. Included among these efforts were the 1991 Secretary of Labor’s Commission on Achieving Necessary Skills (SCANS) and the Equipped for the Future framework (EFF), which was the result of a 10-year initiative by the National Institute for Literacy (NIFL). The NIFL effort is the most holistic in that it addresses some key foundational “hard skills”, specifically reading, writing and mathematics, along with the important soft skills needed not only in the workplace but as members of families and society [3].

VIII.  Drop-Out Crisis

According to Sunita Chugh, “The dropout problem is pervasive in the Indian education system. Many children, who enter school, are unable to complete secondary education and multiple factors are responsible for children dropping out of school” [4]. The probable risk factors are present even before the students enrol in school. The main causes of drop-out are poverty, parents with a poor educational background, a weak family structure, the pattern of schooling of siblings, and a lack of pre-school experience. School drop-out is primarily attributed to economic status and poor parental education. To reiterate this point, several researchers have consistently found that socio-economic status, most commonly measured by parental education and income, is a powerful predictor of school achievement and drop-out behaviour (Bryk and Thum, 1989) [5]. To better help students stay in the education system, schools are adopting a whole-school approach, in which the entire school community is involved in providing support to students who are potential drop-outs, and resources are synthesized to meet the three aspects of these students’ needs: emotional, behavioural and learning needs.

IX.  Role of Community Colleges in India Getting high school dropouts back on the path to graduate from high school, enroll in higher education and enter a promising career requires collaboration among the stakeholders namely the students, parents, educational


92   HINDUSTAN JOURNAL, VOL. 6, 2013

institutions and employers. Community colleges must play a big role in these efforts. Community colleges can play a big role in increasing the Gross Enrolment Ratio (GER) in the country too. Human Resource Development Minister of India, Kapil Sibal, opined in the global summit that, “community colleges could play a significant role in addressing the shortage of skilled workforce . . . admitting that skill development was a big challenge for the government, he said plans were on the anvil to start 100 colleges during the current academic session to impart vocational training. Of these 100 community colleges, Canada will collaborate in setting up of 10 colleges” [6]. The Indian Centre for Research and Development of Community Education (ICRDCE) has pioneered the formation of Community Colleges in India and initiated the implementation of an alternative system of education for the poor and downtrodden from 1995. Dr. Fr. Xavier Alphonse, S.J., Director ICRDCE, spoke about the Community Colleges as an Alternative System of Education and mentioned in the interview that “the vital aspect in the community colleges is the industryinstitution linkages . . . the three components that are focused in the entire curriculum of the community college system revolves around the - Attitude, Skills and Knowledge” [7]. The community college programme provides a specially customised and more holistic programme for students better suited to vocational education. It aims to help these students stay in the education system and equip them with the necessary skills and values for work life. The students’ performance is evaluated by the teachers handling life skills and work place skills. Self analysis of the individual students is also taken into account as part of the self-evaluation process to enhance the confidence of the students.

X.  Role of Community Colleges at the Global Level

Community colleges have a major role in global prosperity, bearing a special responsibility for workforce development through partnerships with business and industry. The major role of community colleges is to develop the skills-set requisites of potential employers and to provide job training, retraining, certification and skills improvement. In addition, they assume primary responsibility, within the public system, for offering developmental courses, programmes and other educational

services for individuals who seek to develop the skills needed to pursue college-level study or enter the workforce. Community colleges across the globe cater to students’ requirements by providing a conducive and comfortable environment where the ideas, needs and contributions of all the students are respected and taken care of. Apart from their primary focus of catering to workforce needs, academic and personal support services are also provided at community colleges. Emphasizing the importance and role of community colleges in the article “Building American Skills through Community Colleges”, President Barack Obama states that “In an increasingly competitive world economy, America’s economic strength depends upon the education and skills of its workers. In the coming years, jobs requiring at least an associate degree are projected to grow twice as fast as those requiring no college experience. To meet this need, two national goals were set: by 2020, America will once again have the highest proportion of college graduates in the world, and community colleges will produce an additional 5 million graduates” [8]. As the largest part of the nation’s higher education system, community colleges enroll more than 6 million students and are growing rapidly. They feature affordable tuition, open admission policies, flexible course schedules and convenient locations. Community colleges are the “unsung heroes” of the American education system, President Obama said during a White House summit on community colleges. Community colleges are particularly important for students who are older, working, or in need of remedial classes. Community colleges work with businesses, industry and government to create tailored training programmes to meet economic needs in fields like nursing, health information technology, advanced manufacturing and green jobs. In the article by Elizabeth Redden, “The ‘Community College’ Internationally”, the need for skills development and its recognition at the global level is reiterated by Tully Cornick, Executive Director of Higher Education for Development, which coordinates collaborations between American colleges and the United States Agency for International Development. According to Cornick, “There is a recognition around the world, and it manifests itself somewhat differently (in different countries), that community colleges, as one element of higher education system, have something very significant to offer to segments of the population – youth at risk, or those who left school and realize that they need skills development” [9].



In Australia, the learning offered by community colleges has changed over the years. By the 1980s many colleges had recognised a community need for computer training, and since then thousands of people have been up-skilled through IT courses. In the Philippines, a community school functions as an elementary or secondary school during the daytime and converts into a community college towards the end of the day. This type of institution offers night classes under the supervision of the same principal, with the same faculty members given a part-time college teaching load. Industries play an important role in community colleges. In several community colleges across India, industry representatives offer to teach the students according to the required skill sets. This is a boon for the stakeholders, namely the students, the institution and the industry. Industries also offer guidelines for the curriculum, which are incorporated into the community college programme. This further enhances the career prospects of the students enrolled in such colleges.

XI.  Vision and Mission of the Community Colleges World-wide
The vision of the community college is to be of the community, for the community and by the community, and to produce responsible citizens. The community college promotes job-oriented, work-related, skill-based and life-coping education. The key features of the community college system are: access; flexibility in curriculum and teaching methodology; cost effectiveness; equal opportunity in collaboration with the industrial, commercial and service sectors of the local area; responsiveness to the social needs and issues of the local community; internship and job placement within the local area; promotion of self-employment and small business development; and declaration of competence and eligibility for employment.

XII.  Views of the Industrial Collaborators
The industrial collaborators find that the community college system is initiated by service-minded organisations and that its delivery is excellent, because it shapes the future of the student and helps school dropouts. They also observe that there is close scrutiny of, and feedback on, the student throughout the training. The system is most suitable for the economically weaker sections.

The community colleges show the way for the poor to come up. The industrial collaborators feel that they are also sharing in the mission of reaching out to the poor and the most deserving. The community college is a bold concept in the field of education [10].

XIII.  21st Century Skills Development and Community College Curriculum
Fundamental changes in the economy, jobs, and businesses are driving new and different skill demands. Today more than ever, individuals must be able to perform non-routine, creative tasks if they are to succeed. While skills like self-direction, creativity, critical thinking, and innovation may not be new to the 21st century, they are newly relevant in an age where the ability to excel at non-routine work is not only rewarded, but expected as a basic requirement. Whether a high school graduate plans to enter the workforce directly, or to attend a vocational school, community college, or university, he or she must be able to think critically, solve problems, communicate, collaborate, find good information quickly, and use technology effectively. These are today's survival skills, not only for career success, but for personal and civic quality of life as well. To succeed in academics, career and life in the 21st century, students must be supported in mastering both content and skills. "Curriculum and Instruction: A 21st Century Skills Implementation Guide", produced by the Partnership for 21st Century Skills, gives a detailed account of 21st century skills, which include: core subjects and 21st century content (global awareness; financial, economic, business and entrepreneurial literacy; civic literacy; and health and wellness awareness); learning and thinking skills (critical thinking and problem solving skills, communication skills, creativity and innovation skills, collaboration skills, contextual learning skills, and information and media literacy skills); information and communications technology literacy; and life skills (leadership, ethics, accountability, adaptability, personal productivity, personal responsibility, people skills, self-direction and social responsibility) [11]. For students, proficiency in 21st century skills (the skills, knowledge and expertise students must master to succeed in college, work and life) should be the outcome of a 21st century education. To be "educated" today requires mastery of core subjects, 21st century themes and 21st century skills. And both



students and educators need learning environments that are conducive to results [12]. The Hindustan Community College "Life and Career Skills" syllabus is based on the aspects listed by the 21st Century Skills framework mentioned above. Today's life and work environments require far more than thinking skills and content knowledge. Navigating the complex life and work environments of the globally competitive information age requires students to pay rigorous attention to developing adequate life and career skills. The curriculum of the community college also includes English as part of the core subjects. In 2006, Casner-Lotto and Barrington conducted a survey of 400 business executives and managers, asking respondents to rank the relative importance of 20 skills and fields of knowledge to the job success of new workforce entrants at three education levels: high school, two-year college or technical school, and four-year college. The respondents ranked three skills among the top five most important for all three groups of new entrants: (1) professionalism/work ethic, (2) teamwork/collaboration, and (3) oral communication. In comparison, science knowledge was ranked 17th in importance among the 20 skills and fields of knowledge for high school graduates and 16th for two- and four-year college graduates. Interpersonal and communication skills thus rank high among the skill sets employers require [13]. Young people who learn to communicate in highly effective ways (listening well, speaking, and writing clearly and persuasively) within and outside of the digital world will have a key skill for being highly competitive in today's world (Envision EMI White Paper, 2010) [14].

XIV.  Statistics of Students in HCC
Statistics of students enrolled in Hindustan Community College since its establishment in 2010 indicate a constant increase in the intake of students who are economically poor; the college caters to the needs of all categories of students: below 10th, 10th failed, 10th passed, 12th failed, 12th passed and school dropouts. The community colleges have been serving the socially backward groups, Scheduled Castes (SC), Scheduled Tribes (ST), Most Backward Class (MBC) and Backward Class (BC), who account for 92% of the enrolment, thus transcending social barriers. HCC also serves the economically weaker sections: 87% of its students have a monthly family income below Rs. 3000.

Fig. 6. HCC--Statistics of students enrolled, passed and placed

The above graphical representation indicates the number of students enrolled, passed and placed in three batches of Hindustan Community College, namely Batch I (2009-2010), Batch II (2010-2011) and Batch III (2011-2012). In Batch I, 39 students were enrolled, 39 passed and 34 were placed. In Batch II, 75 students were enrolled, 75 passed and 62 were placed. In Batch III, 84 students were enrolled, 84 passed and 77 were placed. The statistics indicate a 100% pass percentage and 91.6% placement; a survey showed that the remaining 8.4% of students opted for higher education.

XV.  Highlights of the Hindustan Community College (HCC) Curriculum
The curriculum of the Hindustan Community College has four distinct parts: life skills, work skills, internship and preparation for employment. The aim of the curriculum offered to the students at HCC is to bring the classroom and the real world together. The students are engaged in experiential learning driven by questions and problems that are interesting, relevant



and thought-provoking. The teaching methodology adopted in various community colleges is as follows: lecture, interactive, discussion, seminar and tutorial methods. Discussions with the teachers handling classes at Hindustan Community College revealed that 69% of the teachers opt for the interactive method and 31% for the lecture method. Since teachers are opting for the interactive methodology, it indicates that they have realized its importance over the lecture method. The students are given opportunities to collaborate on diverse teams, speak, listen, resolve conflicts, and think critically and creatively to set goals, create plans, solve problems and make decisions, all while interacting with their academic peers. The activities embedded in the curriculum also focus on the students' self-awareness, self-examination, and the assessment and monitoring of their progress, and encourage them to prepare personal and leadership development plans. This promotes a high level of self-awareness about their strengths, areas of need, beliefs, values, and goals. The evaluation and assessment of skills done by the community college is based on self-assessment, and on assessment of the life skills and work skills by teachers and by the internship supervisor at the work spot. The students in community colleges are trained to have sound personal ethics, which refers to the character of an individual and forms an important component of people skills. "Personal ethics develops the personal effectiveness of the students and it results in effective learning outcomes and peps up their job prospects" (Sherine et al., 2012) [15]. The Hindustan Community College syllabus emphasizes the importance of personal ethics as an important component of people skills. Martin Luther King Jr. says: "Intelligence plus character -- that is the goal of true education. The complete education gives one not only power of concentration, but worthy objectives upon which to concentrate. The broad education will, therefore, transmit to one not only the accumulated knowledge of the race but also the accumulated experience of social living. If we are not careful, our colleges will produce a group of close-minded, unscientific, illogical propagandists, consumed with immoral acts. Be careful, 'brethren!' Be careful, 'teachers'!" [16]

XVI.  Conclusion
This paper concludes by highlighting the importance of the community college and its unique feature of giving skills training to students, empowering the target learners in life-coping skills and working skills. The teaching methodology, the curriculum and the assessment methods adopted in this institution are unique and beneficial to the students. The study shows that the majority of the beneficiaries of the programme offered by the community college belong to the economically backward groups of society and to the school dropouts transcending social barriers. It is noteworthy that a student of HCC was selected for one year of training in the USA as part of the Indo-US Community Scholarship scheme initiated by the US Consulate in New Delhi, India. Hindustan Community College fulfills the needs of its students and has successfully conducted the courses for three years. The institution has a record of 91.5% placement of students, and the remaining 8.5% have opted for higher education. If one institution can reach out to 200-300 deprived and excluded youth, empowering them with the required skills and helping them come out of their societal and economic barriers into a world of new hope, new status and a new lifestyle, think of the changes that countless organisations, institutions and corporates, big or small, can make.

References
[1] Victor Luckerson, "Can Community Colleges Put Americans Back to Work?" Time.com, 28 Nov. 2012. Web 4 Feb. 2013. <http://business.time.com/2012/11/28/can-community-colleges-put-americans-back-to-work/#ixzz2JDwgXqw2>.
[2] Glenn, Robert and Katheryn Keene, "Smyth County Industry Council: Workforce Demand Profile 2003." The Issues Management Group, 25 Jan. 2004.
[3] National Collaborative on Workforce and Disability for Youth, "Helping Youth Develop Soft Skills for Job Success: Tips for Parents and Families." Web 4 Feb. 2013. <http://www.ncwd-youth.info/information-brief-28>.
[4] Chugh, Sunita, "Dropout in Secondary Education: A Study of Children Living in Slums of Delhi." NUEPA Occasional Paper 37, 2011. http://www.nuepa.org/Download/Publications/Occasional%20Paper%20No.%2037.pdf
[5] Bryk and Thum (1989), "The Effects of High School Organization on Dropping Out: An Exploratory Investigation." American Educational Research Journal 26(3), 353-383.
[6] "India to set up 100 community colleges: Sibal." Press Trust of India, New Delhi, 6 September 2012. Web 6 Feb. 2013. <http://www.business-standard.com/generalnews/news/india-to-set-100-community-colleges-sibal/53051/>.
[7] "Community Colleges as an Alternative System of Education." Interview with Fr. Xavier Alphonse, Christian Manager. http://www.cimindia.in/cm/Au-sp20-28.pdf
[8] "Building American Skills through Community Colleges." http://www.whitehouse.gov/sites/default/files/100326-community-college-factsheet.pdf
[9] Elizabeth Redden, "The 'Community College' Internationally." Inside Higher Ed, June 16, 2010. http://www.insidehighered.com/news/2010/06/16/intl
[10] Research study on "Impact & Prospects of the Community College System in India", Aug. 2003. Madras Centre for Research and Development of Community Education, Chennai.
[11] "Curriculum and Instruction: A 21st Century Skills Implementation Guide." Partnership for 21st Century Skills. http://p21.org/storage/documents/p21-stateimp_curriculuminstruction.pdf
[12] "21st Century Skills, Education & Competitiveness: A Resource and Policy Guide." Partnership for 21st Century Skills. http://www.p21.org/storage/documents/21st_century_skills_education_and_competitiveness_guide.pdf
[13] Casner-Lotto, J. and Barrington, L., "Are They Really Ready to Work?" Washington, DC: Conference Board, Partnership for 21st Century Skills, Corporate Voices for Working Families, and Society for Human Resource Management, 2006. Web 5 Feb. 2013. http://www.conference-board.org/Publications/describe.cfm?id=1218
[14] "Building the Foundation for a Lifetime of Success." Envision EMI White Paper, June 2010. Web 5 Feb. 2013. http://www.envisionemi.com/pdf/Whitepaper_Experiential_Leadership_Education.pdf
[15] Akkara Sherine, A. Rajkumar and N. Jose Pravin, "Study on People Skills Enhances Learning Outcomes and Peps up Job Placement using Combined Overlap Block Fuzzy Cognitive Maps (COBFCMs)." http://www.ijcaonline.org/archives/volume57/number8/9136-3336
[16] Martin Luther King Jr., "The Purpose of Education." Morehouse College student paper, The Maroon Tiger, 1947. http://www.drmartinlutherkingjr.com/thepurposeofeducation.htm



Continuous Professional Development: A Proposal for an Integrated Programme in Teaching English as a Second Language

P. Bhaskaran Nair

Abstract — The term 'professionalism' has become the buzzword in almost all walks of life, erasing the border lines between occupations and professions. Even the most ancient occupations, agriculture and trade, are getting professionalized. Teaching, hitherto considered everyone's cup of tea and a field to which anyone had access at any time, has also started demanding a high degree of professionalism. Teaching English as a second language (TESL) has become an internationally acclaimed programme in higher education with an optimal degree of professionalism embedded in it. In India too, a few universities have started offering pedagogical-cum-professional-oriented programmes in TESL, along with the conventional programmes in literature. This paper presents the rationale for introducing a seven-year integrated programme in teaching English as a second language, with formal or action research following it, so that teaching English becomes a profession in its true sense.

Index Terms — Continuous professional development, Teaching English as a second language, Action research, Integrated programme.

P. Bhaskaran Nair is in the School of Applied Sciences, Hindustan University, Chennai, India (e-mail: bhaskaranpnair@yahoo.co.in).

I.  TESL in India: The Maladies
Just have a look at some of the strange happenings in the field of (or in the name of) English language education. First, anyone, with or without even a bachelor's degree in any discipline, 'becomes' a teacher one fine morning

and starts teaching English! Many schools appoint professionally unqualified people as teachers, and later get them 'deputed' for training courses such as the BEd. and DTE. Secondly, a few others with a bachelor's degree and a professional degree (BEd.) in some other discipline teach English with the blessings of the governments. Some of them have somehow managed, after repeated attempts, to get a pass minimum in the examination in the common English component! Thirdly, a young man or woman with a minimum degree of performance in the PG programme walks straight into a PG class and starts 'lecturing' without any professional training in the field! Fourthly, the BEd. programme, which is expected to give teacher trainees the basics of pedagogic principles and an initiation into the teaching profession, has been reduced to an academic ritual of four to seven months across the universities in India. If this is the pre-service scenario for teaching professionals, the quality of in-service training programmes, which are meant for enhancing academic excellence, is still worse. The higher education sector does not seem to believe in raising the academic quality of teachers through any kind of pre-service professional training, such as a diploma or degree, as we have for teaching in schools. Programmes such as the Refresher Courses and Orientation Programmes conducted periodically for a few days by Academic Staff Colleges have become an academic ritual, if not a farce. (Exceptions may be there.) In-service teacher training programmes in English for school teachers are usually conducted in the regional language, to the greater comfort of the 'resource persons' as well as the participants! The real but pathetic picture of in-service teacher training programmes in general



has been presented by a practicing teacher in a small volume (see Hemraj Bhat, 2010). What about the student community? Those poor creatures, led by the carrot ahead of them called marks and grades in the examinations, seldom get dissatisfied with their performance either in the class or in the examinations. While in class, the regional language comes to their help (quite often following the model of their English teacher)! In the public examinations at the end of classes X and XII, the strongest claimants for 'moderation marks' are Mathematics and English. Thus, students get through these hurdles without much effort on their part. Finally, how do the prime stakeholders, the parents, respond or react to these eventualities? There seem to be quite a few options before them. First, adapt to the system and thereby be part of it, as the majority does. Secondly, express their dissatisfaction with the mainstream education system by admitting their children to the so-called self-financing English-medium schools, without knowing that things are worse there, inside! Thirdly, send them to professional courses (mainly Engineering and Medicine) where there is no English component in the entrance examination, and thereby ensure a double benefit: getting rid of the nuisance called English and ensuring a better career and a financially more secure future for their children!

II.  Still, Not Beyond Remediation
Remediation is not something impossible. But the teacher-researchers' lamentation over poor learner performance at conferences and seminars does not address the issue at all. The obvious reason, of course, is: "Physician, heal thyself first!" The quality of the content as well as the language of the papers presented at these academic gatherings (in the areas of both language and literature), as revealed in the published abstracts and proceedings themselves, will attest to the remark made above. The key academic issues underlying the facts listed above can only be addressed by bringing professionalism into the academic programmes. The term 'professionalism' needs to be clearly defined in this context. In general use, a 'professional' is a trained and qualified specialist who displays a high

standard of competent conduct in their practice. … The term 'professionalism' is regularly used in a constitutive sense to refer to practitioners' knowledge, skills, and conduct. In discussions on teacher education, professionalism issues are often addressed through questions such as What should teachers know? and How should teachers go about their business? (Leung, C. 2009, p. 49). One effective way of addressing this malady which afflicts the teaching of English, at least so as to avoid such a fate in future, rests with the university departments which have academic autonomy and a say in such matters. A seven-year integrated programme in English Literature and Teaching Second Language (ELTSL) can address most, if not all, of the shortcomings of the existing UG and PG programmes in English currently offered by Indian universities. A proposal to this effect follows.

III.  Structure of the Programme
The fourteen-semester integrated programme can be perceived in three stages: the first six semesters as a UG programme, the following four semesters as a PG programme (with provision for lateral entry through an entrance examination), and the last four semesters as a professional programme. At each terminal point, that is, at the end of the sixth, tenth and fourteenth semesters, the student is entitled to a different degree: Bachelor's, Master's and Professional respectively. As a result, the student can terminate the course at the end of the first stage and still have vertical growth, taking a diversion into Journalism, Mass Media, Communication and so on, or pursuing professional programmes such as Law. In the same way, new admissions can be made into the programme at the beginning of the seventh semester on the basis of an entrance examination which does justice to the course contents of the first six semesters. Again, the student must be free to leave the programme at the end of the tenth semester with a Master's degree.

IV.  Course Contents
It would be rather untimely to fix the contents of the whole programme at this point of the proposal; but at the same time it seems imperative to suggest a broad



outline of the contents spread over the three stages. They are as follows:

Stage I. The Undergraduate
English language: 50%
English and other literatures in translation: 50%

Stage II. The Postgraduate
English language: 40%
English and other literatures: 60%

Stage III. Professional
Teaching English language and literature: Theory 75%, Practice 25%

The allocation outlined above is, needless to say, tentative. Alteration is part of the system. The details of the syllabi can be worked out at the appropriate time.

V.  Research / CPD
The pedagogic function of the university department which offers the integrated programme proposed above does not end with producing a professional who is expected to meet the challenges of the profession well. It further demands that the department promote academic research in the field of TESL along with the conventional areas such as literature and cultural studies. It must also sponsor ongoing (and never-ending) academic events leading to continuous professional development (CPD) in the form of in-service teacher education, action research and so on. That is to say, it is the department's duty to provide a permanent platform for its former students who have entered the teaching profession to periodically meet and discuss issues related to the teaching and learning of English. Such periodic events can be in the form of workshops, conferences, self-help groups (SHGs), special interest groups (SIGs), non-governmental organizations (NGOs) and so on.

VI.  Action Research in Focus
As hinted above, the programme does not come to an end by the end of the fourteenth semester; that is only the first half, the second half being CPD in the form of an in-service teacher education programme, action research, etc., along with formal academic research.

Action research is less known in academic circles, especially among college teachers. What is action research? A cluster of mutually complementing definitions (since each is liable to be incomplete by itself) is given below. "Action research is a process of systematic reflection, enquiry and action carried out by individuals about their own professional practice" (Frost, 2002, p. 25). "Action research is a term used to describe professionals studying their own practice in order to improve it" (GTCW, 2002, p. 15). "Educational action research is an enquiry which is carried out in order to understand, to evaluate, and then to change, in order to improve some educational practice" (Bassey, 1998, p. 93). "Action research combines a substantive act with a research procedure; it is action disciplined by enquiry, a personal attempt at understanding while engaged in a process of improvement and reform" (Hopkins, 2002, p. 42). "When applied to teaching, [action research] involves gathering and interpreting data to better understand an aspect of teaching and learning and applying the outcomes to improve practice" (GTCW, 2002, p. 15). "Action research is a flexible spiral process which allows action (change, improvement) and research (understanding, knowledge) to be achieved at the same time" (Dick, 2002). "Action research is … usually described as cyclic, with action and critical reflection taking place in turn. The reflection is used to review the previous action and plan the next one" (Dick, 1997). "Action research is … an approach which has proved to be particularly attractive to educators because of its practical, problem-solving emphasis …" (Bell, 1999, p. 10). (The definitions above are quoted from Action Research, pp. 3-4.) Put together, these definitions tell us how important it is for a teacher to be an active action researcher as well, so that professionalism can be attained in the teaching career.



VII.  Expected Outcome
One may doubt the relevance of claiming a special, if not superior, status for the English Department. Introducing a volume on English language teacher education, the editors Anne Burns and Jack C. Richards state: One of the simple facts of the present time is that the English language skills of a good proportion of the citizenry are seen as vital if a country is to participate actively in the global economy and to have access to the information and knowledge that provide the basis for both social and economic development. Central to this emphasis are English teaching and English language teachers. There is consequently increasing demand worldwide for competent English teachers and for more effective approaches to their preparation and professional development. The expected outcome of a university department of English offering such a course and engaging in the ongoing follow-up programmes of CPD is multidirectional. First, the maladies listed at the outset of this paper, which are currently gripping the body called English language teaching, can be prevented in the most successful way. (As the saying goes, prevention is better than cure.) The products of such a programme will be well equipped to meet any challenge of an English classroom, since they are likely to have acquired adequate competence in fields such as theoretical linguistics, applied linguistics, second language acquisition, educational psychology, second language pedagogy and English literature. Secondly, such a university department will function as a beacon (to borrow a phrase from Jawaharlal Nehru's celebrated convocation address at Allahabad University, interestingly in the context of

defining the role of a university in independent India). How Nehru envisages the role of a university in terms of guiding the youth, and thereby the entire nation, is applicable in its true spirit to a teaching-research department as well. Think of the day when teachers of English of all levels, from pre-primary to university, located far and near, old students or not, are constantly in touch with the university department for consultation, guidance, meeting and sharing, thus contributing to the infinite growth of the department and, in turn, the whole university. Into that university department, let me awake!

References
[1] Bhat, Hemraj (2010), "The Diary of a School Teacher", tr. Sharada Jain, Bangalore, Azim Premji Foundation.
[2] Burns, Anne (1999), "Collaborative Action Research", Cambridge, Cambridge University Press.
[3] Burns, Anne and J.C. Richards (2009), "Second Language Teacher Education", Cambridge, Cambridge University Press.
[4] Costello, Patrick J.M. (2003), "Action Research", London, Continuum Publishers.
[5] Leung, Constant (2009), "Second Language Teacher Professionalism", in Burns, Anne and J.C. Richards (2009), "Second Language Teacher Education", Cambridge, Cambridge University Press.
[6] Tudor, Ian (1996), "Learner-centredness as Language Education", Cambridge, Cambridge University Press.



Librarianship in Digital Era

E. Boopalan, K. Nithyanandam and I. Sasirekha

Abstract — In this age of information technology, there are many opportunities for librarians to be involved in an information-based society, including electronic and multimedia publishing, internet-based information services, global networking, web-based digital resources, etc. Digital libraries require the digital librarian (DL) to be essentially a type of specialist librarian who has to manage and organize the digital library; handle the specialized tasks of massive digitization, storage, access, digital knowledge mining, digital reference services, electronic information services and search co-ordination; and manage the archive and its access. This article highlights the roles and functions of a DL in information retrieval, content delivery, navigation, and browsing. It describes the DL's interface functions, roles, skills and competencies for the management of digital information systems in the important areas of imaging technologies, optical character recognition, markup languages, cataloguing, metadata, multimedia indexing and database technology, user interface design, programming, and Web technology.

Index Terms — Digital Era, Digital Library, Library Management, Librarianship, Cloud Computing

I.  Introduction
Most people believe that the internet can replace the library and that they can get all types of information sources from it. Therefore, librarians should take on the challenge of guiding users on how to evaluate and identify accurate and correct sources using the right methods.

E. Boopalan, K. Nithyanandam and I. Sasirekha are in the School of Applied Sciences, Hindustan University, Chennai, India (e-mail: boopalan6@gmail.com, cl@hindustanuniv.ac.in, i.sasirekha@gmail.com).

This can be achieved only if librarians are well prepared and aware of the transformational changes occurring in libraries. Traditionally, librarians are known as individuals working in the library building, responsible for carrying out tasks such as cataloging, acquisition, circulation, customer service and user education. They are also involved in acquiring, organizing and preserving printed materials, besides helping and guiding readers in searching for and locating the information they need. In the last decade this situation has changed rapidly due to advancements in information technology, communications and internet connectivity. For centuries librarians have been known as information providers using manual and traditional methods, but owing to current trends, librarians need to adapt to a new working environment that helps them provide faster, more complete and more effective ways of accessing information for their users. The ultimate goal of a digital librarian is to facilitate access to information just in time for the critical needs of end users and, additionally, to facilitate electronic publishing. The digital librarian plays a distinctive and dynamic role in easing access to computer-held digital information, including abstracts, indexes, full-text databases, and sound and video recordings in digital formats. Finding the right information at the right time, through research, education and training, and learning and developmental work, and disseminating it to the user in the required format, are the basic requirements of a digital librarian.

II.  Role of a Digital Librarian
In this digital era, librarians have to change, as the information profession itself is changing. The new generation of library users, the "technology-savvy users", have realized that they need help from librarians to guide and teach them how to search for and access information



using the latest technology and internet facilities provided by the library. In order to change, librarians need to be aware of and ready to take on new responsibilities such as the following.
A.  Information Organizer and Provider – Able to provide services and instruction regardless of place or time.
B.  Librarian as an Instructor – The most important task carried out by the new version of the librarian/information professional is educating library users. Librarians often carry out information searches requested by users, especially in academic libraries. This situation can be quite burdensome, since a small number of librarians must serve a very large number of requests from various library users. It is therefore necessary for users to do the information searching or research themselves, and to help them in this matter, librarians need to educate them first. Over time, the number and variety of information sources available, whether from printed sources or via the World Wide Web, have increased greatly, and users have difficulty keeping up with all of the choices now open to them. Therefore, librarians need to educate their library users on how to search, find, evaluate and store information. Librarians in most IPTA/IPTS have greatly enhanced their provision of user education, especially with regard to electronic sources of information, which is now the most dominant activity of librarians. Cataloging, circulation and customer service are slowly being delegated to paraprofessionals. Librarians now have the same important roles as academic staff, whereby they are involved in giving training and guidance, especially in the use of electronic journals from many different publishers, abstracts and indexes databases, databanks, CD-ROM publications, document delivery services, citation styles, evaluation criteria for internet sources and many more. In short, librarians are the information literacy experts.
C.  Navigator, Browsing and Filtering Expert – Librarians should be aware of new trends in technology and new approaches. They should know and understand how digital reference services work and how to extract electronic information from digital information sources.
D.  Consultant – Librarians should be prepared to answer any query from users. There is no reason for them to delay the process. In order to adapt to the digital era,

ways need to be identified for librarians to answer all questions asked by library users, whether through e-mail, Facebook, Twitter, Skype, telephone, etc. A library can also upload videos about itself on YouTube for users who want to know more about that particular library.

III.  Digital Library Access Tools
Various tools are available for use in digital information systems; they facilitate the accessing, searching, browsing, navigating, retrieving, indexing, storing, organizing and dissemination of digitized information. The digital information sources and pools listed below are used as digital access tools, with the ultimate aim of facilitating universal access for all:
●● Online public access catalogues (OPACs) and metadatabases (which describe, and provide links to, other databases and digital information sources); online databases (Knight-Ridder, OCLC, MEDLINE).
●● Internet-based tools: e-mail networks, mailing lists, electronic conferences, the World Wide Web, Website home pages, Wide Area Information Services (WAIS), Web browsers, Gopher systems, Veronica, Archie, FTP, Telnet, Usenet, newsgroups, BBS, list servers and discussion groups.
●● Digital networks/networking: BLAISE, MEDLINE, NICNET, DELNET, AGRIS, INIS and all sorts of networks.
Further, access is needed to:
●● Hypertext/Hypermedia.
●● Multimedia (high bandwidth computer networks).
●● Multimedia networking protocols.
●● Cellular and pager networks.
●● Electronic publishing tools.
●● Net-dwelling software agents.
●● Electronic fax/commercial vendors.
●● Telephone/TV.
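Many of the OPACs and networked databases listed above are machine-queryable over open protocols. As a concrete but hypothetical illustration, the minimal Python sketch below runs a title search against an SRU 1.1 (Search/Retrieve via URL) endpoint; the endpoint address, the 'title' index and the Dublin Core record schema are all assumptions, since these vary from catalogue to catalogue:

```python
# Minimal sketch: querying a library OPAC over SRU (Search/Retrieve via URL).
# The endpoint below is a hypothetical placeholder; real catalogues publish
# their own SRU base URLs, supported record schemas and CQL indexes.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SRU_ENDPOINT = "https://opac.example.org/sru"  # hypothetical endpoint
DC_TITLE = "{http://purl.org/dc/elements/1.1/}title"

def search_titles(term, max_records=5):
    """Run a CQL title query and return the Dublin Core titles found."""
    params = urllib.parse.urlencode({
        "operation": "searchRetrieve",   # standard SRU operation
        "version": "1.1",
        "query": 'title="%s"' % term,    # CQL; the 'title' index is assumed
        "maximumRecords": str(max_records),
        "recordSchema": "dc",            # ask for Dublin Core records
    })
    with urllib.request.urlopen(SRU_ENDPOINT + "?" + params) as resp:
        tree = ET.parse(resp)            # parse the XML response stream
    return [el.text for el in tree.getroot().iter(DC_TITLE)]

if __name__ == "__main__":
    for title in search_titles("digital libraries"):
        print(title)
```

In practice, a catalogue's SRU "explain" response lists the record schemas and searchable indexes it actually supports, so the assumed 'title' index and 'dc' schema should be confirmed against it before use.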

IV.  Role of Cloud Computing in Libraries
Cloud computing is a relatively new technology and is known as the third revolution after the PC and the Internet.



Cloud computing is an enhancement of distributed computing, parallel computing, grid computing and distributed databases; among these, grid and utility computing are known as the predecessors of cloud computing. Cloud computing has large potential for libraries, which may put more and more content into the cloud. Using cloud computing, a user would be able to browse a physical shelf of books, CDs or DVDs, choose to take out an item, or scan a bar code into his mobile device. All historical and rare documents would be scanned into a comprehensive, easily searchable database accessible to any researcher. Many libraries already have online catalogues and share bibliographic data with OCLC, and increasingly, online catalogues are linked to consortia that share resources. Data storage in the cloud is a main function for libraries, particularly those with digital collections. Storing large digital files can stress local server infrastructure: the files need to be backed up, maintained, and reproduced for patrons, which can strain data integrity as well as hog bandwidth. Moving data to the cloud may be a leap of faith for some library professionals, since on the surface it appears that the library would cede some control over its data or collections. However, with faster retrieval times for requests and reduced demands on local server space, it could improve storage solutions for libraries. Cloud computing, or IT infrastructure that exists remotely, often gives users increased capacity and less need for updates and maintenance, and has gained wider acceptance among librarians.
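To make the storage scenario concrete, here is a minimal sketch of archiving one digitised item to an S3-compatible object store with the boto3 client; the bucket name, key layout and metadata fields are hypothetical placeholders, not something prescribed by this article:

```python
# Minimal sketch: backing up a digitised collection item to cloud object
# storage (an S3-compatible store, accessed through the boto3 client).
import boto3

s3 = boto3.client("s3")
BUCKET = "library-digital-collections"  # hypothetical bucket name

def archive_item(local_path, item_id):
    """Upload one digitised file and tag it with simple custom metadata."""
    s3.upload_file(
        local_path,
        BUCKET,
        "collections/%s" % item_id,      # hypothetical key layout
        ExtraArgs={"Metadata": {"item-id": item_id}},
    )

def item_exists(item_id):
    """Verify after upload that the object is really in the cloud store."""
    try:
        s3.head_object(Bucket=BUCKET, Key="collections/%s" % item_id)
        return True
    except s3.exceptions.ClientError:
        return False
```

The same pattern extends to the backup and delivery concerns above: storage lifecycle rules can move older preservation masters to cheaper tiers while access copies stay on faster ones.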

V.  Cloud Computing in Libraries
The advantages of cloud computing are:
●● Cost saving
●● Flexibility and innovation
●● User centric

VI.  Cloud Libraries
Examples of cloud libraries include:
●● OCLC
●● Library of Congress (LC)
●● Ex Libris
●● Polaris
●● Scribd
●● Discovery Service
●● Google Docs / Google Scholar
●● WorldCat
●● Encore

VII.  Competencies and Skills of a Digital Librarian
The competency of a digital librarian is represented by the different sets of skills, attitudes and values that enable him or her to work as a digital information professional, digital knowledge worker and digital knowledge communicator. The following are the skills and competencies required of a digital librarian in the management of digital information systems and digital libraries:

A.  Internet, WWW:
●● Navigation, browsing, filtering;
●● Retrieving, accessing, digital document analysis;
●● Digital reference services, electronic information services;
●● Searching network databases in a number of digital sources and Websites;
●● Creating home pages, downloading techniques;
●● WEB publishing, electronic publishing;
●● Archiving digital documents, locating digital sources;
●● Digital preservation and storage;
●● Electronic messaging, connectivity skills;
●● WEB authoring, content conversion;
●● Openness, transparency, interoperability, representation;
●● Availability anytime anywhere;
●● Connect and converse, create and collaborate.



B.  Multimedia, digital technology, digital media processing:
●● Multimedia indexing, image processing, object-oriented processing;
●● Interactive digital communications and visualization;
●● Cataloguing and classification of digital documents, digital content;
●● Searching and retrieval of text, images and other multimedia objects;
●● Speech recognition, image visualization;
●● Advanced processing capabilities exploiting the digital medium;
●● Conferencing techniques including teleconferencing, video conferencing.

C.  Digital information system, online, optical information:
●● Interfacing online and off-ramps, twists and turns of digital knowledge;
●● Development of digital information sources;
●● Digitization of print collections;
●● Competency to manage CD-ROM network stations;
●● Development of machine-readable catalogue records (see the sketch after this list);
●● Design and development of databases;
●● Design and development of software agents for digital libraries;
●● Conversion of print media into digital media;
●● Knowledge in digital knowledge structures.
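As a small illustration of the "machine-readable catalogue records" skill listed above, the sketch below emits a Dublin Core XML record from plain field/value pairs using only the Python standard library; Dublin Core is one common target format for such records, and the bibliographic data shown is invented for the example:

```python
# Minimal sketch: generating a machine-readable catalogue record as
# Dublin Core XML, a common target when converting print-era records.
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC_NS)  # serialize elements with a dc: prefix

def make_dc_record(fields):
    """Wrap simple field/value pairs in namespaced Dublin Core elements."""
    root = ET.Element("record")
    for name, value in fields.items():
        child = ET.SubElement(root, "{%s}%s" % (DC_NS, name))
        child.text = value
    return ET.tostring(root, encoding="unicode")

print(make_dc_record({
    "title": "Librarianship in the Digital Era",  # invented sample data
    "creator": "Example Author",
    "date": "2013",
    "format": "application/pdf",
}))
```

The same field/value pairs could equally be serialized as MARC or another exchange format; the point is that a uniform, machine-readable structure makes the records shareable across catalogues and networks.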

VIII.  Transformation as a Digital Librarian
In future, the librarian will mostly become an online worker, supporting the citizen/worker by selling services. His main concern will be finding relevant information faster than the competitors, faster than a non-information-worker can, and surviving on the basis of superior knowledge of the networks and the digital information resources available through them. We already have the words to describe these roles: digital librarian, digital information professional, cybrarian, and information broker. A different view of the future might be one where a "digital library" is more like a "knowledge warehouse", where a complex system of professionals, whose expertise supports access to information, acts as an intermediary to a variety of digital and other sources (Kuny and Cleveland, 1998). It is especially important to understand that the ultimate goal is not just to facilitate just-in-time access to digital information for the critical needs of end-users, and additionally electronic publishing, but to create and develop digital knowledge channels and digital knowledge sources which allow synergy between partners, leading to mutual exchange and enrichment of the digital knowledge domain.

IX.  Challenges in Digital Era
The challenges to be faced in the digital era are:
●● New generation of learners
●● Privacy/Confidentiality
●● Technology Challenges
●● Manpower
●● Collection of e-digital resources
●● Organizational structure
●● Copyright Act
●● Online/Virtual Crimes and Security
●● Preservation/Archiving Digital Resources
●● Lack of clarity in vision

X.  Conclusion
The digital librarian will become the guardian of digital information and the vehicle for preserving democratic access to information. The digital librarian's role will increasingly be to offer consultancy to users, providing digital reference services and electronic information services, and navigating, searching and retrieving digitized information through Web documents that span the Universal Digital Library or the Global Digital Library. The digital librarian will be the embodiment of a digital information professional or digital knowledge worker, who will ensure that digital libraries are used effectively and with ease. Digital librarians with newly acquired skills can



play a meaningful and leading role in the networked information society of the millennium. Digital librarians add value and can make digital libraries truly useful and user-friendly. The knowledge that "digital librarians" bring to this environment will make sense of a multiplicity of digital collections and resources, provide access to a network of key contacts, identify cost-effective strategies for information retrieval, and assist users in the publication and creation of new knowledge.

References
[1] Borgman, C. L., Bates, M. J., Cloonan, M. V., Efthimiadis, E. N., Gilliland-Swetland, A. J., Kafai, Y. B., et al. (1996). "Social Aspects of Digital Libraries". Retrieved 2012 from http://dlis.gseis.ucla.edu/DL/UCLA_DL_Report.html
[2] Eguavoen, O. E. L. (2011). "Attitudes of Library Staff to the Use of ICT: The Case of Kenneth Dike Library, University of Ibadan, Nigeria". Ozean Journal of Social Sciences 4(1).
[3] Fadehan, O. A., & Ali, H. (2010). "Educational Needs of Librarians in the Digital Environment: Case Studies of Selected Academic Libraries in Lagos State, Nigeria". Library Philosophy and Practice.
[4] Green, C. (2009). "Rethinking how and where digital knowledge is stored, shared, tagged and licensed in the 21st century: New role for librarians?" Retrieved 2012 from http://www.slideshare.net/CollegeLibrarians/new-role-for-librarians-1486401
[5] Hawkins, B. L. (2001). "Information access in the digital era". Retrieved 2012 from http://net.educause.edu/ir/library/pdf/erm0154.pdf
[6] William, B. K. and Saffady, S. (1995). "Digital library concepts and technologies for the management of collections: an analysis of methods and costs". Library Technology Reports, Vol. 31, May-June, p. 221.


Forthcoming Conferences

1. ICICT 2014: International Conference on Information and Communication Technologies (Proceedings: Elsevier's Procedia Computer Science), 3rd to 5th December 2014, Kochi, Kerala, India. Website: http://icict.cusat.ac.in/
2. 2014 International Conference on Circuits, Devices and Systems (ICCDS 2014), 13th to 14th December 2014, Xiamen, China. Website: http://www.iccds.org/
3. 2014 International Conference on Mechanical Properties of Materials (ICMPM 2014), 18th to 20th December 2014, Barcelona, Spain. Website: http://www.icmpm.org/
4. International Science and Technology Conference, 18th to 20th December 2014, Doha, Qatar. Website: http://iste-c.net/
5. Twelfth AIMS International Conference on Management, 2nd to 5th January 2015, Kozhikode, Kerala, India. Website: http://www.aimsinternational.org/aims12
6. 2015 3rd International Conference on Communication and Electronics Information (ICCEI 2015), 5th to 6th January 2015, Bali, Indonesia. Website: http://www.iccei.org/
7. International Conference on Pervasive Computing, 8th to 10th January 2015, Pune, Maharashtra, India. Website: http://www.icpc2015.org
8. Second International Conference on Recent Trends in Engineering and Technology 2015, 10th to 11th January 2015, Cochin, Kerala, India. Website: http://www.ijbrmm.org/ICRTET_cochin
9. 2015 2nd International Conference on Information, Communication and Computer Networks (ICI2CN), 14th to 15th January 2015, London, United Kingdom. Website: http://www.ici2cn.com
10. 2015 3rd International Conference on Electrical Energy and Networks (ICEEN 2015), 17th to 18th January 2015, Kuala Lumpur, Malaysia. Website: http://www.iceen.org/
11. International Conference on Mathematical and Computational Sciences (IC-MACS 2015), 22nd to 24th January 2015, Kannur, Kerala, India. Website: http://www.ic-macs.com
12. 4th National Conference on "Recent Developments in Mechanical Engineering" (RDME-2015), 29th to 30th January 2015, Pune, India. Website: http://www.rdmemescoe.com
13. International Conference on Data Mining, Civil and Mechanical Engineering (ICDMCME'2015), 1st to 2nd February 2015, Bali, Indonesia. Website: http://www.iieng.org/2015/02/02/58
14. 2015 3rd International Conference on Intelligent Mechatronics and Automation (ICIMA 2015), 2nd to 3rd February 2015, Singapore. Website: http://www.icima.org/
15. 3rd International Conference on Model-Driven Engineering and Software Development, 9th to 11th February 2015, Angers, France. Website: http://www.modelsward.org
16. Second International Conference on Signal Processing and Integrated Networks (SPIN 2015), 19th to 20th February 2015, Noida, Delhi/NCR, India. Website: http://www.spin2015.com
17. 2015 4th International Conference on Information and Industrial Electronics (ICIIE 2015), 6th to 7th March 2015, Melaka, Malaysia. Website: http://www.iciie.org/
18. International Conference on Electrical, Electronics and Information Technology, 13th to 14th March 2015, Tirupati, India. Website: http://worldairco.org/ICEEIT%20Mar%202015/ICEEIT%20Mar%202015.html
19. 2015 3rd International Conference on Information and Computer Networks (ICICN 2015), 19th to 20th March 2015, Florence, Italy. Website: http://www.icicn.org/
20. 2015 International Education Conference in San Juan, 22nd to 26th March 2015, San Juan, Puerto Rico. Website: http://cluteinstitute.com/conferences/2015SanJuanEducation.html
21. 2015 7th International Conference on Digital Image Processing (ICDIP 2015), 9th to 10th April 2015, Los Angeles, United States of America. Website: http://www.icdip.org/
22. 2015 International Conference on Bioinformatics and Computer Engineering (ICBCE 2015), 18th to 19th April 2015, Chengdu, China. Website: http://www.icbce.org/
23. International Summit on Big Data Analysis and Data Mining, 4th to 6th May 2015, Lexington, Kentucky, United States of America. Website: http://datamining.conferenceseries.com/
24. 2015 7th International Conference on Computer Engineering and Technology (ICCET 2015), 13th to 14th April 2015, Paris, France. Website: http://www.iccet.org/
25. International Conference for Academic Disciplines (Vienna), 19th to 23rd April 2015, Vienna, Austria. Website: http://www.internationaljournal.org/vienna.html
26. The Fourth International Conference on Computer Science & Computational Mathematics (ICCSCM 2015), 7th to 8th May 2015, Langkawi, Malaysia. Website: http://www.iccscm.com
27. International Conference on Engineering, Management and Social Sciences, 8th to 9th May 2015, London, United Kingdom. Website: http://worldairco.org/ICEMS%20May%202015/ICEMS%20May%202015.html
28. International Conference on Civil and Environmental Engineering (ICOCEE – Cappadocia 2015), 20th to 23rd May 2015, Nevsehir, Turkey. Website: http://www.icocee.org/
29. 19th International Symposium on VLSI Design and Test (VDAT 2015), 26th to 29th June 2015, Ahmedabad, Gujarat, India. Website: http://vdat2015.org
30. ERES 2015: 10th International Conference on Earthquake Resistant Engineering Structures, 29th June to 1st July 2015, Opatija, Croatia. Website: http://www.wessex.ac.uk/eres2015
31. International Conference on Renewable Energy and Sustainable Environment (RESE 2015), 3rd to 5th August 2015, Pollachi, Tamilnadu, India. Website: http://www.drmcet.ac.in/conference
32. IEEE Technically Co-Sponsored Science and Information Conference 2015, 28th to 30th July 2015, London, United Kingdom. Website: http://thesai.org/SAIConference2015
33. BIM 2015: International Conference on Building Information Modelling in Design, Construction and Operations, 9th to 11th September 2015, Bristol, United Kingdom. Website: http://www.wessex.ac.uk/bim2015
34. International Conference on Innovations in Intelligent Systems and Computing Technologies (ICIISCT 2015), 18th to 20th September 2015, Udaipur, Rajasthan, India. Website: http://sdiwc.net/conferences/iciisct2015/
35. International Conference and Expo on Computer Graphics and Animation, 21st to 22nd September 2015, San Antonio, Texas, United States of America. Website: http://computergraphicsanimation.conferenceseries.com/
36. Global Conference on Renewable Energy (GCRE), 19th to 21st October 2015, Patna, Bihar, India. Website: http://www.weentech.co.uk/gcre2015/
37. The International Conference on Software Engineering, Mobile Computing and Media Informatics (SEMCMI 2015), part of The Fourth World Congress on Computing and Information Technology (WCIT), 27th to 29th October 2015, Kuala Lumpur, Malaysia. Website: http://sdiwc.net/conferences/semcmi2015/
38. Global Cancer Summit, 18th to 20th November 2015, Bangalore, Karnataka, India. Website: http://www.globalcancersummit.com/
39. 3rd International Conference on HIV/AIDS, STDs & STIs, 30th November to 2nd December 2015, Atlanta, USA. Website: http://hiv-aids-std.conferenceseries.com/
40. 2015 14th International Building Simulation Conference, 7th to 9th December 2015, Hyderabad, India. Website: http://bs2015.in/


