A Real-Time Hand Gesture Recognition Technique Using an Embedded Device


Ms. Shubhangi J. Moon et al. / (IJAEST) INTERNATIONAL JOURNAL OF ADVANCED ENGINEERING SCIENCES AND TECHNOLOGIES Vol No. 2, Issue No. 1, 043 - 046


entertainment, recreation, health care, nursing, etc. [6] Human-human interaction relies on multiple communication modalities, such as speech, gestures and body movements. Standard input methods, such as text entered via a keyboard or pointer/location information from a mouse, do not provide natural, intuitive interaction between humans and robots. It is therefore essential to create devices for natural and intuitive human-robot communication. Furthermore, for intuitive gesture-based interaction, the robot should understand the meaning of a gesture with respect to society and culture. The ability to understand hand gestures improves the naturalness and efficiency of human interaction with a robot, and allows the user to carry out complex tasks without tedious sets of detailed instructions. The interactive system uses the robot's eye cameras or web cameras to identify humans and recognize their gestures from hand poses. A robot differs from a computer in that it has to take inputs and give outputs the way a human being does; in other words, the robot has to behave in a human way while operating as a machine. Robots are used not only in factories but also in mines, on construction sites, in public places and in homes. A new category of robots, called Field and Service Robots (FSR), is expected to perform a variety of tasks in unknown, unstructured and changing environments. Research in Human-Robot Interaction (HRI) [1] mainly concerns the use of robots in coordination with humans. Despite the significant amount of literature available in artificial intelligence research, robots and computers are still machine-like devices, and research is being carried out to make them as human-like as possible. For subtasks where high-level cognition or intelligence is needed, the robot has to ask the operator for help.
A gesture is defined as a sequence of movements with specific breaks that are reached progressively over time. [3] Gestures correspond to movement and to a change of position as a function of time. Some simple gestures have only one position to reach between the beginning and the end of the gesture; other gestures cover multiple positions and rarely remain in a stationary pose. Modeling and recognizing a gesture is a difficult challenge, since gestures vary dynamically both in shape and in duration. This also makes it difficult to delineate


Abstract— This paper explores how to interact with a humanoid robot using user-defined hand gestures; a scan-line algorithm is used for this purpose. We explain how to design the human-computer interface and how to regulate and set the motion of the humanoid robot; the planned motion sequences are stored in a motion database. A real-time hand gesture recognition system is developed for human-robot interaction with a service robot. A gesture-based interface offers a way for untrained users to interact with robots more easily and efficiently. The proposed system presents a human-robot interface in which the robot performs the same gesture the human performs. The system uses a single camera to recognize the user's hand gestures. Hand-gesture recognition is difficult because the human hand is an object with a high degree of freedom, which leads to the self-occlusion problem, a well-known problem in vision-based recognition. However, using multiple images together with the scan-line algorithm increases processing speed and thereby improves real-time gesture-based human-computer interaction.

Prof. R. W. Jasutkar, Department of Computer Science & Engineering, G. H. Raisoni College of Engineering, Nagpur, India. r_jasutkar@yahoo.com


Ms. Shubhangi J. Moon, M.E. IV Sem (Embedded System & Computing), Department of Computer Science & Engineering, G. H. Raisoni College of Engineering, Nagpur, India. shu.chand@gmail.com


Keywords— Gesture, Hand gesture recognition, robotics, Human Computer Interaction (HCI).


I. INTRODUCTION

The term gesture is defined as "movement to convey meaning" or "the use of motions of the limbs or body as a means of expression; a movement, usually of the body or limbs, that expresses or emphasizes an idea" [3]. The main purpose of gesture recognition research is to identify a particular human gesture and convey to the user the information pertaining to that gesture. From a corpus of gestures, a specific gesture of interest can be identified, and on that basis a specific command can be given to a robotic system for execution. The overall aim is to make the computer understand human body language, thereby bridging the gap between machine and human. Hand gesture recognition can enhance human-computer interaction without depending on traditional input devices such as the keyboard and mouse. The use of intelligent robots encourages the view of the machine as a partner in communication rather than as a tool. In the near future, robots will interact closely with groups of humans in their everyday environment in the fields of

ISSN: 2230-7818

@ 2011 http://www.ijaest.iserp.org. All rights Reserved.

Page 43





The general framework is composed of the following processes:
a) Initialization: the recognizable postures are stored in a visual memory, which is created in a start-up step. Different ways of configuring this memory are proposed.
b) Acquisition: a frame is captured from the web camera.
c) Segmentation: each frame is processed separately before analysis. The image is smoothed, skin pixels are labeled, noise is removed and small gaps are filled. Image edges are found and, finally, after a blob analysis, the blob that represents the user's hand is segmented. A new image is created containing the portion of the original where the user's hand was located.
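As a rough illustration of the segmentation step, the Python sketch below labels skin pixels and keeps the largest connected blob as the hand. The skin-colour rule and the tiny synthetic frame are illustrative assumptions, not the paper's actual parameters.

```python
# Segmentation sketch: label skin pixels with a simple colour rule,
# then keep the largest connected blob (assumed to be the hand).
from collections import deque

def is_skin(rgb):
    # Placeholder skin-colour predicate (an assumption for this sketch).
    r, g, b = rgb
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def largest_blob(frame):
    # Flood-fill connected skin regions and return the largest one.
    h, w = len(frame), len(frame[0])
    mask = [[is_skin(frame[y][x]) for x in range(w)] for y in range(h)]
    seen, best = set(), []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                blob, q = [], deque([(y, x)])
                seen.add((y, x))
                while q:
                    cy, cx = q.popleft()
                    blob.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            q.append((ny, nx))
                if len(blob) > len(best):
                    best = blob
    return best

skin, bg = (200, 120, 90), (30, 30, 30)
frame = [[bg,   skin, skin],
         [bg,   skin, skin],
         [skin, bg,   bg]]
print(len(largest_blob(frame)))  # 4: the hand blob, not the lone pixel
```

A real implementation would also smooth the image, remove noise and fill small gaps before the blob analysis, as described above.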


II. SYSTEM COMPONENTS

Processor: executes the software program stored in ROM.
Ports: communicate with external devices such as the driver controller or the camera controller.
Motor driver controller: employs different sub-controllers for the different actions performed by the robot, mainly divided into: 1. hand motor driver, 2. head motor driver, 3. wheel motor driver.
Camera controller: captures the video input for the system using frame-capturing technology.


between erroneous motions and specific gestures. Gestures can broadly be divided into two categories: communicative/meaningful gestures and non-communicative or transitional gestures. In order to identify different types of communicative motion, it is important to classify gestures. The proposed system is mainly designed for controlling a robotic hand, or an individual robot, by merely showing hand gestures in front of a camera. With this technique one can pose a hand gesture in the vision range of a robot and, corresponding to that gesture, the desired action is performed by the robotic system. A simple video camera is used for computer vision, which helps in monitoring gesture presentation. The approach consists of four modules: (a) real-time hand gesture formation monitoring and gesture capture, (b) feature extraction, (c) pattern matching for gesture recognition, and (d) determination of the command corresponding to the shown gesture and execution of the action by the robotic system. A real-time hand tracking technique is used for object detection in the range of vision. The primary goal of hand gesture recognition research is to create an embedded device which can identify specific hand gestures and use them to convey information or to control a device.
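The four modules can be sketched as a simple pipeline. Every function and name below is a stub chosen for illustration, standing in for the paper's real processing:

```python
# Illustrative skeleton of the four-module pipeline (all names are
# assumptions; each stage is a stub for the real processing).
def capture_gesture(source):              # (a) gesture capture
    return source()

def extract_features(frame):              # (b) feature extraction
    return tuple(frame)

def match_pattern(features, templates):   # (c) pattern matching
    return templates.get(features, "unknown")

def to_command(gesture):                  # (d) command determination
    return {"point_left": "TURN_LEFT",
            "point_right": "TURN_RIGHT"}.get(gesture, "IDLE")

templates = {(0, 1): "point_left", (1, 0): "point_right"}
frame = capture_gesture(lambda: [1, 0])
print(to_command(match_pattern(extract_features(frame), templates)))  # TURN_RIGHT
```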


d) Pattern recognition: once the user's hand has been segmented, its posture is compared with those stored in the system's visual memory (VMS) using the scan-line algorithm.
e) Executing the action: finally, the system carries out the action corresponding to the recognized hand gesture.
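A minimal sketch of the matching step: a posture is reduced to the pixel values sampled along a few scan lines, and compared against the visual memory (VMS) by counting mismatches. The stored postures, signatures, and row choices are made-up examples, not the paper's data.

```python
# Match a scan-line signature against stored postures in the VMS.
def scanline_signature(mask, rows):
    # Sample the binary hand mask only along the given scan-line rows.
    return tuple(mask[r][c] for r in rows for c in range(len(mask[0])))

def recognize(signature, vms):
    # Return the stored posture with the fewest mismatching samples.
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(vms, key=lambda name: distance(signature, vms[name]))

vms = {"point_up":   (1, 1, 0, 0),   # assumed stored signatures
       "point_down": (0, 0, 1, 1)}
mask = [[1, 1],
        [0, 0]]
print(recognize(scanline_signature(mask, rows=[0, 1]), vms))  # point_up
```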


A



IJ

Figure 1. Physical architecture.

The main objective of the proposed system is to improve human-computer interaction by merely showing a hand gesture in front of the camera inside the robot. A single web camera is used.

Microcontroller: the main component, which manages all the processing and hardware control. It chiefly relies on the following units.
ROM: stores the embedded software program that implements the system flow.


Figure 2. Overall execution sequence.





IV. OVERVIEW OF PROPOSED WORK



a training phase and a testing phase. [4] In the training phase, the user shows hand gestures, which are captured with a web camera. Instead of saving each and every posture, and thus managing a huge database, we divide the frame into different scan lines and examine only the pixels that fall under a particular scan line. This reduces the size of the database to some extent, because we store only the coordinates of the scan lines. Initially we examine whether this plan works for the colour of the arm (i.e., the colour of the shirt); if it is successful, we implement it for the entire hand gesture. The scan-line algorithm is beneficial because it scans only those pixels where the hand movement is performed, which increases processing speed.
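A minimal sketch of this scan-line idea: instead of storing whole frames, only the row indices of a few scan lines are stored, and only the pixels on those rows are tested. The row coordinates, target colour and tolerance below are placeholder assumptions.

```python
# Test only the pixels lying on stored scan lines for a target colour.
SCAN_LINES = [2, 5]          # stored row coordinates (assumption)
TARGET = (180, 40, 40)       # e.g. the shirt colour being tracked

def close(p, q, tol=30):
    # True if two RGB triples match within a per-channel tolerance.
    return all(abs(a - b) <= tol for a, b in zip(p, q))

def hit_on_scan_lines(frame, rows=SCAN_LINES, target=TARGET):
    # Examine only the pixels lying on the stored scan lines.
    return any(close(px, target) for r in rows for px in frame[r])

frame = [[(0, 0, 0)] * 4 for _ in range(8)]   # 8x4 dark frame
frame[5][1] = (175, 45, 35)                    # arm crosses scan line 5
print(hit_on_scan_lines(frame))                # True
```

Only two of the eight rows are ever examined, which is the source of the speed-up claimed above.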


A camera is first controlled through its driver software, which is supplied with the camera hardware and customized for a specific operating system. The application then requests camera access from the operating system. Once this completes, the live camera view is displayed in a suitable control, such as a picture box. Because this is a real-time live view, it cannot be processed directly; the current frame must be extracted from the live stream for processing. This step is called frame extraction, and the frame is loaded into memory for fast processing. Even with the frame in hand, identifying colour is not straightforward, so image processing is performed: each pixel is made up of three RGB components, so the RGB value of each pixel is extracted and compared with predefined values, as in pattern matching. The whole process is then repeated for the next frame. Since frame capturing and frame processing both take place, the two must stay synchronized for smooth performance. This is how real-time image processing works and is utilized in our project.
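The per-pixel step described above can be sketched as follows: unpack a raw RGB frame buffer (three bytes per pixel) and compare each pixel against a table of predefined colours. The buffer layout, colour table and tolerance are assumptions for illustration, not a specific camera API.

```python
# Unpack a packed RGB buffer and match pixels against predefined colours.
PREDEFINED = {"red": (255, 0, 0), "green": (0, 255, 0)}

def pixels(buf, width):
    # Yield (x, y, (r, g, b)) triples from a packed 3-bytes-per-pixel buffer.
    for i in range(0, len(buf), 3):
        idx = i // 3
        yield idx % width, idx // width, (buf[i], buf[i + 1], buf[i + 2])

def classify(rgb, table=PREDEFINED, tol=40):
    # Return the name of the first predefined colour within tolerance.
    for name, ref in table.items():
        if all(abs(a - b) <= tol for a, b in zip(rgb, ref)):
            return name
    return None

buf = bytes([0, 0, 0,  250, 10, 5,     # 2x2 frame: one reddish pixel
             0, 0, 0,  0,   0,  0])
hits = [(x, y, classify(c)) for x, y, c in pixels(buf, width=2) if classify(c)]
print(hits)  # [(1, 0, 'red')]
```

In the real system this loop would run once per extracted frame, in step with frame capture.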


Figure 4. Initial position.



As shown in Fig. 3, the system scans the pixel colours that fall under the lines indicated as L for left-hand movements and R for right-hand movements, respectively, and takes the corresponding action.

V. THE RECOGNITION SYSTEM ARCHITECTURE


Figure 3. Steps of processing: start; get camera view; get video frame; get image pixel; get RGB value; compare RGB value; output colour; stop.
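A minimal sketch of the left/right dispatch associated with the L and R lines of Fig. 3, assuming fixed column positions for the two lines and made-up action names:

```python
# If hand pixels cross the left line (L) take the left action;
# if they cross the right line (R) take the right action.
L_COL, R_COL = 1, 6   # assumed column positions of the L and R lines

def action(mask):
    # mask is a binary frame: 1 where the hand was detected.
    left = any(row[L_COL] for row in mask)
    right = any(row[R_COL] for row in mask)
    if left and not right:
        return "MOVE_LEFT"
    if right and not left:
        return "MOVE_RIGHT"
    return "NONE"

mask = [[0] * 8 for _ in range(4)]
mask[2][6] = 1                      # hand crosses the R line
print(action(mask))  # MOVE_RIGHT
```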

The aim of this paper is to present an embedded device based on pattern recognition techniques for classifying hand gestures into five categories: hand pointing up, pointing down, pointing left, pointing right and pointing front. [5] We have applied a simple pattern recognition technique to the problem of hand gesture recognition. This method has


Figure 5. Recognition system.

The proposed system consists of four main components, as shown in Fig. 5: the large testing/training data set, hand gesture feature extraction, APs-based hand gesture recognition, and applications. [5] The hand gesture feature extraction component is designed and implemented in software. In this





component, we extract hand gesture features as vectors through hand gesture segmentation and vector extraction. The APs-based hand gesture recognition component, in turn, is developed in hardware [7], which satisfies the requirement for fast recognition. Moreover, one of the main goals of this paper is to propose an architecture, combining software and hardware, that solves the real-time hand gesture recognition problem. The architecture is also designed for applications that employ interaction between human and computer through hand gestures.

[7] Chen-Chiung Hsieh, Dung-Hua Liou and David Lee, "A Real Time Hand Gesture Recognition System Using Motion History Image," 2nd International Conference on Signal Processing Systems (ICSPS), Dalian, 5-7 July 2010.
[8] Chenglong Yu, Xuan Wang, Hejiao Huang, Jianping Shen and Kun Wu, "Vision-Based Hand Gesture Recognition Using Combinational Features," Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Darmstadt, Germany, 2010.
[9] Mary Ruttum and Sarangi P. Parikh, "Can robots recognize common marine gestures?," 42nd Southeastern Symposium on System Theory (SSST), Tyler, TX, 2010.

ACKNOWLEDGMENT
The author would like to thank Prof. R. W. Jasutkar for her valuable guidance and support.

REFERENCES


There are many approaches to hand gesture recognition, and each has its strengths and weaknesses. The strength of the method proposed in this paper is the scan-line algorithm: the proposed algorithm achieves an 88% average recognition rate. The weakness of the system is that the robot can perform actions only over short distances. In future work, the system will be converted to a wireless model so that actions can be performed over long distances, and additional gestures can be designed for human-computer interaction, such as sign language translation.


VI. CONCLUSION & FUTURE SCOPE

[1] Jagdish Lal Raheja, Radhey Shyam, Umesh Kumar and P. Bhanu Prasad, "Real-Time Robotic Hand Control using Hand Gestures," Second International Conference on Machine Learning and Computing, 2010.


[2] Chetan A. Burande, Raju M. Tugnayat and Nitin K. Choudhary, "Advanced Recognition Techniques for Human Computer Interaction," 2nd International Conference on Computer and Automation Engineering (ICCAE), Singapore, 2010.


[3] Hatice Gunes, Massimo Piccardi and Tony Jan, "Face and Body Gesture Recognition for a Vision-Based Multimodal Analyzer," Conferences in Research and Practice in Information Technology, Vol. 36, 2004.
[4] G. R. S. Murthy and R. S. Jadon, "Hand Gesture Recognition using Neural Networks," 2010 IEEE 2nd International Advance Computing Conference (IACC), Patiala, 19-20 Feb. 2010.
[5] Wang Ke, Wang Li, Li Ruifeng and Zhao Lijun, "Real-time Hand Gesture Recognition for Service Robot," 2010 International Conference on Intelligent Computation Technology and Automation.
[6] Chi-Min Oh, Md. Zahidul Islam, Jae-Wan Park and Chil-Woo Lee, "A Gesture Recognition Interface with Upper Body Model-based Pose Tracking," 2010 2nd International Conference on Computer Engineering and Technology (ICCET), Chengdu, 16-18 April 2010.




