
Proceedings of International Conference on Advancements in Engineering and Technology


Multi-View and Multi-Band Face Recognition Survey

Ms. L. MadhuMitha, PG Student, CSE Dept, Velammal Engineering College, lmadhuviet@gmail.com
Ms. A. BhagyaLakshmi, Asst. Prof., CSE Dept, Velammal Engineering College, kirubhagya@yahoo.com

Abstract - Face recognition is a challenging problem in security surveillance and has been an active research area for the past few decades. Owing to different levels of illumination, and to variations in lighting, expression and aging, the recognition rate of such algorithms is considerably limited. To address this problem, a multi-band face recognition algorithm is introduced in this paper. The multi-view and multi-band face recognition approach used here is suitable for estimating the pose of a face from a video source. In the earlier eigenface (PCA) approach, a small number (40 or fewer) of eigenfaces is derived from a set of training face images using the Karhunen-Loeve transform. In this work, instead, the similarity between feature sets from different videos is measured using the wavelet transform and entropy imaging. The experimental results show that the wavelet transform takes less response time, which makes it well suited for feature extraction and face matching with high accuracy and performance in a CBIR system.

Keywords: Image Processing, Face Recognition, Multi-View Videos, Wavelet Transform.

I. Introduction

A biometric system[4] provides automatic recognition of an individual based on some unique feature or characteristic possessed by that individual. Behavioral biometrics includes signatures, voice recognition, gait measurement and keystroke recognition. Physiological biometrics includes facial recognition, fingerprinting, hand profiling, iris recognition, retinal scanning and DNA testing. Behavioral methods tend to be less reliable than physiological methods because they are easier to duplicate than physical characteristics (Jain et al., 1999). Physiological attributes are therefore the more trusted biometrics, among which iris recognition is gaining much attention for its accuracy and reliability. The first automatic face recognition[2][3][5] system was developed by Kanade in 1973. A face recognition system is expected to identify faces present in images and videos automatically. It can operate in either or both of two modes: face verification (or authentication) involves a one-to-one match that compares a query face image against the template face image of the claimed identity; face identification (or recognition)[8][9] involves one-to-many matches that compare a query face image against all the template images in the database to determine the identity of the query face.
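As a rough illustration of these two modes (not from the paper; the 128-dimensional random vectors, the Euclidean distance and the threshold value are placeholder assumptions standing in for real face features), verification compares the query against the single claimed template, while identification compares it against every template in the database:

```python
import numpy as np

def verify(query_feat, template_feat, threshold=0.6):
    """Face verification: one-to-one match of a query feature vector
    against the template of the claimed identity."""
    return np.linalg.norm(query_feat - template_feat) < threshold

def identify(query_feat, template_db):
    """Face identification: one-to-many match against all templates;
    returns the identity whose template is closest to the query."""
    distances = {name: np.linalg.norm(query_feat - feat)
                 for name, feat in template_db.items()}
    return min(distances, key=distances.get)

# Toy usage with random "feature vectors" standing in for real face features.
db = {"alice": np.random.rand(128), "bob": np.random.rand(128)}
query = db["alice"] + 0.01 * np.random.rand(128)
print(verify(query, db["alice"]), identify(query, db))
```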


During face recognition, the major challenges are inter-class similarity and intra-class variation. Inter-class similarity means that different people can have very similar faces, which makes distinguishing them difficult. Intra-class variation is caused by changes in head pose, illumination conditions, expressions, facial accessories and aging effects. Lighting conditions change the appearance of a face, so approaches based on intensity images alone are not sufficient to overcome this problem.

II. Background Concepts

A. Feature Recognition: Biometric facial recognition systems[1][7] compare images of individuals from incoming video against specific databases and send alerts when a positive match occurs. The key steps in facial recognition are face detection, recording of the detected faces, and matching of the recorded faces with those stored in a database through an automatic process that finds the closest match. Applications include: 1. VIP lists – make staff aware of important individuals (VIPs) so that they can respond in an appropriate manner; 2. Black lists – identify known offenders or register suspects to aid public safety; 3. Banking transactions – verify the identity of persons attempting a financial transaction; and so on. Image Acquisition: The image acquisition engine enables frames to be acquired as fast as the camera and PC can support, for high-speed imaging. The image is captured using a digital camera in RGB format. The first function performed by the imaging system is to collect the incoming energy and focus it onto an image plane.
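A minimal sketch of such frame acquisition, assuming OpenCV is available; the device index 0 and the frame count are arbitrary placeholder choices, not values from the paper:

```python
import cv2

# Acquire frames from the default camera (device index 0 is an assumption).
cap = cv2.VideoCapture(0)
frames = []
for _ in range(30):               # grab 30 frames as fast as the camera allows
    ok, frame = cap.read()        # each frame is a BGR image (NumPy array)
    if not ok:
        break
    frames.append(frame)
cap.release()
```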


Fig.1 Multi-Band Face Recognition Processing (the camera captures R 600nm-700nm, G 500nm-600nm, B 400nm-500nm and IR ~1000nm band images; each passes through the wavelet transform and feature extraction, and the extracted features are matched against a database of images to produce the Face ID)

Digital and analog circuitry sweeps these outputs and converts them to an analog signal, which is then digitized by another section of the imaging system; the final output is a digital image. Pre-Processing: The captured image is not used directly for feature extraction and classification, because captured face images are affected by various factors such as noise, lighting variance, climatic conditions, poor image resolution, unwanted background, etc. RGB Image to Gray-Scale Image: An RGB image is converted to gray scale by eliminating the hue and saturation information while retaining the luminance: 30% of the red value, 59% of the green value and 11% of the blue value are added together. To convert a gray intensity value back to RGB, simply set all three primary color components (red, green and blue) to the gray value, correcting to a different gamma if necessary.
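A minimal sketch of the weighted conversion described above, assuming a NumPy array with channels in R, G, B order (the helper names are illustrative, not from the paper):

```python
import numpy as np

def rgb_to_gray(rgb):
    """Weighted gray-scale conversion (0.30 R + 0.59 G + 0.11 B), as in the text.
    Assumes an H x W x 3 array with channels in R, G, B order."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.30 * r + 0.59 * g + 0.11 * b

def gray_to_rgb(gray):
    """Set all three primary components to the gray value (the reverse mapping described in the text)."""
    return np.stack([gray, gray, gray], axis=-1)

# Example on a random 4x4 RGB image.
img = np.random.rand(4, 4, 3)
print(rgb_to_gray(img).shape, gray_to_rgb(rgb_to_gray(img)).shape)
```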


Filtering Techniques: Filtering refers to accepting or rejecting certain frequency components. A filter that passes low frequencies is called a lowpass filter; the net effect of lowpass filtering is to blur (smooth) an image. The two-dimensional ideal lowpass filter is

H(u, v) = 1 if D(u, v) ≤ D0
          0 if D(u, v) > D0

where D0 is a specified nonnegative quantity. A filter that passes high frequencies but reduces the amplitude of components with frequencies lower than the cutoff frequency is called a highpass filter:

H(u, v) = 0 if D(u, v) ≤ D0
          1 if D(u, v) > D0

where D0 is the cutoff distance measured from the origin of the frequency plane.
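The ideal filters above can be sketched in NumPy as follows (an illustrative example, not the paper's implementation; the cutoff D0 = 30 and the random test image are arbitrary assumptions):

```python
import numpy as np

def ideal_filter(shape, d0, highpass=False):
    """Build an ideal lowpass (or highpass) transfer function H(u, v)
    with cutoff distance d0 measured from the centre of the frequency plane."""
    rows, cols = shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # distance D(u, v) from the origin
    H = (D <= d0).astype(float)                      # lowpass: 1 inside the cutoff
    return 1.0 - H if highpass else H

def apply_frequency_filter(image, H):
    """Filter a gray-scale image by multiplying its centred spectrum with H."""
    F = np.fft.fftshift(np.fft.fft2(image))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

# Example: smooth a random test image with an ideal lowpass filter (D0 = 30).
img = np.random.rand(128, 128)
smoothed = apply_frequency_filter(img, ideal_filter(img.shape, d0=30))
```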


Wavelets: Wavelets can be used to extract information from many different kinds of data, including – but certainly not limited to – audio signals and images. Sets of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets will decompose data without gaps or overlap, so that the decomposition process is mathematically reversible.

Wavelet transforms[10] are classified into discrete wavelet transforms (DWTs) and continuous wavelet transforms (CWTs). Both DWT and CWT are continuous-time (analog) transforms and can be used to represent continuous-time (analog) signals. CWTs operate over every possible scale and translation, whereas DWTs use a specific subset of scale and translation values, i.e. a representation grid.

A continuous wavelet transform (CWT) is used to divide a continuous-time function into wavelets. Unlike the Fourier transform, the continuous wavelet transform can construct a time-frequency representation of a signal that offers very good time and frequency localization. A discrete wavelet transform (DWT) is any wavelet transform for which the wavelets are discretely sampled. As with other wavelet transforms, a key advantage it has over Fourier transforms is temporal resolution: it captures both frequency and location (in time) information.

Haar Wavelets: The first DWT was invented by the Hungarian mathematician Alfréd Haar. For an input represented by a list of 2^n numbers, the Haar wavelet transform[10] may be considered to simply pair up input values, storing the difference and passing the sum. This process is repeated recursively, pairing up the sums to provide the next scale, finally resulting in 2^n − 1 differences and one final sum.
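The pairing of sums and differences described above can be sketched in a few lines of NumPy (a toy illustration rather than the paper's implementation; normalisation factors are omitted, and the input length is assumed to be a power of two):

```python
import numpy as np

def haar_transform(signal):
    """One full Haar wavelet decomposition of a length-2**n signal:
    repeatedly store pairwise differences and recurse on the pairwise sums,
    ending with 2**n - 1 difference coefficients and one final sum."""
    data = np.asarray(signal, dtype=float)
    coeffs = []
    while len(data) > 1:
        sums = data[0::2] + data[1::2]        # pass the sums on to the next scale
        diffs = data[0::2] - data[1::2]       # store the differences at this scale
        coeffs.append(diffs)
        data = sums
    coeffs.append(data)                        # the single remaining sum
    return coeffs

# Example with 2**3 = 8 samples: yields 4 + 2 + 1 = 7 differences and 1 sum.
print(haar_transform([4, 6, 10, 12, 8, 6, 5, 5]))
```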

B. Feature Extraction: When the input data are too large to be processed, they are transformed into a reduced representation set of features. Transforming the input data into a set of features is called feature extraction. If the features are carefully chosen, the feature set is expected to capture the relevant information in the input data, so that the desired task can be performed using this reduced representation instead of the full-size input.

Principal Component Analysis: After feature extraction is performed, the feature vectors need to be reduced. Principal component analysis (PCA)[8] is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. The number of principal components is less than or equal to the number of original variables.

The steps involved in PCA can be summarized as follows: obtain the input matrix; calculate and subtract the mean; calculate the covariance matrix and its eigenvectors and eigenvalues; form a new feature vector from the leading eigenvectors; and project the data onto it to derive a new dataset of lower dimension. The new feature vectors are then passed to the classifier.
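A minimal NumPy sketch of these PCA steps (illustrative only; the number of retained components k and the random input are arbitrary assumptions, not values from the paper):

```python
import numpy as np

def pca_project(X, k):
    """Project the rows of X (one observation per row) onto the k leading
    principal components: subtract the mean, form the covariance matrix,
    take its top-k eigenvectors and project the centred data onto them."""
    mean = X.mean(axis=0)
    centred = X - mean                               # subtract the mean
    cov = np.cov(centred, rowvar=False)              # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)           # eigenvalues / eigenvectors
    order = np.argsort(eigvals)[::-1][:k]            # indices of the k largest eigenvalues
    components = eigvecs[:, order]                   # new feature vectors
    return centred @ components                      # lower-dimensional dataset

# Example: reduce 100 random 50-dimensional "feature vectors" to 10 dimensions.
features = np.random.rand(100, 50)
reduced = pca_project(features, k=10)
```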


A standard test data set is used so that researchers can directly compare their results. While there are many databases currently in use, the choice of an appropriate database should be made based on the given task (aging, expressions, lighting, etc.). Another way is to choose a data set specific to the property to be tested (e.g. how an algorithm behaves when given images with lighting changes or images[6] with different facial expressions). If, on the other hand, an algorithm needs to be trained with more images per class (as LDA does), the Yale face database is probably more appropriate than FERET. Some face data sets often used by researchers are:
1. The Color FERET Database, USA: the images were collected in a semi-controlled environment. To maintain a degree of consistency throughout the database, the same physical setup was used in each photography session; because the equipment had to be reassembled for each session, there is some minor variation in images collected on different dates.
2. SCface - Surveillance Cameras Face Database: SCface is a database of static images of human faces. Images were taken in an uncontrolled indoor environment using five video surveillance cameras of various qualities.
3. Natural Visible and Infrared facial Expression database (USTC-NVIE): the database contains both spontaneous and posed expressions of more than 100 subjects, recorded simultaneously by a visible and an infrared thermal camera, with illumination provided from three different directions. The posed database also includes expression images with and without glasses.

C. Feature Matching: If the template image has strong features, a feature-based approach may be considered; the approach may prove further useful if the match in the search image might be transformed in some fashion. Since this approach does not consider the entirety of the template image, it can be more computationally efficient when working with source images of


larger resolution, as the alternative, template-based approach may require searching a potentially large number of points in order to determine the best matching location.

III. Conclusion

Face recognition technology has come a long way in recognising people. Face images from single-view videos are normally not accurate enough, since a single view does not account for pose variations, illumination changes and so on. Hence, in order to provide better performance, this work presents the combination of multi-view videos, IR images and the wavelet transform[10]. Multi-view videos and IR images provide the advantage of overcoming environmental constraints and giving a more accurate image in all conditions, compared with an RGB image, which is accurate only under normal lighting conditions. The wavelet transform removes redundancies and preserves the originality of the image at multiple scales and in multiple directions. Thus our approach helps in feature extraction and face matching with high accuracy and less response time.

IV. References

1. P. Viola and M. J. Jones, “Robust real-time face detection,” Int. J. Comput. Vis., vol. 57, pp. 137–154, May 2004.
2. A. C. Sankaranarayanan, A. Veeraraghavan, and R. Chellappa, “Object detection, tracking and recognition for multiple smart cameras,” Proc. IEEE, vol. 96, no. 10, pp. 1606–1624, Oct. 2008.
3. A. Li, S. Shan, and W. Gao, “Coupled bias-variance tradeoff for cross-pose face recognition,”


IEEE Trans. Image Process., vol. 21, no. 1, pp. 305–315, Jan. 2012.
4. A. K. Jain, R. Bolle and S. Pankanti, “Biometrics: Personal Identification in Networked Society,” Kluwer Academic Publishers, 1999.
5. V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1063–1074, Sep. 2003.
6. P. Breuer, K.-I. Kim, W. Kienzle, B. Scholkopf, and V. Blanz, “Automatic 3D face reconstruction from single images or video,” in Proc. IEEE Int. Conf. Autom. Face Gesture Recognit., Sep. 2008, pp. 1–8.
7. A. Pentland, B. Moghaddam, and T. Starner, “View-based and modular eigenspaces for face recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 1994, pp. 84–91.
8. V. Blanz and T. Vetter, “Face recognition based on fitting a 3D morphable model,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 9, pp. 1063–1074, Sep. 2003.
9. P. Breuer, K.-I. Kim, W. Kienzle, B. Scholkopf, and V. Blanz, “Automatic 3D face reconstruction from single images or video,” in Proc. IEEE Int. Conf. Autom. Face Gesture Recognit., Sep. 2008, pp. 1–8.
10. A. Pentland, B. Moghaddam, and T. Starner, “View-based and modular eigenspaces for face recognition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 1994, pp. 84–91.
11. Z. Dezhong and C. Fayi, “Face Recognition based on Wavelet Transform and Image Comparison,” in Proc. Int. Symp. Computational Intelligence and Design, 2008.


