International Journal of Advanced Information Science and Technology (IJAIST), Vol. 22, No. 22, February 2014
ISSN: 2319-2682

AN EFFICIENT APPROACH FOR RECOGNIZING PARTIAL FACES WITH ILLUMINATION AND EXPRESSION VARIATIONS

V. Geetha
PG Scholar, Department of CSE, Karpagam University, Coimbatore, India
geethavelu2005@gmail.com

A. Nagajothi
Assistant Professor, Department of CSE, Karpagam University, Coimbatore, India
nagajothikris@gmail.com

Abstract — The proposed system addresses the problem of partial face recognition from a single 2-D face image under facial expressions and different illumination conditions. For many partial face recognition settings, such as using a photo for face identification at customs security or identifying a person from a photo on an ID card, it is infeasible to gather multiple training images for each subject, especially with different expressions. Therefore, our goal is to solve the expressive face recognition problem under the condition that the training database contains only neutral face images, with one neutral face image per subject. For an input test image, the expressional motion from each neutral face in the database can be calculated and the probability of such a facial expression estimated. Using this information, the neutral images in the database can be further warped to faces with the exact expression of the input image. A framework is proposed for partial face recognition based on sparse representations of image gradient orientations, and it is proposed to exploit both types of information, i.e., the computed expression and the gallery image, to improve the accuracy of face recognition.

Index Terms – Gradient Orientation, Partial Face Recognition, Sparse Representation.

I. INTRODUCTION

Face recognition (FR) is one of the most active research areas in computer vision and pattern recognition. It is the problem of verifying or identifying a face from its image. Face recognition supports many challenging real-world applications, including security, general identity verification, criminal justice systems, and video surveillance. A general difficulty in face recognition is that the human face is not a unique, rigid object. While face recognition in controlled conditions has already achieved impressive performance over large-scale galleries, many challenges remain for face recognition [1] in uncontrolled environments, such as partial occlusions, large pose variations, and extreme ambient illumination.

The partial face recognition (PFR) problem is different from the holistic face recognition problem. Commercial off-the-shelf (COTS) face recognition systems [2] are not able to handle the general PFR problem because they need to align faces by facial landmarks, which may be occluded. PFR is needed, for example, to recognize the identity of a suspect based on a partial face.

Partial faces with illumination and expression variations are shown in Figure 1(a) and Figure 1(b). Traditional methods [3], [4], [5] require face alignment before recognizing the face in an image, and face alignment is based on the detection of facial landmarks. Some traditional methods address particular partial-face scenarios; for example, subspace methods are used to recognize faces that may be occluded.

Figure 1(a). Partial faces with illumination variations.
Figure 1(b). Partial faces with expression variations.



II. RELATED WORK

The most elegant approach is to first detect the two eyes and then normalize the face geometrically once the face is aligned. The Active Appearance Model (AAM) [6] entails a certain fixed number (typically 68) of landmarks on the holistic face. Many face recognition methods have been proposed for recognizing a face in an image. Feature-based approaches [7], [8], [9] identify and extract distinctive facial features such as the eyes, mouth, and nose, as well as other facial marks; such methods are relatively robust to position variations in the input image. Holistic approaches [7], [8], [9] identify faces using global representations, i.e., descriptions based on the entire image rather than on local features of the face. They do not perform effectively under large variations in pose, scale, and illumination.

The SRC (Sparse Representation based Classification) approach [10], [11] was proposed by Wright et al. It has been applied to the problem of automatically recognizing human faces from frontal views with varying illumination and expression, as well as occlusion and disguise. However, SRC requires aligned faces and uses a single fixed-size feature vector to represent a face. It also requires a sufficient number of training samples covering all possible illumination variations for each subject. The weighted SRC algorithm [12] is more effective in handling illumination, expression, occlusion, and other errors, and works well for low-dimensional face images with disguise, but it still needs frontal images for face recognition. The MKD (Multi-Keypoint Descriptors)-SRC approach [13] performs satisfactorily only in limited scenarios such as occlusion and arbitrary pose of objects; it does not stand out for face recognition under varying illumination conditions and expressions.

The LBP (Local Binary Pattern) operator [14] was introduced by Ojala et al. When the texture and shape of images are described properly, this method is robust against face images with different facial expressions and different lighting conditions; however, it requires aligned faces and uses a single fixed-size feature vector to represent a face. A novel low-computation discriminative feature space has been introduced for facial expression recognition, capable of robust performance over a range of image resolutions while requiring far fewer computational resources. Although LBP features are robust for low-resolution face images, they require the holistic face for facial expression recognition. One of the most important challenges for a practical recognition system is to make recognition more reliable under uncontrolled lighting conditions. New methods [15] for face recognition have been presented based on robust preprocessing and an extension of the Local Binary Pattern (LBP) local texture descriptor. The combination of Gabor wavelets and LBP gives good performance on three well-known large-scale face datasets (Extended Yale-B, CAS-PEAL, and FRGC-204) that contain widely varying lighting conditions, but good alignment of the input image is essential. Gabor-feature-based classification [16] has some drawbacks; to overcome them, Gabor-feature-based SRC (GSRC) was proposed by Meng Yang and Lei Zhang for face recognition. It reduces the computational cost of coding occluded face images and has been evaluated under different conditions, including variations of illumination, expression, and pose, as well as block occlusion and disguise. It also requires a sufficient number of training samples covering all possible illumination variations for each subject.

III. OBJECTIVES & OVERVIEW OF THE PROPOSED MECHANISM

A. Objectives

In this paper, we propose a framework for partial face recognition based on sparse representations of image gradient orientations. For many partial face recognition settings, it is infeasible to gather multiple training images for each subject, especially with different expressions. Therefore, our goal is to solve the expressive face recognition problem under the condition that the training database contains only neutral face images, with one neutral face image per subject. For an input test image, the expressional motion from each neutral face in the database can be calculated and the probability of such a facial expression estimated. Using this information, the neutral images in the database can be further warped to faces with the exact expression of the input image.

B. Overview of the Proposed Mechanism

The key idea of the proposed system is to replace pixel intensities with gradient orientations and then define a mapping from the space of gradient orientations onto a high-dimensional unit sphere. The key observation is that, in contrast to pixel intensities, representations of this type, when obtained from "visually unrelated" images, are highly incoherent.
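As a rough illustration of this idea (not the authors' code; all names are illustrative), the sketch below maps the gradient orientation of each pixel to the unit-sphere embedding [cos(phi); sin(phi)], normalized so that every image has unit norm. Under this mapping, the inner product between embeddings of visually unrelated images concentrates near zero, which is the incoherence property described above.

```python
import numpy as np

def orientation_embedding(image):
    """Map an image to the unit-sphere embedding of its gradient orientations.

    Returns t(phi) = (1/sqrt(N)) * [cos(phi); sin(phi)], which has unit
    Euclidean norm regardless of the image content.
    """
    gy, gx = np.gradient(image.astype(float))   # discrete image gradients
    phi = np.arctan2(gy, gx)                    # gradient orientation at each pixel
    n = phi.size
    return np.concatenate([np.cos(phi).ravel(), np.sin(phi).ravel()]) / np.sqrt(n)

# For "visually unrelated" images the inner product of the embeddings is close
# to zero, whereas images of the same face remain strongly correlated.
rng = np.random.default_rng(0)
a, b = rng.random((64, 64)), rng.random((64, 64))
print(np.dot(orientation_embedding(a), orientation_embedding(b)))  # ~0 for unrelated images
```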


The flowchart of the proposed method is shown in Fig. 2.

Figure 2. Flowchart for the proposed method: input a face image from the database; arrange the image matrix in objects of similar classes and train the matrix using L1-minimization; normalize the matrix using gradient orientation; validate the input image using sparse representation and gradient faces; obtain the result as the output image; repeat (i = i + 1) for any further test images, otherwise stop.

The proposed approach provides high recognition accuracy for images affected by expressions and nonlinear lighting variations, and it consumes less computation time than conventional techniques.

IV. EFFICIENT PARTIAL FACE RECOGNITION SYSTEM

A canonical frame is not always available for feature extraction in partial faces. Most of the available approaches, whether holistic (e.g., PCA and LDA) or local (e.g., Gabor and LBP), use a fixed-length representation for each face. A fixed-length representation assumes that the face image is aligned and cropped to a predefined size, followed by either concatenating the pixel values or extracting local feature vectors of predetermined dimensionality at fixed locations. It is not possible to extract a fixed-length descriptor for partial faces because of missing facial portions and the difficulty of alignment. Two principles for partial face representation are established in [13]: (i) alignment-free representation, and (ii) variable-length description. The specific content of the input (partial) face determines the length or size of the representation; a holistic face will have a larger descriptor size than a partial face.

A. L1-Minimization

L1-minimization refers to finding the minimum L1-norm solution to an underdetermined linear system. The minimal L1 solution to problems formulated with gradient orientations can be used for fast and robust partial face recognition, even for probe objects corrupted by outliers; the focus here is on the efficiency of various L1-minimization methods for sparse-representation-based face recognition. In the face recognition literature [17], [18], it is known that a well-aligned frontal face image $b \in \mathbb{R}^m$ of a human subject under different illuminations lies close to a low-dimensional subspace, called the face subspace. Therefore, given a known subject class $i$ and sufficient training samples $A_i = [v_{i,1}, v_{i,2}, \dots, v_{i,n_i}] \in \mathbb{R}^{m \times n_i}$, where $v_{i,j}$ denotes the $j$-th training image of the $i$-th subject stacked in vector form, a probe $b$ from the $i$-th class can be represented as $b = A_i x_i$. In real-world applications, it is important for a face recognition system to efficiently handle thousands or more subjects while the dimension of each image remains roughly unchanged. Thus, a preferred algorithm should scale well in terms of the number of classes $C$ and the total number of images $n$.
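A minimal sketch of this sparse-coding model, following the general SRC scheme of [11] rather than the authors' exact pipeline: the columns of A stack the vectorized training images of all classes, the L1-regularized coefficients are recovered with a Lasso solver (one possible choice of L1 solver), and the probe is assigned to the class with the smallest reconstruction residual. Function and parameter names are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, b, alpha=0.01):
    """Sparse-representation classification of a probe vector b.

    A      : (m, n) matrix whose columns are vectorized training images.
    labels : length-n array giving the subject index of each column.
    b      : length-m probe vector.
    Returns the predicted subject index.
    """
    # Approximate min ||x||_1 s.t. A x = b with an L1-penalized least-squares solver.
    solver = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    solver.fit(A, b)
    x = solver.coef_

    # Class-wise reconstruction residuals: keep only the coefficients of one class at a time.
    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)
        residual = np.linalg.norm(b - A @ xc)
        if residual < best_residual:
            best_class, best_residual = c, residual
    return best_class
```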



B. Gradient Orientations Analysis

Low-dimensional embeddings generated from gradient orientations perform equally well even when probe objects are corrupted by outliers, which results in large computational savings. An important factor affecting face recognition is that the image may be partial, misaligned, and carry varying expression, which is often caused by an inaccurate face detector applied to images collected in uncontrolled environments. When a query image is not well aligned with the training images, the face subspace model is not satisfied. This problem can be solved within the sparse representation framework by iteratively optimizing a series of linear approximate problems that minimize the sparse registration error while the query image $b$ is under an image transformation. In this phase, the gradientfaces implementation algorithm is applied to obtain gradientfaces for both query and training images. This algorithm is outlined in Algorithm 1.

ALGORITHM 1: Gradientfaces implementation


In order to extract gradientfaces, the gradient of the face image is calculated first, and the gradientfaces are then computed from their definition. To compute the gradient stably, the image is first smoothed with a Gaussian kernel function; with such convolution-type smoothing, the numerical calculation of the gradient is much more stable. The main advantage of using the Gaussian kernel is twofold: (a) gradientfaces are more robust to image noise, and (b) it reduces the effect of shadows. The gradientfaces implementation produces, for each original image, a corresponding gradientface. Gradientfaces can extract the important features of a face, such as facial shape and facial objects (e.g., eyes, nose, mouth, and eyebrows), even under extreme lighting, and these are key features for face recognition. Therefore, gradientfaces is an illumination-insensitive measure. Furthermore, compared with the original images, the relative positions of the key features represented by gradientfaces are not changed, so gradientfaces is also insensitive to object artifacts such as facial expressions. Gradient orientation analysis gives an image corresponding to the local orientation parallel to the gradient of the image, computed using discrete derivatives of a Gaussian of pixel radius r, and returning values between -π/2 and π/2. The gradientfaces are then verified using the sparse representation method.
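A minimal sketch of this step (illustrative only, not the authors' Algorithm 1): the image is smoothed with a Gaussian kernel, its derivatives are taken, and the gradientface is the per-pixel orientation arctan(Gy/Gx), which lies between -π/2 and π/2 as described above. The smoothing width is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradientface(image, sigma=0.75):
    """Compute a gradientface: per-pixel gradient orientation of a Gaussian-smoothed image.

    Returns values in (-pi/2, pi/2), which makes the representation largely
    insensitive to illumination changes, as described in the text.
    """
    smoothed = gaussian_filter(image.astype(float), sigma=sigma)  # convolution-type smoothing
    gy, gx = np.gradient(smoothed)                                # discrete derivatives
    return np.arctan(gy / (gx + 1e-8))                            # orientation parallel to the gradient
```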

C. Sparse Representation and Verification

Sparse representations of gradient orientations give better recognition rates without the need for block processing and with a smaller number of training samples. Finally, it is shown how to capitalize on the above results for robust and efficient partial face recognition. A tracking algorithm is proposed which, although also based on L1-minimization, is very different from the approach proposed in [19]. In contrast to [19], L1-minimization is used here as a discriminant classifier that separates the object from the background, so the algorithm is closely related to methods that perform "partial verification". Additionally, as opposed to [19], the proposed tracker is based on sparse representations of image gradient orientations and thus does not rely on the extended problem to achieve robustness to outliers. An L1 minimum-distance classifier is used for verification: image I is taken as the training image and another input image J as the probe, the gradientface implementation is applied to both I and J, and a verification score is then defined. In this way, an input face image is matched with the corresponding right one.
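As a rough illustration of the verification step (the exact score used by the authors is not given here, so this is an assumed formulation), the sketch below reuses the gradientface() function from the previous block and defines the verification score as the negative mean L1 distance between the gradientfaces of I and J; the acceptance threshold is likewise an illustrative value.

```python
import numpy as np

def verification_score(train_image, probe_image):
    """L1 minimum-distance verification on gradientfaces (illustrative score)."""
    gi = gradientface(train_image)    # gradientface of the training image I
    gj = gradientface(probe_image)    # gradientface of the probe image J
    return -np.mean(np.abs(gi - gj))  # higher (closer to 0) means a better match

# Illustrative decision rule: accept the match if the score exceeds a chosen threshold.
# accept = verification_score(I, J) > -0.35
```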

V. PERFORMANCE EVALUATION

The performance of the gradient orientation method has been evaluated on three public-domain databases: FRGCv2.0 (Face Recognition Grand Challenge Ver2.0), LFW (Labeled Faces in the Wild), and ORL (Olivetti Research Laboratory), which are summarized in Table 1.

TABLE 1. Face databases used for evaluation

Database  | # of Subjects | Total Images                | Highlights                                                                          | References
FRGCv2.0  | >466          | >50,000 images and 3D scans | very large; lighting, expression, background variations; 3D; sequences             | [20], [21]
LFW       | 5,749         | 13,233 images               | large variations in pose, illumination, and expression; may be arbitrarily occluded | [22]
ORL       | 40            | 400 images                  | slight variations in illumination, pose, facial expression and facial details       | [23]

Experimental Results on Visible, Thermal and Fused Images

A performance evaluation of the proposed face recognition technique on visible, thermal and fused multi-sensor images is carried out on different databases. The face database consists of both long-wave (thermal) and visual face images captured at different times. A subset of 34 individuals is selected for the experiments, each with a total of 10 images. The partial visual images present in this database are pre-registered. Instead of cropping the faces manually, a face detection system is used to segment the face region from the background in each visual image, so that the face images depict a real-time scenario. The corresponding region in the other face images is also segmented based on the coordinates of the faces detected in the visual images. Sample images from the database can be seen. In the second case, recognition is performed under different pose, expression, and visualization conditions. The recognition accuracy increased when the procedure was applied to these images compared with intensity images.

A. Face Recognition on FRGCv2.0 Database

A large database of partial faces is generated from 16,028 face images of 466 subjects from the FRGCv2.0 database. Fig. 3 shows the comparison between the three methods: weighted SRC, gradient, and gradient-sparse. Gradient-sparse performs better than the other two methods in this experiment.

Fig. 3. Comparison between the three methods on the FRGCv2.0 database.

B. Face Recognition on LFW Database

The LFW database consists of realistic and naturally occurring face images captured in uncontrolled environments and downloaded from the Internet. It includes 13,233 images of 5,749 subjects. Face images in LFW contain large variations in pose, illumination, and expression, and may be arbitrarily occluded. Fig. 4 shows the comparison between the three methods: weighted SRC, gradient, and gradient-sparse. Gradient-sparse performs better than the other two methods in this experiment.

Fig. 4. Comparison between the three methods on the LFW database.

C. Face Recognition on ORL Database

The ORL database contains slight variations in illumination, pose, facial expression (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). Fig. 5 shows the comparison between the three methods: weighted SRC, gradient, and gradient-sparse. Gradient-sparse performs better than the other two methods in this experiment.

Fig. 5. Comparison between the three methods on the ORL database.

The proposed gradient-sparse method performs better than both weighted SRC and gradient and improves the recognition performance, which shows that gradient orientation is more suitable for face recognition under pose variations. Experimental results are summarized in Table 2.



TABLE 2. Experimental results (accuracy in %)

Database  | No. of Faces | W-SRC | Gradient | Proposed
ORL       | 13,233       | 67.2  | 80.73    | 99.2
LFW       | 4,000        | 60.3  | 89.83    | 96.5
FRGCv2.0  | 16,028       | 65.72 | 84.34    | 97.4

VI. CONCLUSION

An approach is presented for recognizing faces with illumination and expression variations. Each face image is represented as an image matrix using L1-minimization, and the image matrix is then normalized by gradient orientation. The gradientfaces implementation algorithm is applied to obtain the gradientfaces, and a verification score is defined using sparse representation. As a result, the proposed algorithm is able to recognize human faces with high accuracy even when only a single image per person is provided for training. The efficiency of the proposed method is demonstrated on publicly available databases, and it is shown that the method can perform significantly better than many competitive face recognition algorithms. Given the gradient-sparse representation approach, it would also be useful for recognizing faces with pose variations.

REFERENCES

[1] G. Hua, M.-H. Yang, E. Learned-Miller, Y. Ma, M. Turk, D. J. Kriegman, and T. S. Huang, "Introduction to the special section on real-world face recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 10, pp. 1921-1924, 2011.
[2] "Face recognition technology fails to find UK rioters," http://www.newscientist.com/article/mg21128266.000facerecognition-technology-fails-to-find-ukrioters.html.
[3] J. Ho, M. Yang, J. Lim, K. Lee, and D. Kriegman, "Clustering Appearances of Objects under Varying Illumination Conditions," Proc. IEEE International Conf. Computer Vision and Pattern Recognition, pp. 11-18, 2003.
[4] T. Kanade, Y. Tian, and J. Cohn, "Comprehensive database for facial expression analysis," Proc. IEEE International Conference on Automatic Face and Gesture Recognition (FG), p. 46, 2000.
[5] Marryam Murtaza, Muhammad Sharif, Mudassar Raza, and Jamal Hussain Shah, "Analysis of Face Recognition under Varying Facial Expression: A Survey," The International Arab Journal of Information Technology, vol. 10, no. 4, July 2013.
[6] T. Cootes, G. Edwards, and C. Taylor, "Active appearance models," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681-685, June 2001.
[7] Andrea F. Abate, Michele Nappi, Daniel Riccio, and Gabriele Sabatino, "2D and 3D face recognition: A survey," Pattern Recognition Letters, vol. 28, pp. 1885-1906, 2007.
[8] Rabia Jafri and Hamid R. Arabnia, "A Survey of Face Recognition Techniques," Journal of Information Processing Systems, vol. 5, no. 2, June 2009.
[9] W. Zhao, R. Chellappa, J. Phillips, and A. Rosenfeld, "Face Recognition: A Literature Survey," ACM Computing Surveys, pp. 399-458, 2003.
[10] S. Liao and A. K. Jain, "Partial face recognition: an alignment free approach," Proceedings of the IAPR/IEEE International Joint Conference on Biometrics (IJCB 2011), Oct. 11-13, 2011.
[11] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, pp. 210-227, 2009.
[12] Junying Gan and Juan Xiao, "A Weighted SRC Algorithm for Face Recognition with Disguise," Journal of Information & Computational Science, vol. 9, no. 2, pp. 513-520, 2012.
[13] Timo Ahonen, Abdenour Hadid, and Matti Pietikäinen, "Face Description with Local Binary Patterns: Application to Face Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, December 2006.
[14] Xiaoyang Tan and Bill Triggs, "Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1635-1650, 2010.
[15] C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, vol. 11, pp. 467-476, 2002.
[16] S. Liao, A. K. Jain, and Stan Z. Li, "Partial face recognition: Alignment-free approach," IEEE Transactions on Pattern Analysis and Machine Intelligence, May 2013.
[17] A. Yang, A. Ganesh, Z. Zhou, S. Sastry, and Y. Ma, "A Review of Fast l1-Minimization Algorithms for Robust Face Recognition," arXiv preprint arXiv:1007.3753, 2010.
[18] Allen Y. Yang, Zihan Zhou, Arvind Ganesh, S. Shankar Sastry, and Yi Ma, "Fast L1-Minimization Algorithms for Robust Face Recognition," arXiv preprint arXiv:1007.3753v4 [cs.CV], 26 Aug 2012.
[19] X. Mei and H. Ling, "Robust visual tracking using l1 minimization," Proc. of Int. Conf. on Computer Vision (ICCV), pp. 1-8, 2009.
[20] P. Jonathon Phillips, Patrick J. Flynn, Todd Scruggs, Kevin W. Bowyer, Jin Chang, Kevin Hoffman, Joe Marques, Jaesik Min, and William Worek, "Overview of the Face Recognition Grand Challenge," Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), 2005.
[21] P. J. Phillips, P. J. Flynn, T. Scruggs, K. W. Bowyer, J. Chang, K. Hoffman, J. Marques, J. Min, and W. Worek, "Overview of the face recognition grand challenge," IEEE Conference on Computer Vision and Pattern Recognition, 2005.
[22] G. B. Huang, M. Ramesh, T. Berg, and E. Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," University of Massachusetts, Amherst, Tech. Rep. 07-49, October 2007.
[23] Ralph Gross, "Face Databases," in S. Li and A. Jain (eds.), Handbook of Face Recognition, Springer-Verlag, 2005.

Authors Profile

V. Geetha received the B.Tech. degree in Information Technology from Christian College of Engineering and Technology, Oddanchatram, Anna University, Chennai, India, in 2006. She is currently pursuing the M.E. in Computer Science and Engineering at Karpagam University, Coimbatore, India. Her research interests include Artificial Intelligence, Data Structures, and Image Processing.

A. Nagajothi received her B.E. degree in Computer Science and Engineering from Karpagam College of Engineering, India, in 2005, and the M.E. degree in Computer Science and Engineering from Anna University in 2010. She is an Assistant Professor in the Department of Computer Science and Engineering, Karpagam University. Her research interests include Neural Networks, Artificial Intelligence, Cloud Computing, Grid Computing, and Image Processing.
