Poster Paper Proc. of Int. Conf. on Advances in Computer Engineering 2011
An Application of Eigen Vector in Back Propagation Neural Network for Face Expression Identification
1 Ahsan Hussain, M.Tech (Scholar), R.K.D.F Institute of Science & Technology, Bhopal (M.P)
2 Shrikant Lade, Department of Information Technology, R.K.D.F.I.S.T, Bhopal (M.P)
3 Vijay Bhandari, Department of Information Technology, R.K.D.F.I.S.T, Bhopal (M.P)
1 ehsaan_9292@yahoo.co.in, 2 shrikantlade@gmail.com, 3 lnct.vijay@gmail.com

Abstract: This paper presents a methodology for identifying the facial expressions of human beings based on the information theory approach of coding. The task consists of two major phases: 1) identifying the best-matching face from a database; 2) extracting a facial expression from the matched image. Phase one consists of feature extraction using Principal Component Analysis and face recognition using a feed-forward back-propagation neural network. We tested the above task over 1000 to 10000 images, including both color and grayscale images of the same and of different human faces, and obtained 80 to 90% accurate results.

Key Words: Face recognition, Principal Component Analysis (PCA), Artificial Neural Network (ANN), eigenvector, eigenface, expression evaluation.

I. INTRODUCTION

The face is a primary focus of attention in human learning. A human being observes thousands of faces from birth, and the human ability to identify faces is remarkable. Yet even though humans identify faces easily, identifying an expression from a face can be quite difficult. The human skill is robust despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses, beards, or changes in hair style. Developing a computational model of face expression identification is difficult because faces are complex, multi-dimensional visual stimuli. The first step in face expression identification is extraction of the relevant features from facial images.

II. RELATED WORK

Identifying a facial expression consists of 1) face recognition against a database and 2) extraction of the facial expression from the best-matching image. Face recognition has two basic methods. The first is based on extracting feature vectors from the basic parts of a face such as the eyes, nose, mouth, and chin, with the help of deformable templates and extensive mathematics; key information from these parts is gathered and converted into a feature vector. The second is based on information theory concepts, viz. the principal component analysis method, in which the information that best describes a face is derived from the entire face image. Based on the Karhunen-Loeve expansion in pattern recognition, Kirby and Sirovich [5], [6] have shown that any particular face can be represented in terms of a best coordinate system termed "eigenfaces": the eigenfunctions of the average covariance of the ensemble of faces. Later, Turk and Pentland [7] proposed a face recognition method based on the eigenfaces approach. This paper proposes an unsupervised pattern recognition scheme that is independent of excessive geometry and computation. The recognition system is implemented based on eigenfaces, PCA, and an ANN. Principal component analysis for face recognition follows the information theory approach, in which the relevant information in a face image is extracted as efficiently as possible. An artificial neural network is then used for classification, chosen for its ability to learn from observed data.

III. PROPOSED TECHNIQUE

The proposed technique is coding and decoding of face images, emphasizing the significant local and global features. In the language of information theory, the relevant information in a face image is extracted, encoded, and then compared with a database of models. The face recognition system is shown in Fig. 1.

© 2011 ACEEE DOI: 02.ACE.2011.02.137
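The encode-and-compare scheme described above can be sketched as a nearest-neighbour search over stored face descriptors. This is a minimal sketch: the `encode`/`best_match` names, the toy data, and the distance threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def encode(image, eigenfaces, average_face):
    """Project a flattened face image onto the eigenfaces (the face space)."""
    return eigenfaces.T @ (image - average_face)

def best_match(descriptor, database, threshold=10.0):
    """Index of the closest stored descriptor, or None when even the best
    match lies beyond the (illustrative) distance threshold."""
    distances = [np.linalg.norm(descriptor - d) for d in database]
    best = int(np.argmin(distances))
    return best if distances[best] <= threshold else None

# Toy example: 4-pixel "images" and 2 eigenfaces.
avg = np.zeros(4)
U = np.eye(4)[:, :2]
database = [encode(np.array([1.0, 0.0, 0.0, 0.0]), U, avg),
            encode(np.array([0.0, 1.0, 0.0, 0.0]), U, avg)]
query = encode(np.array([0.9, 0.1, 0.0, 0.0]), U, avg)
```

Recognition then reduces to comparing low-dimensional descriptors rather than raw pixel arrays, which is what makes the information-theoretic encoding attractive.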
IV. CALCULATING EIGEN VALUES FOR TRAINING DATA SET IMAGES

The face library entries are normalized. Eigenfaces are calculated from the training set and stored. An individual face can be represented exactly as a linear combination of eigenfaces. The face can also be approximated using only the best M eigenfaces, those with the largest eigenvalues, which account for the most variance within the set of face images. The best M eigenfaces span an M-dimensional subspace of all possible images, called the "face space". The PCA algorithm is used to calculate the eigenfaces.

Let a face image I(x, y) be a two-dimensional N x N array. An image may also be considered a vector of dimension N², so that a typical image of size 92 x 112 becomes a vector of dimension 10,304, or equivalently a point in 10,304-dimensional space. An ensemble of images then maps to a collection of points in this huge space. Images of faces, being similar in overall configuration, are not randomly distributed in this huge image space and can therefore be described by a relatively low-dimensional subspace. The main idea of principal component analysis is to find the vectors that best account for the distribution of face images within the entire image space.

The process implemented for image recognition and expression identification is shown in the following figure.

Fig 1: Face recognition system (Start → Select Image → Face Acquisition → Pre-Processing → Face Library Formation → Training Images / Testing Images → Eigen Face Formation → Select Eigen Faces → Face Descriptor of training images / New Face Descriptor)

V. CALCULATING EIGEN VALUES FOR BOTH TRAINING & TESTING SET IMAGES

Let the training set of face images be Γ1, Γ2, ..., ΓM. The average of the set is defined by

Ψ = (1/M) Σₙ₌₁..M Γn    (1)

An example training set is shown in Fig. 2, with the average face Ψ. Each face differs from the average by the vector

Φi = Γi − Ψ    (2)

This set of very large vectors is then subject to principal component analysis, which seeks a set of M orthonormal vectors un that best describe the distribution of the data. The kth vector uk is chosen such that

λk = (1/M) Σₙ₌₁..M (ukᵀ Φn)²    (3)

is a maximum, subject to

ulᵀ uk = δlk  (1 if l = k, 0 otherwise)    (4)

The vectors uk and scalars λk are the eigenvectors and eigenvalues, respectively, of the covariance matrix

C = (1/M) Σₙ₌₁..M Φn Φnᵀ = A Aᵀ    (5)

where the matrix A = [Φ1, Φ2, ..., ΦM]. The covariance matrix C, however, is an N² x N² real symmetric matrix, and determining its N² eigenvectors and eigenvalues is an intractable task for typical image sizes. We need a computationally feasible method to find these eigenvectors. If the number of data points in the image space is less than the dimension of the space (M < N²), there will be only M − 1 meaningful eigenvectors rather than N²; the remaining eigenvectors have associated eigenvalues of zero. We can solve for the N²-dimensional eigenvectors in this case by first solving for the eigenvectors of an M x M matrix (e.g., a 16 x 16 matrix rather than a 10,304 x 10,304 matrix) and then taking appropriate linear combinations of the face images Φi. Consider the eigenvectors vi of AᵀA such that

AᵀA vi = μi vi    (6)

Premultiplying both sides by A, we have

AAᵀ (A vi) = μi (A vi)    (7)

from which we see that A vi are the eigenvectors of C = AAᵀ. Following this analysis, we construct the M x M matrix L = AᵀA, where Lmn = Φmᵀ Φn, and find its M eigenvectors vl. These vectors determine linear combinations of the M training-set face images that form the eigenfaces ui:

ui = Σₖ₌₁..M vik Φk    (8)

With this, the calculations become quite manageable. The associated eigenvalues allow us to rank the eigenvectors according to their usefulness in characterizing the variation among the images.
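The eigenface computation above, which diagonalizes the small M x M matrix AᵀA instead of the full N² x N² covariance matrix, can be sketched in NumPy. This is a minimal sketch with random stand-in images; the sizes and variable names are illustrative assumptions.

```python
import numpy as np

# Illustrative sizes: M = 16 training images of 92 x 112 pixels, each
# flattened to an N-squared = 10,304-dimensional vector (random stand-ins here).
M, H, W = 16, 112, 92
rng = np.random.default_rng(0)
faces = rng.random((M, H * W))        # rows are the flattened face vectors

psi = faces.mean(axis=0)              # average face
Phi = faces - psi                     # difference of each face from the average

A = Phi.T                             # (N^2 x M) matrix of difference vectors
L = A.T @ A                           # small M x M matrix (16 x 16, not 10,304 x 10,304)
mu, v = np.linalg.eigh(L)             # eigenvalues/eigenvectors of the small matrix

order = np.argsort(mu)[::-1][:M - 1]  # keep the M-1 meaningful eigenvectors
U = A @ v[:, order]                   # A v_i are the eigenvectors of A A^T (eigenfaces)
U /= np.linalg.norm(U, axis=0)        # normalize each eigenface

# Face descriptor: projection of a face onto the best few eigenfaces.
M_best = 8
descriptor = U[:, :M_best].T @ (faces[0] - psi)
```

Note that `A @ v[:, order]` realizes the premultiplication step: each small-matrix eigenvector is mapped to a full-dimensional eigenvector of the covariance matrix without ever forming the 10,304 x 10,304 matrix.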
Fig 2: Training Set Images
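The face descriptors obtained from the eigenface projection are then classified by a feed-forward back-propagation network. Below is a minimal sketch of such a network in plain NumPy; the layer sizes, learning rate, toy data, and the NumPy implementation itself are illustrative assumptions, not the toolbox-based setup used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 8-dimensional face descriptors, 6 hidden units,
# 3 output classes (one per person). All values here are stand-ins.
n_in, n_hid, n_out = 8, 6, 3
W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
W2 = rng.normal(0.0, 0.5, (n_out, n_hid))

def forward(x):
    h = sigmoid(W1 @ x)                            # hidden-layer activations
    return h, sigmoid(W2 @ h)                      # network output

def train_step(x, target, lr=0.5):
    """One back-propagation update (gradient descent on squared error)."""
    global W1, W2
    h, y = forward(x)
    d_out = (y - target) * y * (1.0 - y)           # output-layer error term
    d_hid = (W2.T @ d_out) * h * (1.0 - h)         # error back-propagated to hidden layer
    W2 -= lr * np.outer(d_out, h)
    W1 -= lr * np.outer(d_hid, x)

# Toy training set: one descriptor per person, one-hot class targets.
X = rng.normal(size=(3, n_in))
T = np.eye(3)
for _ in range(3000):
    for x, t in zip(X, T):
        train_step(x, t)
```

After training, the predicted person for a new descriptor is simply the index of the largest output unit.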
VI. EXPERIMENTAL RESULT

Fig 3: Testing Set Images

Fig 4: Result showing output of Project

VII. RESULT ANALYSIS

1) Result of the Artificial Neural Network trained over images of the same person.

IX. CONCLUSION

This paper presents a new technique for face expression identification. The overall results shown in the result section indicate that high accuracy can be achieved when the neural network is trained over several images of the same person. Table 1 shows that the accuracy rate is higher when the network is trained over images of the same person than when it is trained over images of different persons. Finally, we compared our results with the previous technique and obtained better results.

REFERENCES

[1] Yuille, A. L., Cohen, D. S., and Hallinan, P. W., "Feature extraction from faces using deformable templates", Proc. of CVPR, (1989).
[2] S. Makdee, C. Kimpan, S. Pansang, "Invariant range image multi-pose face recognition using fuzzy ant algorithm and membership matching score", Proceedings of the 2007 IEEE International Symposium on Signal Processing and Information Technology, 2007, pp. 252-256.
[3] Victor-Emil and Luliana-Florentina, "Face Recognition using a fuzzy-Gaussian ANN", IEEE 2002 Proceedings, Aug. 2002, pp. 361-368.
[4] Howard Demuth, Mark Beale, Martin Hagan, "Neural Network Toolbox".
[5] Kirby, M., and Sirovich, L., "Application of the Karhunen-Loeve procedure for the characterization of human faces", IEEE PAMI, Vol. 12, pp. 103-108, (1990).
[6] Sirovich, L., and Kirby, M., "Low-dimensional procedure for the characterization of human faces", J. Opt. Soc. Am. A, Vol. 4, No. 3, pp. 519-524, (1987).
[7] Turk, M., and Pentland, A., "Eigenfaces for recognition", Journal of Cognitive Neuroscience, Vol. 3, pp. 71-86, (1991).
[8] Manjunath, B. S., Chellappa, R., and Malsburg, C., "A feature based approach to face recognition", Trans. of IEEE, pp. 373-378, (1992).
[9] S. Gong, S. J. McKenna, and A. Psarrou, Dynamic Vision, Imperial College Press, London, 2000.