A Novel Approach for Detecting the IRIS Crypts


International Journal of Engineering Research and Development e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com Volume 13, Issue 6 (June 2017), PP.01-12

Neelima Chintala 1, D. Ravi Krishna Reddy 2, M. Nagaraju 3

1 M.Tech Research Scholar, ECE, Gudlavalleru Engineering College, Gudlavalleru, Krishna (Dt), A.P
2 Associate Professor, ECE, Gudlavalleru Engineering College, Gudlavalleru, Krishna (Dt), A.P
3 Assistant Professor, IT, Gudlavalleru Engineering College, Gudlavalleru, Krishna (Dt), A.P

ABSTRACT:- The iris is a stable biometric trait that has been widely used for human recognition in various applications. However, deployment of iris recognition in forensic applications has not been reported. A primary reason is the lack of human-friendly techniques for iris comparison. To further promote the use of iris recognition in forensics, the similarity between irises should be made visualizable and interpretable. Recently, a human-in-the-loop iris recognition system was developed, based on detecting and matching iris crypts. Building on this framework, we propose a new approach for detecting and matching iris crypts automatically. Our detection method is able to capture iris crypts of various sizes. Our matching scheme is designed to handle potential topological changes in the detection of the same crypt in different images. Our approach outperforms the known visible-feature-based iris recognition method on three different data sets. After iris crypt detection, iris images taken before and after the treatment of an eye disease are compared, and the output shows the quantitative difference produced by the treatment. A Gabor filter is used to extract the features. The iris recognition effectively withstands most ophthalmic diseases, such as corneal oedema, iridotomies and conjunctivitis. The proposed iris recognition can thus be used to address potential problems in key biometric technology and medical diagnosis.

Keywords:- Iris recognition, forensics, visible feature, human-in-the-loop, eye pathology, ophthalmic disease, corneal oedema, iridotomies, conjunctivitis

I. INTRODUCTION
Iris recognition is one of the most reliable techniques in biometrics for human identification. The Daugman algorithm [1] can achieve a false match rate of less than 1 in 200 billion [2]. Iris recognition techniques have been used widely by governments, such as in the Aadhaar project in India [3]. However, the iris is still under assessment as a biometric trait in law enforcement applications. One reason that hinders the forensic deployment of the iris is that iris recognition results are not easily interpretable to examiners. As discussed in [4], an "Iris Examiner Workstation" may be built analogously to the "Tenprint Examiner Workstation", which has been used in forensics [5]. In fingerprint recognition, a human examiner bases a decision on the number of matched minutiae on two fingerprints [6]. In contrast, common iris recognition techniques, such as Daugman's framework [1], perform matching on an iris code, which is the result of applying a band-pass filter and quantizer to grayscale images. In this scenario, the whole procedure appears as a black box to an examiner without knowledge of image processing. Experiments have shown that human examiners can perform well in identity verification using iris images [7]. In [7], the certainty was rated from 1 to 5, and the decision was made based on human perception of the overall texture. Analogous to fingerprints, one way to further promote the development of iris recognition in law enforcement applications is to make the similarity between irises interpretable, so that the whole process can be supervised and verified by human experts. Namely, the judgement should be made based on quantitative matching of visible features in iris images. In the literature, the study of iris recognition relevant to forensics includes the recognition of irises captured in the visible wavelength [8] or under non-ideal conditions, such as on the move or at a distance [9].
There are very few results on investigating iris recognition using human-friendly features. Known feature-based iris recognition methods, such as ordinal features [10], SIFT descriptors [11], and pseudo-structures [12], are neither easily interpretable nor correspond to any physically visible features. In this paper, we seek to improve the performance of the automated iris recognition process, i.e., the first three steps of the ACE-V framework (Section II-A). Specifically, we propose a new fully automated approach to: (1) extract human-interpretable features in iris images, and (2) match the features with the images in the database to determine the identity. Our proposed approach can provide reliable aid to human evaluation in a human-in-the-loop iris recognition system. Our new approach employs the following observations. In theory, iris crypts may appear in various sizes and shapes in images. In practice, it is sometimes uncertain whether multiple proximal crypts are connected. Furthermore, slight differences in the acquired images of the same iris may alter the topology of the detection of the same crypts from image to image. An example is shown in Figure 2. The two images in Figure 2 are from the same eye, but acquired at different times. Examples of the same crypts with


different topologies are labeled in the red boxes. Yet, even though the topology of particular crypts may vary, the overall similarity can still be determined quite easily by a human examiner. There are two main tasks in our approach: crypt detection and crypt matching. Our detection (or segmentation) algorithm is designed to handle multi-scale crypts. It applies a key morphological operation in a hierarchical manner. Human-annotated training data is used to determine the major parameters, so that the detected crypts are similar to those obtained by human inspection. In our matching algorithm, we adopt a matching model based on the Earth Mover's Distance (EMD) [18]. This matching model is quite general. Specifically, to handle possible differences in crypt topology, our matching algorithm is able to establish correspondences between the detected crypts in two images, which can be one-to-one, one-to-multiple, multiple-to-one, or even multiple-to-multiple matching. Additionally, due to different lighting conditions, there may be some false alarms or missing detections. Not all crypts can be captured in every image, subject to different physical conditions. Our matching algorithm is carefully designed so that it performs robustly under segmentation errors and the potential appearance/disappearance of small crypts. The segmentation algorithm may detect some blob-like regions that do not physically correspond to iris crypts. As long as such regions are stable, they will be accepted as human-interpretable features, and can contribute to discriminating different irises. Our matching algorithm (Section III-B) is designed to be robust to such false positive errors. Therefore, we use the terms "crypts" and "human-interpretable features" interchangeably in the remainder of the paper. A preliminary version of the algorithm proposed herein was presented in [19]. Compared to the prior work, the feature detection approach has been modified here to reduce some false positive errors. In addition, we conducted a more extensive evaluation in this paper compared to [19]. Besides our in-house dataset and ICE2005 [20], our approach was evaluated on the CASIA-Iris-Interval (Version 4.0) dataset [21]. Consistent results were obtained on three different datasets with a fairly large number of subjects and substantial variety. In addition, the benefits of multiple enrollment are demonstrated experimentally for the human-in-the-loop iris recognition system.

Iris recognition is a biometric technology for identifying humans by capturing and analyzing the unique patterns of the iris in the human eye. It can be used in a wide range of applications in which a person's identity must be established or confirmed: passport control, border control, frequent flyer service, premises entry, access to privileged information, computer login, and any transaction in which personal identification and authentication are the key elements. One of the most dangerous security threats in today's world is impersonation, in which somebody claims to be someone else. Through impersonation, a high-risk security area can become vulnerable: an unauthorized person may get access to confidential data, or important documents can be stolen. Normally, impersonation is tackled by identification and secure authentication. Traditional knowledge-based (password) or possession-based (ID, smart card) methods are not sufficient, since they can be easily hacked or compromised.
Hence, there is an essential need for personal-characteristics-based (biometric) identification, since it can provide the highest protection against impersonation. Among other biometric approaches, iris recognition technology promises higher prospects of security. However, due to eye diseases, iris recognition sometimes fails. In the proposed method, disease-affected parts of the iris are identified and remedial actions are taken, so the method can be used for both medical diagnosis and person identification. Commonly occurring conditions are burning eye, bloody eye (subconjunctival hemorrhage), contact lens problems, cataract, discharge (eye drainage), eyelid twitching, and glaucoma. Eye burning is mainly induced by eye strain and eye allergies. Bloody eye is caused when blood vessels break in the sclera: a very small blood vessel ruptures at the eye surface. Contact lens problems arise from wearing poor contact lenses or maintaining bad hygiene; symptoms include a burning sensation, dry eyes, blurred vision, photophobia and redness. They are easily resolved by wearing fresh contact lenses and washing hands before handling them. In the United States, cataracts are mostly found by the age of 80, by which time many people have already had cataract surgery. Double vision, glare and faded colours are symptoms of cataract. Eye drainage is the moisture that leaks out from the eye; discharge is mainly caused by bacteria, viruses, parasites and other organisms. Eyelid twitching is a nerve problem and can persist for weeks or months; it is usually caused by eye stress or fatigue.

II. RELATED WORK
A. Iris Crypts and the Human-in-the-Loop System Overview
Recently, Shen [13] developed a new human-in-the-loop iris biometric system which performs iris recognition by detecting and matching crypts in iris images. Iris crypts are certain relatively thin areas of iris tissue, which may appear near the collarette or in the periphery of the iris. The visibility of iris crypts stems from their relationship with the pigmentation and structure of the iris. In iris images captured under near-infrared (NIR) illumination, the appearance of iris crypts has the following characteristics:
1. The interior has a relatively homogeneous intensity that is lower than that of the neighboring pixels in the exterior.
2. The boundary exhibits stronger edge evidence than either the interior or the exterior.


Compared to fingerprint recognition, iris crypts may serve as the "minutiae of the iris" [14]. Thus, iris recognition was formulated as the problem of detecting and matching iris crypts [15]. Following the ACE-V methodology (Analysis, Comparison, Evaluation, and Verification) commonly used in fingerprint recognition [5], a notional human-in-the-loop iris recognition system would employ the following steps as a scientific method [13]:
1) Analysis (A): Features (iris crypts) are detected on the iris image under investigation, by a computer program or by trained examiners.
2) Comparison (C): A similarity (or dissimilarity) score is computed by comparing the detected features with the feature patterns in the database using a rigorous process.
3) Evaluation (E): A preliminary conclusion is formed according to the score(s).
4) Verification (V): Different trained examiners perform independent manual inspections of the preliminary conclusion, in order to make credible decisions.
Previous experiments [16] have demonstrated that human perception of iris crypts is consistent across different examiners, even without full training. A recent approach [17] aimed to automate the A, C, and E steps. As a consequence, an integrated human-in-the-loop iris recognition system was established [13]. Below, we briefly summarize how the system works in the identification and verification scenarios. For a complete description of the system and the graphical interfaces for human inspection and annotation, we refer readers to [13].
For identification, the probe image under investigation is first processed by the system to detect visible features automatically (the Analysis step). A dissimilarity score between the probe image and each gallery image is computed (the Comparison step). The system retrieves m candidate images from the gallery whose features have the most similar patterns to the probe image, i.e., the smallest dissimilarity scores (the Evaluation step). In practice, m is a small integer, such as 10 or 20. Finally, human examiners manually compare the candidate images against the probe image, with the human-interpretable features labeled and the similarity between the features in the probe image and the different candidate images presented, so as to reach a conclusion on the identity of the probe image (the Verification step).
In verification applications, the system processes the probe image to detect features first (Analysis). The dissimilarity score between the probe image and the gallery image(s) of the identity that the probe image claims to be is computed (Comparison). The system presents the results to human examiners only if the dissimilarity score is lower than a threshold (Evaluation). The human examiners then inspect the results, with the aid of the detected features and the similarity between corresponding features, to accept or reject that the probe image has the claimed identity.
B. Our Contributions: As outlined in Section I, we propose a new fully automated approach to (1) extract human-interpretable features in iris images and (2) match the features with the images in the database to determine the identity, so as to provide reliable aid to human evaluation in a human-in-the-loop iris recognition system. The two main tasks of the approach, crypt detection and crypt matching, are described in Section III, and the evaluation on three datasets is reported in Section V.



III. METHODOLOGY
Our approach consists of three main steps: (a) detecting crypts, (b) matching crypts, and (c) disease identification. The input is normalized iris images (of 64 × 512 pixels). Many algorithms and software packages can be used for this purpose; we use the system in [20]. A dissimilarity score is output for each pair of iris images under comparison.

A. Feature Detection: We employ a hierarchical segmentation algorithm based on morphological reconstruction to detect crypts of different scales. The core operation, denoted by f, is a closing-by-reconstruction top-hat transformation [17]. On a grayscale image I, f has the following formulation:

f(I, r) = [R_{I^c}((I ⊕ D_r)^c)]^c − I    (1)

Here, R_A(B) is the morphological reconstruction of mask A from marker B, ⊕ is the dilation operation whose structuring element is a disk of radius r, denoted by D_r, and I^c is the complement of image I.

The major steps of the segmentation algorithm are depicted in Figure 3. First, the intensity of the image is rescaled to [0, 1]. The image background is estimated (by convolution with a Gaussian kernel) and subtracted from the grayscale image. Then the image is smoothed by a Gaussian filter. The resulting image is denoted by Ip. Next, f is applied in a hierarchical fashion. In level 0, f(Ip, R0) is applied (for a chosen value R0, as discussed below). After that, a binary image BW0 is obtained by thresholding the output of f(Ip, R0) (with threshold = 0), removing small regions, and filling holes inside each connected component. Suppose C = {cc_i, i = 1, . . . , k} is the collection of all connected components in BW0, which will be classified into two groups: acceptable features (AF0) and under-segmented features (UF0). For each connected component cc_i ∈ C, we put cc_i into AF0 if
(i) size(cc_i) < Sz1, or
(ii) Sz1 ≤ size(cc_i) < Sz2 and std(cc_i) < δ,
where size(cc_i) is the number of pixels in cc_i, Sz1 and Sz2 are two size parameters, std(cc_i) is the standard deviation of the intensity of cc_i's corresponding region in Ip, and δ is the trained threshold for the standard deviation of the region intensity. Thresholding the standard deviation of the intensity of segmented regions is meant to include only those regions with relatively homogeneous intensity among all mid-size features, considering the characteristics of iris crypts (see Section II-A). AF0 directly constitutes S0, i.e., the selected features in level 0. On the other hand, UF0 = C \ AF0. A binary mask, M0, is built using all connected components in UF0. In level k (k > 0), f(Ip, R0 − k) is applied. BWk, AFk, UFk, and Mk are obtained similarly as in level 0, but Sk is the intersection of AFk and Mk−1; in other words, all selected features in level k must reside within the region defined by Mk−1. The hierarchical segmentation terminates if UFk = ∅ or k reaches the smallest scale T (T is pre-selected). At the end, the final segmentation BW is the union of the features selected at all levels, i.e., BW = S0 ∪ S1 ∪ · · · ∪ ST.
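To make the core operation concrete, the following Python sketch (an assumed re-implementation for illustration, not the authors' MATLAB code) shows one way to realize the closing-by-reconstruction top-hat f(I, r) and a single level of the hierarchical detection; the pre-processing is simplified and the radius and threshold follow the text (R0 = 8, threshold = 0).

```python
# Sketch of f(I, r) (closing-by-reconstruction top-hat) and one detection level.
# Assumptions: the normalized iris image is a float array rescaled to [0, 1];
# background subtraction and smoothing are simplified stand-ins.
import numpy as np
from scipy.ndimage import binary_fill_holes, gaussian_filter
from skimage.morphology import reconstruction, dilation, disk, remove_small_objects

def closing_by_reconstruction_tophat(img, radius):
    """f(I, r): closing-by-reconstruction of I with a disk of radius r, minus I."""
    seed = dilation(img, disk(radius))                    # marker (>= mask everywhere)
    closed = reconstruction(seed, img, method='erosion')  # reconstruction by erosion
    return closed - img                                   # dark blobs give a positive response

def detect_level(ip, radius, min_area=10):
    """One hierarchy level: threshold f(Ip, r) at 0, drop tiny regions, fill holes."""
    response = closing_by_reconstruction_tophat(ip, radius)
    bw = remove_small_objects(response > 0, min_area)
    return binary_fill_holes(bw)

# Example use on a normalized 64 x 512 iris image I:
# ip = gaussian_filter(I - gaussian_filter(I, sigma=30), sigma=1)  # rough Ip
# bw0 = detect_level(ip, radius=8)                                 # level 0 with R0 = 8
```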

The parameters Sz1, Sz2, and δ were determined by the training data, a small in-house image dataset that was manually annotated by human examiners. In this dataset, there are 188 images from 94 eyes, two images for each eye. Each image was annotated by two different persons. The statistical result of the crypt sizes is summarized in Figure 4(a). We select the 75th percentile as Sz1, i.e., 148, and the upper adjacent value as Sz2, i.e., 314. Then, for those crypts larger than Sz1 but smaller than Sz2, the region contrast is calculated (i.e., the standard deviation of the grayscale values of the pixels within the region). The result is presented in Figure 4(b). δ is set as the median value, namely, 0.06.
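As an illustration of how these statistics can be computed, the short snippet below (a hypothetical helper, not part of the paper) derives Sz1 as the 75th percentile and Sz2 as the upper adjacent value (the Tukey boxplot upper whisker) from a list of annotated crypt sizes.

```python
# Hypothetical helper: derive the size thresholds from annotated crypt sizes.
import numpy as np

def size_thresholds(crypt_sizes):
    sizes = np.asarray(crypt_sizes, dtype=float)
    q1, q3 = np.percentile(sizes, [25, 75])
    sz1 = q3                                      # 75th percentile
    upper_fence = q3 + 1.5 * (q3 - q1)            # Tukey upper fence
    sz2 = sizes[sizes <= upper_fence].max()       # upper adjacent value
    return sz1, sz2

# The paper reports Sz1 = 148 and Sz2 = 314 for its training annotations.
```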



In addition, RT and R0 were determined by experiments on the training dataset. (Note: T is the index of the scale; R0 and RT are the radii of the structuring elements at scale 0 and scale T, respectively.) R0 is the scale that is able to capture the largest crypts in the samples. Any scale smaller than RT will detect only tiny features, which are usually not crypts. (We use RT = 3 and R0 = 8.)

B. Feature Matching: The objective of this step is to measure the similarity/dissimilarity between two iris images based on the detected features. Suppose P = {p1, p2, . . . , pn} and Q = {q1, q2, . . . , qm} are the detected visible features (i.e., connected regions) in two images under comparison. A score ranging from 0 to 1 will be computed to determine the dissimilarity between P and Q (a lower score means a higher similarity). First, a simple registration [17] is applied to compensate for the possible shift between the two images when unwrapping the iris annulus. Then, to reduce the computation overhead, a pre-check is performed so that obviously unmatched iris images can be discarded. P and Q are considered a non-match, i.e., the dissimilarity score is set to 1, if

|P ∩ Q| / min(|P|, |Q|) < σ1   or   ||P| − |Q|| / max(|P|, |Q|) > σ2,

where |·| denotes the total area of the detected features and σ1 and σ2 are pre-determined thresholds. We use σ1 = 0.25 and σ2 = 0.5 in our approach. Intuitively, this means that P and Q have to overlap by more than 25% and their sizes must not differ by more than 50%. Next, the correspondence between the features in P and in Q is computed using the Earth Mover's Distance (EMD) matching model [18] (see Section III-B2 for the details). The output of the EMD matching model is a collection of pairs of matched regions. Each matched pair may correspond to a match between two features, or between two sets of multiple features. Suppose k matched pairs are found between P and Q, namely {(P̄i, Q̄i), i = 1, . . . , k}, where each P̄i is a set of features in P and each Q̄i a set of features in Q.
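A minimal sketch of this pre-check follows, assuming P and Q are available as boolean masks of the detected features on the registered 64 × 512 grid. Since the exact inequalities of the original text are reconstructed here from the stated intuition, the overlap and size-difference tests below should be read as an interpretation rather than the authors' exact formula.

```python
# Hedged sketch of the pre-check (interpretation of the stated thresholds).
import numpy as np

def precheck_nonmatch(p_mask, q_mask, sigma1=0.25, sigma2=0.5):
    p_area, q_area = int(p_mask.sum()), int(q_mask.sum())
    overlap = np.logical_and(p_mask, q_mask).sum() / min(p_area, q_area)
    size_diff = abs(p_area - q_area) / max(p_area, q_area)
    # Declared a non-match (dissimilarity = 1) unless the feature sets overlap
    # enough and their total sizes are sufficiently close.
    return overlap < sigma1 or size_diff > sigma2
```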

To take potential iris deformation and movement into account, the dissimilarity between each matched pair P̄i and Q̄i is computed as the minimum of {Sim(P̄i′, Q̄i) : P̄i′ is obtained from P̄i by shifting up to ±h pixels in the horizontal or vertical direction}, where Sim(∗, ∗) is the feature dissimilarity defined below. Next, the k matched pairs are ranked. The final dissimilarity score is the weighted arithmetic mean of the top r% pairs, where the weight of the i-th ranked pair is computed by Formula (2), in which spi (resp., sqi) is the size of P̄i (resp., Q̄i). Basically, if a match pair is more reliable, then it has a larger contribution (i.e., a larger weight) in the final score. The reliability of a match pair is evaluated in two aspects: the first term in Formula (2) measures the average size of the matched pair, and the second term assesses the size difference between the matched pair (rescaled to (0, 1)). Finally, the smaller the final dissimilarity score, the more similar the two images.

1) Feature Dissimilarity: To measure the dissimilarity between two features or two sets of features, A and B, the following equation is used:

Sim(A, B) = α · F1(Sym(A, B)) + (1 − α) · F2(Haus(A, B))    (3)

where Sym(A, B) is the symmetric difference between A and B, Haus(A, B) is the Hausdorff distance between A and B, α is a weight parameter (α = 0.5 in our implementation), and F1(∗) and F2(∗) are two sigmoid functions that rescale the symmetric difference and the Hausdorff distance. The proposed dissimilarity measure is designed to evaluate shape similarity in terms of two different aspects. In general, the symmetric difference evaluates how much two regions overlap. On the other hand, the Hausdorff distance focuses on how far each point in one region lies from the other region, no matter how much the two regions overlap. To combine these two shape similarity measures, sigmoid functions are used to rescale them into the same range, i.e., (0, 1). Another advantage of the sigmoid functions is to emphasize the similarity difference in a particular range. After computing the first and second


order derivatives, we know that F1(t) amplifies the discriminability of the symmetric difference in the range [0.5 − h, 0.5 + h], and weakens the discriminability in the ranges [0, 0.5 − h) and (0.5 + h, 1], where h = 0.2063. F2(t) has a similar property.

2) EMD Matching Model: EMD is a similarity measure for comparing multi-dimensional distributions. In computer vision, EMD was first introduced in [22] for image retrieval. In [18], an EMD-based matching model was proposed for establishing correspondence between bacteria in consecutive image frames of time-lapse videos. Major advantages of this EMD matching model include the capability of multiple-to-multiple matching of objects and the robustness in dealing with various segmentation errors [18]. Below, we briefly summarize the EMD-based matching model [18]. The input consists of two signatures, P = {(p0, wp0), (p1, wp1), . . . , (pn, wpn)} and Q = {(q0, wq0), (q1, wq1), . . . , (qm, wqm)}. In the context of visible feature matching here, pi (1 ≤ i ≤ n) and qj (1 ≤ j ≤ m) are detected features in two iris images, while wpi and wqj are the sizes of the features pi and qj, respectively; p0 and q0 are auxiliary variables, both of size infinity (needed by the processing of the model). A key component of EMD is the ground distance between pi and qj, denoted by Dij. Intuitively, if one views each pi as a pile of dirt (of an amount wpi) and each qj as a hole (of a volume wqj), then Dij represents the cost of moving one unit of dirt from pi to qj. Let fij denote the mass of flow from pi to qj. Then, EMD measures the smallest average cost for mass distribution from P to Q under certain linear constraints. Precisely, it has the following definition. Minimize the objective function

EMD(P, Q) = (Σ_{i=0..n} Σ_{j=0..m} Dij · fij) / (Σ_{i=0..n} Σ_{j=0..m} fij)    (4)

subject to

Σ_{j=0..m} fij ≤ wpi, for 0 ≤ i ≤ n    (5a)
Σ_{i=0..n} fij ≤ wqj, for 0 ≤ j ≤ m    (5b)
Σ_{i=0..n} Σ_{j=0..m} fij = min(Σ_{i=0..n} wpi, Σ_{j=0..m} wqj)    (5c)
fij ≥ 0, for 0 ≤ i ≤ n, 0 ≤ j ≤ m    (5d)

In our problem, Dij (with both i > 0 and j > 0) is the dissimilarity between the features pi and qj, namely Sim(pi, qj), as described in Section III-B1. If exactly one of i and j is 0, Dij = 1, which is an upper bound of the function Sim(∗, ∗); this is the cost for a special "feature" that matches to nothing. D00 is set to +∞ in order to avoid correspondence between the two auxiliary variables p0 and q0. In essence, EMD can be solved as a transportation problem in polynomial time [23]. From the perspective of a transportation problem, pi is a source and qj is a destination, and their weights are, respectively, the amount available at a source and the amount demanded by a destination. Then, constraint (5a) (resp., (5b)) restricts each source (resp., destination) to send (resp., receive) no more than the available amount (resp., the demanded amount); constraint (5c) means that we have to either empty all available supplies at the sources or satisfy all demands at the destinations; constraint (5d) ensures that the flows can only move from the sources to the destinations. By solving the above optimization problem, it is easy to interpret the resulting correspondence between the features in P and in Q once {fij} achieves optimality. Specifically, a large amount of flow between pi and qj indicates a strong correspondence between the two crypts pi and qj. Finally, the output is a collection of matched features. A matching example is shown in Figure 1.

Figure 1: Illustrating the EMD-based matching model. Given P = {p1, . . . , p5} and Q = {q1, . . . , q6} as the features detected in two iris images, a bipartite graph is built in the EMD matching model (some of the edges are omitted for clear visualization). p0 and q0 are auxiliary variables. Three matched pairs are obtained by solving the model. Here, q3 is a false positive detection that matches to nothing.
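To make the matching machinery concrete, the following Python sketch (an assumed re-implementation, not the authors' code) illustrates the feature dissimilarity of Equation (3), with generic logistic functions standing in for F1 and F2 whose exact parameters are not reproduced here, and the EMD correspondence solved as a transportation linear program following Equations (4) and (5a)-(5d).

```python
# Illustrative sketch: feature dissimilarity (Eq. 3) and the EMD matching LP.
# Assumptions: features are boolean masks on the same grid; F1/F2 are generic
# logistic functions; Sym is normalized by the union area (one possible choice).
import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import directed_hausdorff

def sim(mask_a, mask_b, alpha=0.5):
    """Dissimilarity in [0, 1] combining symmetric difference and Hausdorff distance."""
    sym = np.logical_xor(mask_a, mask_b).sum() / np.logical_or(mask_a, mask_b).sum()
    pa, pb = np.argwhere(mask_a), np.argwhere(mask_b)
    haus = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
    f1 = 1.0 / (1.0 + np.exp(-10.0 * (sym - 0.5)))    # rescaled symmetric difference
    f2 = 1.0 / (1.0 + np.exp(-0.5 * (haus - 10.0)))   # rescaled Hausdorff distance
    return alpha * f1 + (1.0 - alpha) * f2

def emd_flows(weights_p, weights_q, D):
    """Solve the EMD transportation LP; returns the flow matrix f_ij."""
    n, m = len(weights_p), len(weights_q)
    # Since constraint (5c) fixes the total flow, minimizing the unnormalized
    # total cost is equivalent to minimizing the normalized objective (4).
    c = np.asarray(D, dtype=float).flatten()
    A_ub, b_ub = [], []
    for i in range(n):                                 # (5a) supply constraints
        row = np.zeros((n, m)); row[i, :] = 1
        A_ub.append(row.flatten()); b_ub.append(weights_p[i])
    for j in range(m):                                 # (5b) demand constraints
        row = np.zeros((n, m)); row[:, j] = 1
        A_ub.append(row.flatten()); b_ub.append(weights_q[j])
    total = min(sum(weights_p), sum(weights_q))        # (5c) move as much mass as possible
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=[np.ones(n * m)], b_eq=[total],
                  bounds=(0, None), method='highs')    # (5d) f_ij >= 0
    return res.x.reshape(n, m)

# Strong flows between feature i of P and feature j of Q indicate matched crypts;
# the auxiliary "match-to-nothing" entries can be added as an extra row/column
# with ground distance 1, as described in the text.
```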


3) System Design: The system design gives the details of the overall design of the proposed work. Figure 2 shows the block diagram of the proposed system; its blocks are the input image, the database, pre-processing operations, localization, normalization, crypt detection, segmentation using morphological operations, Gabor filtering, template generation, pattern matching using a distance measure, a Support Vector Machine (SVM) classifier, and the result.
Figure 2: Block diagram of the proposed system.

C. Disease Identification: Here, a Gabor filter is used for feature extraction. The iris can be compared before and after treatment: the extracted features are matched to find the disease-affected area of the iris. The input data are passed through two directional filters to determine the existence of ridges and their orientation. The RED iris recognition algorithm uses directional filtering to generate the iris template, a set of bits that meaningfully represents a person's iris. Feature extraction [24] transforms the input data into a reduced representation set of features (also called a feature vector). Pattern matching [25] compares the newly acquired representation with the representations in the database. To calculate the similarity of two iris codes, the Hamming Distance (HD) is used; a lower Hamming distance means a higher similarity. Disease identification is performed after pattern matching of the two irises: if the iris features match, there is no disease; otherwise, the disease can be identified from the unmatched features.

IV. EXPERIMENTAL RESULTS
This work has been implemented using Matlab R2012b. The iris is compared before and after treatment: a Gabor filter is used to extract the features, which are matched to find the disease-affected area of the iris. Figure 3(a) shows the result when the two iris images are matched, and Figure 3(b) the result when they are not matched.
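The following sketch (a hedged illustration in Python rather than the authors' MATLAB implementation) shows one way to build a simple Gabor-based binary code from a normalized iris image and compare two such codes with the Hamming distance, as described in Section III-C above; the kernel parameters and the decision threshold are illustrative assumptions.

```python
# Illustrative Gabor coding and Hamming-distance comparison (assumed parameters).
import cv2
import numpy as np

def gabor_code(norm_iris, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Binary feature code from directional Gabor filter responses."""
    code = []
    for theta in thetas:
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0)
        response = cv2.filter2D(norm_iris.astype(np.float32), cv2.CV_32F, kernel)
        code.append(response > 0)          # quantize each oriented response
    return np.stack(code)

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits; lower means more similar."""
    return np.count_nonzero(code_a != code_b) / code_a.size

# Usage on pre- and post-treatment normalized 64 x 512 iris images:
# hd = hamming_distance(gabor_code(iris_before), gabor_code(iris_after))
# changed = np.any(gabor_code(iris_before) != gabor_code(iris_after), axis=0)
# A large hd (above an assumed threshold) flags regions affected by disease or treatment.
```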

Figure 3(a): The two iris images are matched.


Figure 3(b): The two iris images are not matched.



Figure 4: (a) Results obtained from the given input iris image; (b) results showing that the given iris image is matched.

Figure 5: Patient details of the corresponding matched iris image.

Figure 6: The corresponding eye disease of the given iris image.



Figure 7: Eye disease patient details of the corresponding matched iris image.

V. RESULTS ANALYSIS
A. Datasets and Software: We conducted experiments on three datasets, our in-house dataset [17], ICE2005 [20], and CASIA-Iris-Interval (v4) [21], in order to evaluate our proposed iris recognition approach in both the identification and verification scenarios. Our in-house dataset [17] contained 3505 images from 701 eyes, five images for each eye. In the experiments, one image of each eye was randomly selected as the gallery image, while the other four images of the same eye were used as probe images. Thus, the probe set contained 2804 images, and there were 701 images in the gallery set. The in-house dataset will be released at http://www.nd.edu/~cvrl. In ICE2005 [20], there were 2953 images. Two images were rejected by irisBEE in the pre-processing stage due to off-angle iris. ICE2005 contained images from 244 different eyes, 175 eyes with multiple images and 69 eyes with only one image enrolled. Thus, the gallery dataset consisted of 244 images, each randomly selected for a unique eye. The remaining 2707 images formed the probe set. The CASIA-Iris-Interval dataset (Version 4.0) [21] was collected by the Chinese Academy of Sciences' Institute of Automation (CASIA) and captured by a novel self-developed camera. The images present very detailed textures, which are good for visible feature detection. In this dataset, there were 2639 images from 395 eyes, each with multiple images enrolled. 395 images were randomly selected, each from a unique eye, to form the gallery dataset. The probe dataset consisted of the remaining 2244 images. Due to the randomness of the gallery and probe set partition, the experiments on all three datasets were repeated ten times in order to obtain statistically valid results. The performance of our method on the dataset mixing all three datasets (i.e., with 7755 probe images and 1340 gallery images/subjects) is also reported. All original NIR images were pre-processed and unwrapped into images of 64 × 512 pixels by the irisBEE software [20]. Our proposed automated approach was implemented and tested in Matlab, with the unwrapped images as input. Our approach was compared with the method of Shen and Flynn [17].

B. Identification: In the experiments on human identification, each probe image was compared against all gallery images to determine the identity of the probe image. The top m (say 10) candidates with the smallest dissimilarity scores were presented to human examiners for further inspection. This was a closed-set comparison; namely, it was known that at least one image from the same subject had been enrolled in the gallery set. Before selecting the candidates, a pre-check was imposed. Suppose the k-th gallery image has nk matched features with the probe image. Then, any gallery image with fewer than 0.5 × max{nk, over all k} matched features is considered a non-match. Among the remaining gallery images, the m gallery images with the smallest dissimilarity scores were output as candidates.
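A small sketch of this candidate-selection rule (with hypothetical helper names, assuming the per-gallery dissimilarity scores and matched-feature counts have already been computed) is given below.

```python
# Hedged sketch of candidate selection for identification.
import numpy as np

def select_candidates(dissimilarity, n_matched_features, m=10):
    dissimilarity = np.asarray(dissimilarity, dtype=float)
    n_matched = np.asarray(n_matched_features)
    keep = n_matched >= 0.5 * n_matched.max()          # pre-check on matched features
    scores = np.where(keep, dissimilarity, np.inf)     # discard weak gallery entries
    return np.argsort(scores)[:m]                      # indices of the top-m candidates
```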


The cumulative match characteristic (CMC) curve was adopted as the metric. Generally, the accuracy at rank m represents the probability that the correct subject is among the top m candidates. In forensic applications, we hope to return a small set of candidates to professional examiners for further inspection, while the correct subject has a high probability of being within the selected candidates. For the in-house dataset, the results of our proposed approach and the method of Shen and Flynn [17] were plotted; our approach achieved an at least 22% higher rank-one hit rate than [17]. For ICE2005, the rank-one hit rate of our approach was at least 58% higher than that of [17]. On the CASIA-Iris-Interval dataset, our approach achieved a 56% higher rank-one hit rate than [17]. Furthermore, 95% confidence intervals were computed for the results on the dataset mixing all three datasets and on each individual dataset. For our approach, on all datasets, if we select the top 10 candidates for further inspection, the probability that the true image is returned is higher than 95%. The errors incurred by our approach were mainly due to blurry images, high occlusion by eyelids or eyelashes, and large deformation caused by off-angle iris. Thus, an additional pre-check at image acquisition to remove low-quality images, or advanced algorithms to enhance image quality, would be helpful for further improvement.

C. Verification: In the human verification experiments, our objective was to determine whether two images were from the same subject, based solely on the dissimilarity score. First, the impostor (non-match) distribution and the authentic (match) distribution of the results were analyzed. The comparisons between our proposed approach and the method of Shen and Flynn [17] were plotted for each of the three datasets. It was evident that our approach showed a significant improvement in discrimination over [17]. It is worth mentioning that there are sudden spikes (near x = 1) in the non-match distribution of our method. This is due to assigning the dissimilarity as one when two images under comparison cannot pass the pre-check.

Figure 3: The ROC curves of the verification results of our proposed approach and the method of Shen and Flynn [17].

Moreover, the Receiver Operating Characteristic (ROC) curve was used for further evaluation on the datasets (see Figure 3), along with 95% confidence intervals of the results on the dataset mixing all three datasets. Meanwhile, we calculated the Equal Error Rate (EER), which is defined as the common value at which the false acceptance rate equals the false rejection rate. In general, the smaller the EER, the more accurate the method. The results are summarized in Table I. In short, our approach reduced the EER over [17] by almost 51% on the in-house dataset, by 84% on ICE2005, and by 91% on CASIA-Iris-Interval.

TABLE I: The equal error rates of the verification experiments

Dataset    | Proposed Method | Shen and Flynn [17]
In-House   | 0.020           | 0.041
ICE2005    | 0.035           | 0.223
CASIA      | 0.0139          | 0.153
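For reference, the EER values in Table I can be computed from authentic and impostor score distributions with a simple threshold sweep, as in the following sketch (an assumed helper, not the paper's evaluation code).

```python
# Hedged sketch: estimate the EER from authentic (match) and impostor (non-match)
# dissimilarity scores by finding the threshold where FAR and FRR cross.
import numpy as np

def equal_error_rate(authentic_scores, impostor_scores):
    thresholds = np.sort(np.concatenate([authentic_scores, impostor_scores]))
    best_gap, eer = 1.0, 0.0
    for t in thresholds:
        frr = np.mean(authentic_scores > t)    # genuine pairs rejected
        far = np.mean(impostor_scores <= t)    # impostor pairs accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```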



Figure 4: The equal error rates of the verification experiments.

VI. CONCLUSION
We present a new approach for detecting and matching iris crypts for the human-in-the-loop iris biometric system. Our proposed approach produces promising results on all three tested datasets: the in-house dataset, ICE2005, and CASIA-Iris-Interval. Compared to the known visible-feature-based method, our approach improves iris recognition performance by at least 22% in rank-one hit rate in the context of human identification and by at least 51% in equal error rate in terms of subject verification. It increases the reliability of the human-in-the-loop iris biometric system. Incorporating a quality measure for images enrolled in the system would be beneficial, as it would allow evaluating whether the quality of each acquired image is good enough for visual feature matching. This approach, under the human-in-the-loop iris recognition framework, exhibits a promising application of the iris as a biometric trait in forensics. In addition, by using the Gabor filter we detect disease in the eye; after authentication, the disease is identified using a Support Vector Machine (SVM) and the patient details are displayed. Based on our observations and trial studies, our approach is robust with respect to certain common factors, such as interlacing or moderate blurring.

REFERENCES
[1]. J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21-30, Jan. 2004.
[2]. J. Daugman, "Probing the uniqueness and randomness of IrisCodes: Results from 200 billion iris pair comparisons," Proc. IEEE, vol. 94, no. 11, pp. 1927-1935, Nov. 2006.
[3]. Unique Identification Authority of India. [Online]. Available: http://uidai.gov.in, accessed Nov. 1, 2015.
[4]. K. R. Nobel, "The state of the art in algorithms, fast identification solutions and forensic applications," MorphoTrust USA, Billerica, MA, USA, Tech. Rep., Jan. 2013. [Online]. Available: http://www.planetbiometrics.com/article-details/i/1446/
[5]. P. E. Peterson et al., "Latent prints: A perspective on the state of the science," Forensic Sci. Commun., vol. 11, no. 4, pp. 1-9, 2009.
[6]. C. Champod, "Edmond Locard—Numerical standards and 'probable' identifications," J. Forensic Identificat., vol. 45, no. 2, pp. 136-163, 1995.
[7]. K. McGinn, S. Tarin, and K. W. Bowyer, "Identity verification using iris images: Performance of human examiners," in Proc. IEEE 6th Int. Conf. Biometrics, Theory, Appl., Syst. (BTAS), Sep./Oct. 2013, pp. 1-6.
[8]. H. Proenca, "Iris recognition: On the segmentation of degraded images acquired in the visible wavelength," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 8, pp. 1502-1516, Aug. 2010.
[9]. H. Proenca, S. Filipe, R. Santos, J. Oliveira, and L. A. Alexandre, "The UBIRIS.v2: A database of visible wavelength iris images captured on-the-move and at-a-distance," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 8, pp. 1529-1535, Aug. 2010.
[10]. Z. Sun, L. Wang, and T. Tan, "Ordinal feature selection for iris and palmprint recognition," IEEE Trans. Image Process., vol. 23, no. 9, pp. 3922-3934, Sep. 2014.
[11]. M. S. Sunder and A. Ross, "Iris image retrieval based on macro-features," in Proc. 20th Int. Conf. Pattern Recognit., 2010, pp. 1318-1321.
[12]. J. De Mira and J. Mayer, "Image feature extraction for application of biometric identification of iris—A morphological approach," in Proc. Brazilian Symp. Comput. Graph. Image Process., 2003, pp. 391-398.
[13]. F. Shen, "A visually interpretable iris recognition system with crypt features," Ph.D. dissertation, Dept. Comput. Sci. Eng., Univ. Notre Dame, Notre Dame, IN, USA, 2014.
[14]. F. Shen and P. J. Flynn, "Using crypts as iris minutiae," Proc. SPIE, vol. 8712, p. 87120B, May 2013.
[15]. F. Shen and P. J. Flynn, "Iris matching by crypts and anti-crypts," in Proc. IEEE Conf. Technol. Homeland Secur., Nov. 2012, pp. 208-213.
[16]. F. Shen and P. J. Flynn, "Are iris crypts useful in identity recognition?" in Proc. IEEE 6th Int. Conf. Biometrics, Theory, Appl., Syst., Sep./Oct. 2013, pp. 1-6.
[17]. F. Shen and P. J. Flynn, "Iris crypts: Multi-scale detection and shape-based matching," in Proc. IEEE Winter Conf. Appl. Comput. Vis., Mar. 2014, pp. 977-983.
[18]. J. Chen, C. W. Harvey, M. S. Alber, and D. Z. Chen, "A matching model based on earth mover's distance for tracking Myxococcus xanthus," in Proc. Med. Image Comput. Comput.-Assist. Intervent., 2014, pp. 113-120.
[19]. J. Chen, F. Shen, D. Z. Chen, and P. J. Flynn, "Iris recognition based on human-interpretable features," in Proc. IEEE Int. Conf. Identity, Secur. Behavior Anal. (ISBA), Mar. 2015, pp. 1-6.
[20]. P. J. Phillips, K. W. Bowyer, P. J. Flynn, X. Liu, and W. T. Scruggs, "The iris challenge evaluation 2005," in Proc. 2nd IEEE Int. Conf. Biometrics, Theory, Appl., Syst., Sep./Oct. 2008, pp. 1-8.
[21]. CASIA Iris Image Database. [Online]. Available: http://biometrics.idealtest.org/, accessed Nov. 14, 2015.
[22]. Y. Rubner, C. Tomasi, and L. J. Guibas, "A metric for distributions with applications to image databases," in Proc. IEEE 6th Int. Conf. Comput. Vis., Jan. 1998, pp. 59-66.
[23]. F. S. Hillier and G. J. Lieberman, Introduction to Operations Research, 7th ed. New York, NY, USA: McGraw-Hill, 2001.
[24]. N. Singh, D. Gandhi, and K. P. Singh, "Iris recognition using Canny edge detection and circular Hough transform," International Journal of Advances in Engineering & Technology, May 2011.
[25]. L. Ma, Y. Wang, and T. Tan, "Iris recognition using circular symmetric filter," National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, 2002.


