Feng Liu · Qijun Zhao · David Zhang

Advanced Fingerprint Recognition: From 3D Shape to Ridge Detail
Feng Liu College of Computer Science and Software Engineering Shenzhen University Shenzhen, China
Qijun Zhao College of Computer Science Sichuan University Chengdu, China
David Zhang School of Science and Engineering The Chinese University of Hong Kong Shenzhen, China
ISBN 978-981-15-4127-8    ISBN 978-981-15-4128-5 (eBook)
https://doi.org/10.1007/978-981-15-4128-5
© Springer Nature Singapore Pte Ltd. 2020
Preface
As one of the most popular biometric traits, fingerprints have been explored for centuries. Automated Fingerprint Recognition Systems (AFRSs) have also been widely used in forensic and civil applications, including border defense, attendance control, and other identity services. In general, fingerprints are identified by the features extracted from them, and fingerprint features are divided into different levels according to their scales and distinctiveness. Among these features, global ridge patterns (e.g., ridge orientation fields, a kind of level-1 feature) and local ridge singularities (e.g., minutiae, a kind of level-2 feature) are the main features utilized by traditional AFRSs. Despite the impressive performance of these features in automated fingerprint recognition, other features characterizing individual fingers have recently attracted increasing attention from both researchers and practitioners, thanks to the advancement of 3D and high-resolution imaging technology. The resulting advanced fingerprints open up new avenues for building more robust and secure AFRSs. It is therefore important to further investigate the new features made available by advanced fingerprint images. This book introduces progress on advanced fingerprint recognition techniques, focusing mainly on 3D fingerprint recognition and high-resolution fingerprint recognition technologies. For 3D fingerprint recognition, methods of 3D fingerprint feature extraction, systems, and applications are all covered. For high-resolution fingerprint recognition, we recommend a reference resolution for high-resolution fingerprint recognition systems, and present extraction and matching methods based on high-resolution fingerprint features, namely level-3 features, as well as fusion strategies with traditional fingerprint features. As a whole, the book covers not only feature extraction and matching algorithms but also imaging and real application systems. By reading this book, readers will gain a comprehensive understanding of current advanced fingerprint recognition technologies. Our team has been working on biometrics for a long time. We appreciate the related grant support from the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS), the Shenzhen Research Institute of Big Data (SRIBD), and the National Natural Science Foundation of China (NSFC). The authors are also grateful
to Jiaxin Li, Kunjian Li, Liying Lin, Guojie Liu, Haozhe Liu, Caixiong Shen, Wenxue Yuan, and Wentian Zhang for their helpful assistance in proofreading and in the preparation of some tables/figures in this book.

Shenzhen, China    Feng Liu
Chengdu, China    Qijun Zhao
Shenzhen, China    David Zhang
January 2020
Contents

1  Introduction  1
   1.1  Why Advanced Fingerprints?  1
   1.2  3D Fingerprints  3
   1.3  High Resolution Fingerprints  4
   1.4  Outline of This Book  5
   References  6

2  Overview: 3D Fingerprints  9
   2.1  Introduction  9
   2.2  3D Fingerprint Features  10
   2.3  3D Fingerprint Recognition: Systems and Applications  11
   2.4  Summary  12
   References  13

3  3D Fingerprint Generation  15
   3.1  Introduction  15
   3.2  Feature-Based 3D Reconstruction Model  18
   3.3  Criteria for Close-Range Objects Reconstruction  19
        3.3.1  Selection of Representative Features for Correspondence Establishment  21
        3.3.2  Influence Analysis of Correspondences to Reconstruction Accuracy  22
   3.4  Human Finger 3D Reconstruction  23
        3.4.1  Effectiveness Validation of the Proposed Reconstruction Model  26
        3.4.2  Criteria Verification  28
   3.5  Summary  30
   References  30

4  3D Fingerprint Authentication  33
   4.1  Introduction  33
   4.2  Touchless Fingerprint Image Preprocessing  36
   4.3  Dip-Based Fingerprint Feature Extraction and Matching  40
        4.3.1  Feature Extraction  40
        4.3.2  View Selection  46
        4.3.3  Feature Matching  47
   4.4  Experimental Results and Performance Analysis  48
        4.4.1  Database and Remarks  48
        4.4.2  Recognition Performance Using Dip-Based Feature  49
        4.4.3  Effectiveness Validation of the Proposed View Selection Scheme  49
        4.4.4  Comparison of Recognition Performance Based on Different Fingerprint Features  50
   4.5  Summary  52
   References  56

5  Applications of 3D Fingerprints  59
   5.1  Introduction  59
   5.2  3D Finger Point Cloud Data Acquisition and Preprocessing  61
   5.3  Investigation of 3D Fingerprint Features  63
   5.4  Applications and Experimental Results  65
        5.4.1  3D Fingerprint Applications and the Experimental Database  65
        5.4.2  Case 1: Coarse Alignment by 3D Finger Point Cloud Data  65
        5.4.3  Case 2: Core Point Detection Under the Guidance of Distinctive 3D Shape Ridge Feature  69
        5.4.4  Case 3: User Authentication Using Distinctive 3D Shape Ridge Feature  70
        5.4.5  Comparisons with the State-of-the-Art 3D Fingerprint Matching Performance  72
   5.5  Summary  73
   References  73

6  Overview: High Resolution Fingerprints  77
   6.1  Introduction  77
   6.2  High Resolution Fingerprint Features  78
   6.3  High Resolution Fingerprint Recognition  81
   6.4  Benchmarks of High Resolution Fingerprint Images  83
   6.5  Recent Development Based on Deep Learning  84
   6.6  Chapters Overview  84
   6.7  Summary  85
   References  85

7  High Resolution Fingerprint Acquisition  89
   7.1  Introduction  89
   7.2  Collecting Multi-resolution Fingerprint Images  91
        7.2.1  Acquisition Device  91
        7.2.2  Fingerprint Samples  92
        7.2.3  Implementation of Multi-resolution  93
   7.3  Selecting Resolution Criteria Using Minutiae and Pores  95
   7.4  Experiments and Analysis  99
        7.4.1  Selecting the Minimum Resolution Required for Pore Extraction  99
        7.4.2  Selecting the Resolution Based on the Established Criteria  100
        7.4.3  Analysis of Ridge Width  100
        7.4.4  Fingerprint Recognition Accuracy  102
   7.5  Summary  103
   References  104

8  Fingerprint Pore Extraction  107
   8.1  Introduction  108
   8.2  Model-Based Pore Extraction  110
        8.2.1  Review of Previous Pore Models and Extraction Methods  110
        8.2.2  Dynamic Anisotropic Pore Model (DAPM)  111
        8.2.3  Adaptive Pore Extraction  113
        8.2.4  Experiments and Performance Evaluation  120
   8.3  Learning-Based Pore Extraction  128
        8.3.1  Methodology Statement  128
        8.3.2  Experimental Results and Analysis  132
   8.4  Summary  134
   References  136

9  Pore-Based Partial Fingerprint Alignment  139
   9.1  Introduction  139
        9.1.1  High Resolution Partial Fingerprint  141
        9.1.2  Fingerprint Alignment  142
        9.1.3  Partial Fingerprint Alignment Based on Pores  143
   9.2  Feature Extraction  144
        9.2.1  Ridge and Valley Extraction  144
        9.2.2  Pore Extraction  145
        9.2.3  Pore–Valley Descriptors  146
   9.3  PVD-Based Partial Fingerprint Alignment  148
   9.4  Experiments  151
        9.4.1  The High Resolution Partial Fingerprint Image Dataset  152
        9.4.2  The Neighborhood Size and Sampling Rate of Directions  153
        9.4.3  Corresponding Feature Point Detection  154
        9.4.4  Alignment Transformation Estimation  156
        9.4.5  Partial Fingerprint Recognition  158
        9.4.6  Computational Complexity Analysis  161
   9.5  Summary  162
   References  162

10  Fingerprint Pore Matching  165
    10.1  Introduction  165
    10.2  Coarse Pore Matching  167
         10.2.1  Difference Calculation by TD-Sparse Method  169
         10.2.2  Coarse Pore Correspondence Establishment  172
    10.3  Fine Pore Matching  174
    10.4  Experimental Results and Analysis  178
         10.4.1  Databases  178
         10.4.2  Robustness to the Instability of Extracted Pores  178
         10.4.3  Effectiveness in Pore Correspondence Establishment  180
         10.4.4  Fingerprint Recognition Performance  180
         10.4.5  TDSWR Applied in Fingerprint Minutiae Matching  184
    10.5  Summary  185
    References  186

11  Quality Assessment of High Resolution Fingerprints  189
    11.1  Introduction  189
    11.2  Quality Assessment Methods  190
    11.3  Performance Analysis and Comparison  192
         11.3.1  Selected Quality Indexes  192
         11.3.2  Database and Protocol  192
         11.3.3  Experimental Results and Analysis  192
         11.3.4  Discussion  196
    11.4  Summary  197
    References  197

12  Fusion of Extended Fingerprint Features  199
    12.1  Introduction  199
    12.2  Fusion Approaches  200
         12.2.1  Parallel Fusion  200
         12.2.2  Hierarchical Fusion  201
    12.3  Fusion Experiments  202
         12.3.1  Datasets and Algorithms  202
         12.3.2  Fusion Results  203
         12.3.3  Analysis  203
    12.4  Summary  204
    References  206

13  Book Review and Future Work  207
    13.1  Book Recapitulation  207
    13.2  Challenges and Future Work  210

Index  213
Chapter 1
Introduction
Abstract Fingerprints are among the most widely used biometric modalities. In forensics, fingerprints serve as important legal evidence. In civilian applications, fingerprints are employed for access and attendance control as well as other identity services. It is well known that fingerprint features can be divided into different levels according to their scales and distinctiveness. However, limited by earlier imaging technology, some fingerprint features could not be extracted, which constrained the development of fingerprint recognition systems. Thanks to the advancement of 3D and high resolution imaging technology, advanced fingerprints are now available to further extend fingerprint applications. This chapter introduces recent progress in the acquisition and utilization of 3D and high resolution fingerprint features.

Keywords Advanced fingerprints · Level 3 features · Level 0 features · 3D fingerprints · High resolution fingerprints
1.1 Why Advanced Fingerprints?
A fingerprint is defined as an impression of the friction ridges of all or any part of the fingers [1]. Figure 1.1 gives an example of the fingerprints we commonly refer to. The individuality of fingerprints has been studied theoretically [2–4], and the results demonstrate their uniqueness: it would be virtually impossible for two fingerprints (even two fingerprints of identical twins) to be exactly alike. In real-world situations, however, the same fingerprint scanned twice may look different due to distortion and skin conditions. Thus, salient features, rather than the raw pixel intensity values of the fingerprint image, are usually used to discriminate between identities. Fingerprint features have been comprehensively studied over the past decades. In general, features on fingerprints are categorized at different scales and fall into three levels [5]. Features on the first level are defined by global ridge patterns, such as the fingerprint classes shown in Fig. 1.2 (left loop, right loop, whorl, and arch). Singular points (cores and deltas), the external fingerprint shape, and the orientation and frequency maps of fingerprint ridges also belong to this category. Level 2 features mainly refer to minutiae (e.g., ridge endings and
Fig. 1.1 An example of fingerprints
Fig. 1.2 Example fingerprint classes. (a) Left loop. (b) Right loop. (c) Whorl. (d) Arch
ridge bifurcations). They are stable and robust to varying fingerprint conditions, and have thus become the basis of most existing Automatic Fingerprint Recognition Systems (AFRSs). Level 3 features are defined as intra-ridge details. The finger sweat pore is one of the most important level 3 features; others include ridge width, curvature, edge contours, and so on. However, extracting such features, like sweat pores, requires high-resolution (e.g., 1000 dpi) fingerprint images of good quality. Since most existing AFRSs are equipped with fingerprint sensors of ~500 dpi, level-3 features are usually ignored by them. Meanwhile, researchers have found it difficult to robustly extract the above-mentioned three levels of fingerprint features from very low resolution fingerprint images (~50 dpi, e.g., touchless fingerprint images captured by a webcam) [6]. They therefore designated a coarser level of fingerprint features: level zero features. Features at this level consist of broken line-like patterns representing creases and ridges of varying clarity, which can be extracted and used for human identification. The advent of AFRSs was driven by the increasing workload and the time-consuming task of manually comparing fingerprints by experts. AFRSs have been studied in depth over the past four decades and are now widely used in applications such as attendance and access control systems. In early studies, almost all AFRSs were minutiae-based systems, since minutiae are distinctive and stable fingerprint features and can be robustly extracted from fingerprint images at a resolution of ~500 dpi [5, 7, 8]. However, with the development of fingerprint imaging techniques, higher performance and new requirements (e.g., hygiene and
user-friendliness) can be achieved by improving earlier minutiae-based AFRSs. For instance, by using high resolution fingerprint imaging techniques, high quality fingerprint images can be obtained, which permits the extraction of additional fingerprint features (e.g., level 3 features). Such features are found to be helpful in enabling high-confidence and more accurate matching, especially when partial fingerprints with insufficient minutiae are used for authentication [9]. Another example is the development of touchless 3D AFRSs. Touchless fingerprint imaging techniques have the advantages of being insensitive to skin deformation and skin conditions, avoiding the distortions and inconsistencies caused by projecting an irregular 3D finger onto a 2D flat image plane, being secure against latent fingerprints, being practically maintenance free, and being hygienic and robust to fake attacks. Multi-view imaging further provides a way to generate the 3D shape of the human finger. These merits permit newly developed AFRSs to meet more requirements of civilian applications and to provide more reliable recognition by using 3D information. Thus, to build more robust and secure AFRSs, it is necessary to study advanced fingerprints for more useful features.
1.2 3D Fingerprints
Typical AFRSs are basically equipped with a touch-based optical or capacitive sensor to capture fingerprint images. This touch-based capture forces the 3D human finger to be flattened onto a 2D plane, which introduces skin deformation and distortion and, more seriously, loses the 3D fingerprint information, thus limiting system performance. 3D fingerprints permit newly developed AFRSs to meet more requirements of civilian applications and to provide more reliable recognition by using 3D information. With the rapid development of 3D reconstruction technologies and 3D imaging sensors, 3D fingerprint recognition techniques have arisen to solve the problems existing in current touch-based 2D fingerprint recognition systems and to further improve system performance [10, 11]. Usually, these techniques capture fingerprint images at a distance and simultaneously provide the 3D finger shape. There are three kinds of 3D imaging techniques: multi-view reconstruction [12–14], laser scanning [15–17], and structured light scanning [18–20]. Among them, there are few laser-scanning-based 3D fingerprint acquisition devices due to the cost and scanning time. The multi-view reconstruction technique has the advantage of low cost but the disadvantage of low accuracy, while structured light imaging has high accuracy as well as a moderate cost. Figure 1.3 gives an example of 3D fingerprint images captured using structured light imaging. However, structured light imaging also takes much time to collect 3D data and suffers from an instability problem in that one needs to keep still while the structured light patterns are projected onto the finger. Thus, the study of current 3D fingerprint recognition techniques includes not only 3D fingerprint feature extraction and matching methods but also reconstruction techniques based on multi-view 2D fingerprint images.
Fig. 1.3 A 3D fingerprint. (a) 3D point cloud data of a finger. (b) A 2D fingerprint image
Fig. 1.4 Traditional and high resolution fingerprint images
1.3 High Resolution Fingerprints
With the advent of high resolution fingerprint imaging technology, more fingerprint features (e.g., level 3 features) can be extracted for more accurate fingerprint recognition. As shown in Fig. 1.4, only level-2 features (e.g., minutiae) can be seen in traditional fingerprint images with a resolution of about 500 dpi. As the
imaging resolution increases, for instance to about 1200 dpi as shown in Fig. 1.4, level-3 features (e.g., closed pores, open pores) become available. Such features are very useful for more confident and accurate recognition: as shown in Fig. 1.4, different fingerprints with similar minutiae can be distinguished by their pores. Thus, it is necessary to investigate high resolution fingerprint recognition techniques to build high performance AFRSs [21–32].
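To see why level-3 features demand such resolutions, a back-of-the-envelope calculation helps. The sketch below is a rough illustration only, assuming a typical sweat pore diameter on the order of 100 μm and a ridge period on the order of 500 μm (values commonly cited in the fingerprint literature, not taken from this book): it converts a physical feature size to its pixel footprint at a given scanner resolution. At ~500 dpi a pore spans only about 2 pixels, too few to segment reliably, while at 1200 dpi it spans roughly 5 pixels.

```python
# Rough illustration: pixel footprint of a fingerprint feature at a given
# scanner resolution. The 100-micron pore diameter and 500-micron ridge
# period are assumed typical values, not figures from this book.
MICRONS_PER_INCH = 25400.0

def feature_pixels(feature_um: float, dpi: int) -> float:
    """Number of pixels spanned by a feature of the given physical size."""
    return feature_um / MICRONS_PER_INCH * dpi

for dpi in (500, 1000, 1200):
    print(f"{dpi:>5} dpi: pore ~ {feature_pixels(100, dpi):.1f} px, "
          f"ridge period ~ {feature_pixels(500, dpi):.1f} px")
```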
1.4 Outline of This Book
To report the progress of advanced fingerprint recognition techniques, this book presents work on 3D fingerprint recognition and high resolution fingerprint recognition technologies, labeled Part I and Part II in Fig. 1.5, respectively. The details of these technologies are described in the subsequent chapters, and the outline of each chapter is summarized as follows. Chapter 2 gives an overview of 3D fingerprint recognition, including 3D fingerprint features, systems, and applications. Detailed technologies are given in Chaps. 3, 4 and 5. In Chap. 3, we introduce a 3D fingerprint generation approach by defining criteria specific to fingerprints. Chapter 4 presents a fingerprint authentication system using new touchless fingerprint features extracted from multi-view fingerprint images, which are the intermediate form of 3D fingerprints. Real applications of 3D fingerprints for recognition are given in Chap. 5. Chapter 6 summarizes the research on high resolution fingerprint recognition. Detailed technologies are covered in Chaps. 7, 8, 9, 10, 11 and 12. In Chap. 7, we thoroughly investigate the optimal resolution for level-3 fingerprint feature extraction and finally recommend a reference resolution for high resolution
Fig. 1.5 Advanced fingerprints: from 3D shape to ridge detail
fingerprint recognition. Chapter 8 introduces an effective fingerprint pore model for pore extraction. A pore-based partial fingerprint alignment method is proposed in Chap. 9. In Chap. 10, we present an accurate fingerprint pore matching method which largely improves minutiae-based fingerprint recognition accuracy. Chapter 11 describes quality assessment methods for high resolution fingerprints. Fusion strategies based on high resolution fingerprint features are given in Chap. 12. Chapter 13 summarizes the book and indicates future directions of fingerprint recognition.
References
1. Scientific Working Group on Friction Ridge Analysis, Study and Technology, Peer reviewed glossary of the scientific working group on friction ridge analysis, study and technology (SWGFAST), available at http://www.swgfast.org/documents/glossary/090508_Glossary_2.0.pdf
2. Pankanti, S., Prabhakar, S., Jain, A.K.: On the individuality of fingerprints. IEEE Trans. Pattern Anal. Mach. Intell. 24(8), 1010–1025 (2002)
3. Chen, J., Moon, Y.S.: The statistical modelling of fingerprint minutiae distribution with implications for fingerprint individuality studies. 2008 IEEE Conference on Computer Vision and Pattern Recognition. IEEE (2008)
4. Chen, Y., Jain, A.K.: Beyond minutiae: A fingerprint individuality model with pattern, ridge and pore features. International Conference on Biometrics. Springer, Berlin, Heidelberg (2009)
5. Ashbaugh, D.R.: Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology. CRC Press, Boca Raton (1999)
6. Kumar, A., Zhou, Y.: Contactless fingerprint identification using level zero features. CVPR 2011 Workshops. IEEE (2011)
7. Maltoni, D., et al.: Handbook of Fingerprint Recognition. Springer, New York (2009)
8. Ratha, N., Bolle, R. (eds.): Automatic Fingerprint Recognition Systems. Springer, New York (2003)
9. Kryszczuk, K.M., Morier, P., Drygajlo, A.: Study of the distinctiveness of level 2 and level 3 features in fragmentary fingerprint comparison. International Workshop on Biometric Authentication. Springer, Berlin, Heidelberg (2004)
10. Kumar, A.: Contactless 3D Fingerprint Identification. Springer, Cham (2018)
11. Lee, C., Lee, S., Kim, J.: A study of touchless fingerprint recognition system. Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR). Springer, Berlin, Heidelberg (2006)
12. Parziale, G., Diaz-Santana, E., Hauke, R.: The Surround Imager™: A multi-camera touchless device to acquire 3D rolled-equivalent fingerprints. International Conference on Biometrics. Springer, Berlin, Heidelberg (2006)
13. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, New York (2003)
14. Hernandez, C., Vogiatzis, G., Cipolla, R.: Multiview photometric stereo. IEEE Trans. Pattern Anal. Mach. Intell. 30(3), 548–554 (2008)
15. Blais, F., Rioux, M., Beraldin, J.A.: Practical considerations for a design of a high precision 3-D laser scanner system. In: Optomechanical and Electro-Optical Design of Industrial Systems, vol. 959. International Society for Optics and Photonics (1988)
16. Rusinkiewicz, S., Hall-Holt, O., Levoy, M.: Real-time 3D model acquisition. ACM Trans. Graph. (TOG) 21(3), 438–446 (2002)
17. Bradley, B.D., Chan, A.D.C., Hayes, M.J.D.: A simple, low cost, 3D scanning system using the laser light-sectioning method. 2008 IEEE Instrumentation and Measurement Technology Conference. IEEE (2008)
18. Wang, Y., Hassebrook, L.G., Lau, D.L.: Data acquisition and processing of 3-D fingerprints. IEEE Trans. Inf. Forensics Secur. 5(4), 750–760 (2010)
19. Stockman, G.C., et al.: Sensing and recognition of rigid objects using structured light. IEEE Control. Syst. Mag. 8(3), 14–22 (1988)
20. Hu, G., Stockman, G.: 3-D surface solution using structured light and constraint propagation. IEEE Trans. Pattern Anal. Mach. Intell. 11(4), 390–402 (1989)
21. Roddy, A.R., Stosz, J.D.: Fingerprint features-statistical analysis and system performance estimates. Proc. IEEE. 85(9), 1390–1421 (1997)
22. Zhao, Q., et al.: Adaptive pore model for fingerprint pore extraction. 2008 19th International Conference on Pattern Recognition. IEEE (2008)
23. Zhao, Q., et al.: Direct pore matching for fingerprint recognition. International Conference on Biometrics. Springer, Berlin, Heidelberg (2009)
24. Zhao, Q., et al.: High resolution partial fingerprint alignment using pore–valley descriptors. Pattern Recogn. 43(3), 1050–1061 (2010)
25. CDEFFS, Data format for the interchange of extended fingerprint and palmprint features, Version 0.4 (2009)
26. Analysis of Level 3 Features at High Resolutions (Phase II). International Biometric Group (2008)
27. Jain, A., Yi, C., Demirkus, M.: Pores and ridges: Fingerprint matching using level 3 features. 18th International Conference on Pattern Recognition (ICPR'06), vol. 4. IEEE (2006)
28. Jain, A.K., Yi, C., Demirkus, M.: Pores and ridges: High-resolution fingerprint matching using level 3 features. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 15–27 (2007)
29. Kryszczuk, K., Drygajlo, A., Morier, P.: Extraction of level 2 and level 3 features for fragmentary fingerprint comparison. EPFL. 3, 45–47 (2008)
30. Ray, M., Meenen, P., Adhami, R.: A novel approach to fingerprint pore extraction. In: Proceedings of the Thirty-Seventh Southeastern Symposium on System Theory, 2005. SSST'05. IEEE (2005)
31. Parsons, N.R., et al.: Rotationally invariant statistics for examining the evidence from the pores in fingerprints. Law Probab. Risk. 7(1), 1–14 (2008)
32. Zhao, Q., et al.: Adaptive fingerprint pore modeling and extraction. Pattern Recogn. 43(8), 2833–2844 (2010)
Chapter 2
Overview: 3D Fingerprints
Abstract Traditional fingerprints are captured in a touch-based way, which results in partial or degraded images. Replacing touch-based fingerprints with touchless ones can promote the development of touchless 3D AFRSs with high security and accuracy. This chapter reviews the technologies involved in 3D AFRSs, including 3D fingerprint generation, 3D fingerprint features, and 3D fingerprint recognition systems and applications.

Keywords Touchless 3D fingerprints · Level 0 features · 3D fingerprints · Touchless multi-view fingerprints
2.1 Introduction
With the rapid development of imaging technology, fingerprint-based identification rarely uses inked impressions any more but relies on electronic imaging. Early electronic imaging acquired fingerprints in a touch-based way. However, touch-based fingerprints are easily affected by the condition of the finger (e.g., dirty, wet, or dry, as shown in Fig. 2.1) and, moreover, lose the 3D information of the human finger. In [1], the author summarized the advantages of touchless 3D fingerprint technologies compared with contact-based and touchless 2D technologies. Thus, touchless 3D fingerprint recognition technology has come into researchers' view for more secure and robust recognition. Developing touchless 3D AFRSs will further expand the application of fingerprint identification technologies. However, building 3D AFRSs introduces challenges relating to accessibility, ergonomics, anthropometrics, and user acceptability [1]. The way to generate 3D fingerprint images, how to define and extract 3D fingerprint features, and how to make full use of the 3D information for identification all need to be studied. Some research on 3D AFRSs has been carried out in recent years [1–17]. References [2–12] mainly involve technologies or devices to obtain 3D fingerprints. Since there are different ways to generate 3D fingerprints, the quality of the resulting 3D fingerprint images differs, which in turn influences the methodologies used to extract 3D fingerprint features. Among 3D fingerprint features, minutiae (2D or 3D) and the 3D contour profile are the most
Fig. 2.1 Touch-based fingerprint images with different finger conditions. (a) Dirty, (b) Dry, (c) Wet
commonly used ones [1, 3, 13–17]. Since errors are introduced when 3D fingerprint images are generated or captured by different technologies, the performance of matching such features is degraded. To build more robust and secure 3D fingerprint recognition systems, it is very important to fuse 3D fingerprint features with 2D features comprehensively. From the results of current research, there is no doubt that 3D fingerprint recognition performs better than 2D fingerprint recognition. However, the performance of 3D fingerprint recognition is highly dependent on the accuracy of the 3D fingerprint images. It is therefore necessary to study the most effective methods specific to the different 3D fingerprint images generated by different technologies. This chapter summarizes and analyzes frequently used 3D fingerprint features, 3D fingerprint systems, and the applications of 3D fingerprint information specific to the different 3D fingerprint images generated by different technologies. The following three chapters then give detailed methods relevant to 3D fingerprint generation and 3D fingerprint authentication systems, as well as the definition and extraction of 3D fingerprint features and their applications.
2.2 3D Fingerprint Features
As we know, there are three levels of fingerprint features [18]. All three levels are related to fingerprint ridges and are defined by analyzing 2D fingerprint images. Compared with 2D fingerprint images, 3D fingerprints add a dimension of depth besides length, width, and texture intensity, as shown in Fig. 2.2 [13]. Thus, traditional 2D fingerprint features, especially minutiae, can be directly extended to 3D space to form 3D fingerprint features. References [2, 3, 15, 16] introduced minutiae-based features including 2D minutiae from unwrapped 3D fingerprints, 3D minutiae, and 3D minutiae tetrahedrons. Minutiae-based features are therefore among the most frequently used 3D fingerprint features. Owing to the wide application of minutiae in the fingerprint recognition domain, minutiae-based features will continue to be valued in the 3D fingerprint recognition domain. Other popular 3D fingerprint features are related to 3D shape, such as the curvature features described in [13]. Those features rely entirely on the depth information of 3D fingerprint images, as the skeleton lines in Fig. 2.2 show. From the perspective
Fig. 2.2 Fingerprint image in 3D space [13]
of the observations and experimental analysis given in [13], the distinctiveness of those features is too low to identify a person on its own; it is considered to be lower even than that of the level-1 features defined in [18]. Thus, we define those features as level zero features in this book. Level zero features are very useful for assisting matching, for example in indexing or coarse alignment [8]. Moreover, they can be taken as additional features to enable more accurate matching by fusing them with other features. It should be noted that the accuracy of 3D fingerprint images differs across 3D fingerprint generation and acquisition processes. Level zero features are more reliable when the quality of the 3D fingerprints is high; otherwise, matching performance will be significantly degraded by unreliable 3D fingerprint features.
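As an illustration of how such low-distinctiveness shape features can still assist matching, the sketch below shows one simple form of coarse alignment: centering two 3D finger point clouds and aligning their principal axes with a PCA-style decomposition before any fine, feature-based registration. This is a generic technique offered as an example (and is subject to the 180-degree axis ambiguities inherent to PCA), not the specific alignment algorithm of [8].

```python
import numpy as np

def coarse_align(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Coarsely align point cloud src (N x 3) to dst (M x 3) by matching
    centroids and principal axes; returns the transformed source cloud."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    # Principal axes of each cloud from an SVD of the centered points.
    _, _, Vs = np.linalg.svd(src_c, full_matrices=False)
    _, _, Vd = np.linalg.svd(dst_c, full_matrices=False)
    R = Vd.T @ Vs                 # rotate source axes onto target axes
    if np.linalg.det(R) < 0:      # keep a proper rotation (no reflection)
        Vd[-1] *= -1
        R = Vd.T @ Vs
    return src_c @ R.T + dst.mean(0)
```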
2.3 3D Fingerprint Recognition: Systems and Applications
Although there are three kinds of commonly used 3D fingerprint generation and acquisition technologies, i.e., reconstruction [1, 2, 5, 7, 8, 11, 14–17], structured light illumination [3, 4, 9, 13], and ultrasound scanning [12, 19, 20], current 3D fingerprint recognition systems are almost all based on reconstruction or structured light illumination. The reasons are manifold. First, the reconstruction technique has the advantage of low cost, which is very important for system building. Second, it is very hard to extract features from ultrasound-based 3D fingerprint images or to match them directly for identification, even though the image resolution is very high; this kind of 3D fingerprint acquisition technique has therefore gradually been abandoned. The structured light imaging technique is somewhat time-consuming for collecting 3D data,
and it suffers from an instability problem in that one needs to keep still while the structured light patterns are projected onto the finger. However, it has the advantages of high accuracy and moderate cost. Commercial systems based on reconstruction [5] and structured light illumination [6] are available on the market. To build a reliable and accurate 3D fingerprint recognition system based on the reconstruction technique, problems including accurate 3D fingerprint reconstruction, 3D fingerprint feature extraction, and fusion with traditional 2D fingerprints should be solved. The first and foremost problem is 3D fingerprint reconstruction, since the reconstruction accuracy directly influences the accuracy of 3D fingerprint feature extraction and matching, and thus system performance. In fact, reconstruction accuracy has rarely been analyzed, except in [7], due to the lack of ground truth for 3D fingerprint data. This book provides criteria for multi-view 3D fingerprint reconstruction in Chap. 3. Regarding the feature extraction and matching methods in those systems [1, 2, 5, 7, 8, 11, 14–17], we found that minutiae-based matching achieved better performance when only one feature was used. Curvature features should be fused with 2D fingerprint features to reach better system performance. Also, for reconstruction-based 3D fingerprint recognition systems, high recognition accuracy can be achieved if 2D fingerprint features, such as the Distal Interphalangeal Crease (DIP), are used appropriately. Chapter 4 of this book therefore introduces an authentication system with very high recognition accuracy based on multi-view fingerprint images, which are the intermediate form of 3D fingerprints. Since the structured light illumination technique images 3D fingerprints in hardware, the quality of the 3D fingerprints is guaranteed; it is the most suitable way to develop 3D fingerprint recognition systems. However, earlier works based on this kind of 3D fingerprints focused on unwrapping the 3D fingerprints into 2D flattened fingerprints of larger size so as to achieve high accuracy [3, 4]. True 3D fingerprint images are expected to include depth information corresponding to the fingerprint ridges, or at least the shape of the 3D finger surface [1]. We discuss the recognition performance and applications of such 3D fingerprints in Chap. 5.
2.4 Summary
In summary, this chapter has elaborated on the motivation for and significance of developing touchless 3D fingerprint technology. The methods to generate 3D fingerprints, the definition and extraction of 3D fingerprint features, and the applications and systems based on 3D fingerprint images were introduced in detail. Example technologies are given in the next chapters. It should be noted that the 3D fingerprints discussed here refer to fingerprint images with both surface texture and 3D finger shape. True 3D fingerprint images are expected to capture the depth information of fingerprint ridges and valleys. Exploring true 3D fingerprint techniques will further improve fingerprint-based recognition accuracy and anti-spoofing ability.
References
1. Kumar, A.: Contactless 3D Fingerprint Identification. Springer, Cham (2018)
2. Parziale, G., Diaz-Santana, E., Hauke, R.: The Surround Imager™: A multi-camera touchless device to acquire 3D rolled-equivalent fingerprints. International Conference on Biometrics. Springer, Berlin/Heidelberg (2006)
3. Wang, Y., et al.: Data acquisition and quality analysis of 3-dimensional fingerprints. 2009 First IEEE International Conference on Biometrics, Identity and Security (BIdS). IEEE (2009)
4. Wang, Y., Lau, D.L., Hassebrook, L.G.: Fit-sphere unwrapping and performance analysis of 3D fingerprints. Appl. Opt. 49(4), 592–600 (2010)
5. Biometrische Technologie Made in Switzerland. www.tbs-biometrics.com/en/3denrollment/
6. The 3D Fingerprinting Company. FlashScan3D. www.FlashScan3D.com/
7. Liu, F., Zhang, D.: 3D fingerprint reconstruction system using feature correspondences and prior estimated finger model. Pattern Recogn. 47(1), 178–193 (2014)
8. Zhang, D., Lu, G.: 3D Biometrics. Springer, New York (2013)
9. Troy, M., et al.: Non-contact 3D fingerprint scanner using structured light illumination. In: Emerging Digital Micromirror Device Based Systems and Applications III, vol. 7932. International Society for Optics and Photonics (2011)
10. Labati, R.D., et al.: Touchless fingerprint biometrics: a survey on 2D and 3D technologies. J. Internet Technol. 15(3), 325–332 (2014)
11. Yi, C., et al.: 3D touchless fingerprints: Compatibility with legacy rolled images. 2006 Biometrics Symposium: Special Session on Research at the Biometric Consortium Conference. IEEE (2006)
12. Maev, R.G., et al.: High resolution ultrasonic method for 3D fingerprint representation in biometrics. In: Acoustical Imaging, pp. 279–285. Springer, Dordrecht (2008)
13. Liu, F., Zhang, D., Shen, L.: Study on novel curvature features for 3D fingerprint recognition. Neurocomputing. 168, 599–608 (2015)
14. Lin, C., Kumar, A.: Contactless and partial 3D fingerprint recognition using multi-view deep representation. Pattern Recogn. 83, 314–327 (2018)
15. Kumar, A., Kwong, C.: Towards contactless, low-cost and accurate 3D fingerprint identification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2013)
16. Lin, C., Kumar, A.: Tetrahedron based fast 3D fingerprint identification using colored LEDs illumination. IEEE Trans. Pattern Anal. Mach. Intell. 40(12), 3022–3033 (2017)
17. Zhou, W., et al.: A benchmark 3D fingerprint database. 2014 11th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD). IEEE (2014)
18. Jain, A.K., Bolle, R., Pankanti, S. (eds.): Biometrics: Personal Identification in Networked Society, vol. 479. Springer, New York (2006)
19. Savoia, A., et al.: Design and fabrication of a cMUT probe for ultrasound imaging of fingerprints. 2010 IEEE International Ultrasonics Symposium. IEEE (2010)
20. Lamberti, N., et al.: A high frequency cMUT probe for ultrasound imaging of fingerprints. Sens. Actuators A Phys. 172(2), 561–569 (2011)
Chapter 3
3D Fingerprint Generation
Abstract This chapter addresses the problem of building a feature-based 3D reconstruction model for close-range objects. Since it is almost impossible for algorithms to find pixel-to-pixel correspondences from 2D images when the object is imaged at close range, the selection of feature correspondences, as well as their number and distribution, plays an important role in reconstruction accuracy. Features on representative objects are therefore analyzed and discussed. The impact of the number and distribution of feature correspondences is analyzed by reconstructing an object with a standard cylindrical shape following the reconstruction model introduced in this chapter. Three criteria are then set to guide the selection of feature correspondences for more accurate 3D reconstruction. These criteria are finally applied to the human finger, since it is a typical close-range object and feature correspondences of different numbers and distributions can be established automatically from its 2D fingerprints. The effectiveness of the criteria is demonstrated by comparing the accuracy of the finger shape reconstructed from different fingerprint feature correspondences with the corresponding 3D point cloud data obtained by the structured light illumination (SLI) technique, which is taken as the ground truth in this chapter.

Keywords 3D fingerprint reconstruction model · Representative fingerprint features · Reconstruction criteria
3.1 Introduction
The 3D geometric shape and appearance of objects offer attributes that are invariant to the changes introduced by the imaging process. These attributes can facilitate recognition and assist in various applications, including graphical animation, medical applications, and so forth. Thus, how to obtain 3D geometric models of real objects has attracted more and more attention from researchers and companies [1–18]. In computer vision and computer graphics, the process of capturing the shape and appearance of real objects is referred to as 3D reconstruction. Existing 3D reconstruction techniques are divided into two categories: active and passive modeling. Active modeling creates the 3D point cloud data of a geometric surface by
interfering with the reconstructed objects, either mechanically or radiometrically [1–6], while passive modeling uses only the information contained in images of the scene to generate the 3D information, namely image-based reconstruction [7–17]. Each of these two kinds of modeling has its own advantages and disadvantages. Active modeling reconstructs the 3D model of objects directly with devices and with high accuracy, but the devices used are costly and cumbersome [18]. Image-based reconstruction generates the 3D model of objects from their 2D plane images captured by cameras; achieving high reconstruction accuracy is challenging, but the capturing devices (cameras) are usually cheap and lightweight [19]. Considering cost and portability, and aiming to make breakthroughs in reconstruction accuracy, image-based reconstruction has been deeply investigated, as summarized in [20, 21]. As summarized in [20], there are mainly five kinds of image-based reconstruction methods: shape from shading [7–9], photometric stereo [14–16], stereopsis [10, 11], photogrammetry [22–24], and shape from video [12, 13]. Shape-from-shading approaches recover the shape of an object from the gradual variation of shading in the image, and only one 2D image is needed for depth calculation. Thus, they have the lowest equipment requirements, but at the price of accuracy and computational complexity [25]. Photometric stereo methods measure 3D coordinates from different images of the object's surface taken under multiple non-collinear light sources. This kind of method is an improved version of shape from shading; higher reconstruction accuracy is achieved thanks to the use of more light sources and images [20]. Stereopsis approaches calculate 3D depth from binocular disparity, and two different images captured at the same time are necessary for the depth computation. This kind of method provides better accuracy with less mathematical complexity, but the difficulties lie in establishing feature correspondences between the two images automatically and in making the essential equipment calibrations [26]. Photogrammetry approaches use the same methods as stereopsis to compute 3D coordinates and thus have similar merits and drawbacks, but they usually use more than two images and produce good results in certain types of applications; typically, they have been successfully applied to modeling archaeological and architectural objects [20]. Shape-from-video approaches relax the assumptions of all the previous methods, since a series of images can be extracted from a video, but the problem still lies in establishing correspondences from 2D plane images. This kind of method is usually used for reconstructing terrain, natural targets, and buildings [21]. Among all of these methods, photogrammetry approaches are classical and well established; they have been around since nearly the same time as the discovery of photography itself [27]. Photogrammetrists are usually interested in building detailed and accurate 3D models from images, whereas in the field of computer vision, work is being done on automating the reconstruction problem and implementing an intelligent human-like system capable of extracting relevant information from image data [28]. Thus, algorithms are usually designed specifically for different applications.
Currently, applications of 3D reconstruction approaches mainly focus on the modeling of terrain, natural targets, and
Fig. 3.1 Example images of archaeological and architectural objects labeled with contour points, (a) dinosaur, (b) buildings
Fig. 3.2 Example images of close-range small objects, (a) finger, (b) palm, (c) ear, (d) iris
archaeological and architectural objects. Such objects are imaged at a long distance and have contour points, as in the examples shown in Fig. 3.1. The reconstruction of these kinds of objects has led researchers to ignore two important problems faced in the reconstruction of close-range objects. One is that it is hard to find contour or corner points for correspondence establishment in the 2D plane images of close-range objects, so the influence of the selection of feature correspondences, as well as of their number, increases for close-range reconstruction. The other is that a minor depth difference corresponds to a significant change in pixel position on the 2D plane images of close-range objects, so the effect of the distribution of correspondences is enlarged. Currently, there are no proven results for close-range objects with irregular surfaces such as human biometric traits (see Fig. 3.2). Motivated by the design of an effective method to model the shape of close-range objects without contour correspondences, a feature-based 3D reconstruction model is investigated in this chapter. This 3D modeling is based on traditional binocular stereo vision theory. The methodology of the reconstruction method is first introduced. Then, for the first time, we analyze the selection of feature points for correspondence establishment for close-range objects, as well as the impact of the number and distribution of feature correspondences on reconstruction accuracy, by reconstructing
an object with a standard cylindrical shape of radius 10 mm. The number and distribution of correspondences from two pictures of the cylinder were labeled and selected manually. After that, three criteria are set to guide the selection of feature correspondences on close-range objects for more accurate 3D reconstruction. These criteria are finally applied to the human finger, since it is a typical close-range object and feature correspondences of different numbers and distributions can be established automatically from its 2D fingerprints. The effectiveness of the criteria is demonstrated by comparing the accuracy of the finger shape reconstructed from different fingerprint feature correspondences with the corresponding 3D point cloud data obtained by the structured light illumination (SLI) technique, which is taken as the ground truth in this chapter.
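Because the cylinder's true geometry is known, reconstruction error can be quantified without any external ground-truth scan. The sketch below is a minimal illustration of one such error measure, assuming the cylinder axis has already been fitted or is known: for each reconstructed point, the deviation of its radial distance from the nominal 10 mm radius is computed and summarized as an RMSE. It is offered as an example of the kind of evaluation such an experiment enables, not the exact metric used in this chapter.

```python
import numpy as np

def radial_rmse(points, axis_point, axis_dir, radius=10.0):
    """RMSE of the radial deviation of reconstructed points (N x 3, in mm)
    from an ideal cylinder of the given radius.
    axis_point: any point on the cylinder axis; axis_dir: unit axis vector."""
    d = points - axis_point
    # Remove the component along the axis to get radial offset vectors.
    radial = d - np.outer(d @ axis_dir, axis_dir)
    errors = np.linalg.norm(radial, axis=1) - radius
    return np.sqrt(np.mean(errors ** 2))
```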
3.2 Feature-Based 3D Reconstruction Model
Based on the theory of binocular stereo vision [29], the 3D information of an object can be obtained from two different plane pictures captured at one time. As shown in Fig. 3.3, given two images Cl and Cr simultaneously captured from two viewpoints, the 3D coordinate of a point V can be calculated if the camera parameters (e.g., the focal length fl of the left camera, the focal length fr of the right camera, the principal point Ol of the left camera, and the principal point Or of the right camera) and the matched pair vl(xl, yl) ↔ vr(xr, yr) are provided, where v() represents a 2D point in the given image Cl or Cr, x is the column axis of the 2D image, and y is the row axis. Thus, there are mainly three steps in obtaining the 3D space coordinates of points from 2D images, namely camera calibration, correspondence establishment, and 3D coordinate calculation.
Fig. 3.3 3D coordinates calculation on 3D space using binocular stereo vision theory
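For intuition about the geometry of Fig. 3.3, consider the idealized rectified case of parallel optical axes with equal focal length f and baseline b between the two cameras. The depth of V then follows from similar triangles; this is a standard textbook relation, stated here for that simplified configuration rather than the general converging-camera setup treated below:

```latex
% Idealized rectified stereo: parallel optical axes, equal focal length f,
% baseline b, disparity d = x_l - x_r of a matched pair.
Z = \frac{f\,b}{d}, \qquad
X = \frac{Z\,x_l}{f}, \qquad
Y = \frac{Z\,y_l}{f}.
```

The smaller the disparity d of a matched pair, the farther the point; real configurations with converging cameras are handled by the general triangulation described in the following steps.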
Camera calibration refers to the calculation of camera parameters. It is the first step of reconstruction and provides the intrinsic parameters (focal length, principal point, skew, and distortion) of each camera and the extrinsic parameters (rotation, translation) between cameras that are necessary for reconstruction. It is usually implemented off-line, and commonly used methods and code are available [30, 31]. Correspondence establishment is of great importance and is also a hugely challenging problem in 3D modeling. The methods for correspondence establishment fall into two classes: feature-based approaches and correlation techniques [32–34]. Feature-based approaches usually produce sparse depth maps by matching feature correspondences, while correlation techniques yield dense depth maps by matching all pixels in the entire images. Each has merits and drawbacks. The feature-based approach is suitable when good features can be extracted from the 2D images; it is relatively insensitive to illumination changes and faster than the correlation technique, but it usually provides only sparse depth maps. The correlation technique is easier to implement than the feature-based method and can provide a dense depth map, but it does not work well when the viewpoints are very different. Generally, the feature-based approach is preferable to the correlation technique when both accuracy and time complexity are taken into account. The 3D coordinate of each correspondence can be calculated by the stereo triangulation method, given the camera parameters and the matched pairs between images of different views [31]. However, to obtain the 3D surface of an object, it is necessary to produce dense depth maps. There are two ways to realize 3D surface reconstruction with the feature-based approach. One is to establish pixel-to-pixel correspondence by estimating the transformation model between 2D images based on feature correspondences (labeled Framework I). The other is to find representative feature correspondences from the 2D images, assume a prior shape model, and then reconstruct the 3D surface by interpolation (labeled Framework II). The first framework is similar to the correlation-based technique due to its establishment of pixel-to-pixel correspondence, which has the drawbacks of low accuracy and high time complexity. This chapter therefore studies a reconstruction technique following Framework II and focuses on investigating the influence of feature correspondence establishment on 3D reconstruction accuracy for close-range objects. The proposed 3D reconstruction model is shown in Fig. 3.4.
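To make the triangulation step concrete, below is a minimal sketch of the standard linear (DLT) triangulation found in multiple-view geometry texts: given the 3x4 projection matrices obtained from calibration and one matched pair, the homogeneous 3D point is recovered as the null vector of a small linear system. This is the generic textbook algorithm, offered for illustration, not the chapter's full reconstruction pipeline.

```python
import numpy as np

def triangulate(P_l, P_r, v_l, v_r):
    """Linear (DLT) triangulation of one correspondence.
    P_l, P_r: 3x4 camera projection matrices from calibration.
    v_l, v_r: matched 2D points (x, y) in the left and right images.
    Returns the 3D point in world coordinates."""
    A = np.stack([
        v_l[0] * P_l[2] - P_l[0],   # x_l * p3 - p1 = 0
        v_l[1] * P_l[2] - P_l[1],   # y_l * p3 - p2 = 0
        v_r[0] * P_r[2] - P_r[0],
        v_r[1] * P_r[2] - P_r[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```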
3.3 Criteria for Close-Range Objects Reconstruction
Figure 3.5 shows an example of a reconstruction result based on the model given in Fig. 3.4. It can be seen that the correspondences established on the objects are almost all contour or corner points labeled manually. This approach fails for close-range objects without contour or corner points on them, which raises the problem of selecting representative features for correspondence establishment. Meanwhile, the number
Fig. 3.4 The proposed 3D reconstruction model in this chapter
Fig. 3.5 Building reconstruction results by following the model shown in Fig. 3.4. (a) contour or corner correspondences establishment result, (b) 3D reconstruction result wrapped with texture image
and distribution of feature correspondences also play an important role in the 3D reconstruction accuracy. These two problems are studied in depth in the following subsections. Finally, three criteria are set based on this analysis so as to guide feature correspondence establishment for the 3D modeling of close-range objects.
3.3.1 Selection of Representative Features for Correspondence Establishment
Generally speaking, it is intuitive to select corner points, which refer to the intersections of two lines or to points located where two adjacent objects with different principal lines meet, as representative features for correspondence establishment, like the points manually labeled in Fig. 3.5. However, some objects contain no corner points, as the example images in Fig. 3.2 show. It is then necessary to find representative feature points, or corner-like points, to replace corner points for correspondence establishment. By observing the images of the objects in Fig. 3.2, we can see that lines or regions of variation are widespread in them. In this Chapter, we regard the points located at positions of change as representative feature points. We summarize three typical situations: on a line, between lines, and between regions. The first two situations relate to lines, for a single line and between lines, and are analyzed on fingerprint images, since fingerprints consist of lines. For a single line, as the solid line labeled in the example fingerprint image in Fig. 3.6 shows, changes occur at the end of the line or at the point where its direction changes sharply. Generally, the end of a line is defined as a termination point (triangle in Fig. 3.6), and the point where the line's direction changes sharply is a local extreme point (circle in Fig. 3.6). These two kinds of points are therefore selected as representative feature points for a single line in this Chapter. In the situation between lines (see the dashed lines in Fig. 3.6), change exists only at the intersection of the lines, so the representative feature point is defined as the intersection point between lines (rectangle in Fig. 3.6). The third situation is between regions. Besides lines, some objects consist of different regions, like the iris image in Fig. 3.7, which contains regions with different textures and colors. Similar to the situation between lines, changes occur at the boundary between adjacent regions, as the circles labeled in Fig. 3.7 show. Thus, in the third case, representative feature points are defined as points on the boundary between adjacent regions.
Fig. 3.6 Illustration of representative feature points of lines in a fingerprint image
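For line-type images such as fingerprints, termination and intersection points of the kind defined above can be located automatically. The following sketch applies the generic crossing-number rule to a skeletonized binary image; it is an illustration under these assumptions, not the exact detector used in this Chapter:

```python
# Sketch: locating line terminations and intersections on a binary line
# image via the crossing number of each skeleton pixel.
import numpy as np
from skimage.morphology import skeletonize

def line_feature_points(binary_img):
    """binary_img: 2D bool array, True on lines (e.g., fingerprint ridges)."""
    skel = skeletonize(binary_img).astype(np.uint8)
    terminations, intersections = [], []
    for y in range(1, skel.shape[0] - 1):
        for x in range(1, skel.shape[1] - 1):
            if not skel[y, x]:
                continue
            n = skel[y - 1:y + 2, x - 1:x + 2].sum() - 1  # 8-neighbour count
            if n == 1:
                terminations.append((x, y))    # end of a line
            elif n >= 3:
                intersections.append((x, y))   # lines crossing or bifurcating
    return terminations, intersections
```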
Fig. 3.7 Illustration of representative feature points between regions in an example iris image
Fig. 3.8 (a) Original cylinder shape object wrapped with grid paper, (b) 2D images of (a) captured by the left and right cameras in the experiments
Finally, we summarize the first criterion for feature correspondence establishment in the 3D reconstruction of close-range objects as: Criterion 1: Select representative feature points or corner-like points for correspondence establishment.
3.3.2 Influence Analysis of Correspondences on Reconstruction Accuracy
It is well known that establishing pixel-to-pixel feature correspondences, manually or automatically, is extremely difficult, especially for close-range objects, because of their small size and the unknown number and irregular distribution of feature points on them. To the best of our knowledge, no literature is available on how the number and distribution of feature correspondences influence 3D reconstruction accuracy. This subsection thus studied the problem by reconstructing a small object with a standard cylinder shape of radius 10 mm. To facilitate the labeling of feature correspondences, we wrapped the cylinder with grid paper, as shown in Fig. 3.8a. Following the definition of representative feature points in the previous subsection, the feature points located on the boundaries between adjacent
regions were manually labeled for correspondence establishment. By following the method introduced in Sect. 3.3.2, the shape of the cylinder can be reconstructed. Here, a software package popularly used for displaying and analyzing 3D point cloud data was employed to display and analyze the reconstruction results. First, experiments were organized to analyze the effect of the number of correspondences on 3D reconstruction accuracy, so the distributions of the selected feature correspondences were all even. The largest number of correspondences between the two 2D images shown in Fig. 3.8b was set to 40, because this is the largest area those points covered in the experiments. The number of feature correspondences along both the horizontal and the vertical axis was then gradually reduced to obtain different reconstruction results. Table 3.1 lists the experimental parameters, including the number of feature correspondences, the distribution of correspondences, and the sampling interval and direction along which the number was decreased, together with the reconstruction results. Details of the corresponding feature correspondence establishment and reconstruction results are given in Fig. 3.9. From the results, we can see that the reconstruction accuracy dropped as the number of correspondences decreased. Table 3.1 also shows that decreasing the number has little influence on reconstruction accuracy for a straight shape (vertical axis), whereas for a curved shape (horizontal axis) the accuracy decreased as the correspondence number decreased and the sampling interval increased. For the same number of feature correspondences, the closer the correspondences, the larger the error may be, owing to the errors incurred in the 3D coordinate calculation of each correspondence. Therefore, we set the second criterion for feature correspondence establishment in the 3D reconstruction of close-range objects as: Criterion 2: Sample feature correspondences densely along the direction where depth changes quickly and sparsely along the direction where depth changes smoothly. We also conducted a 3D reconstruction experiment by randomly selecting feature correspondences with irregular distributions. The selected feature correspondences and the corresponding reconstruction result are shown in Fig. 3.10, where the number of correspondences is around 20. By comparing this result with Enum-2, Enum-3, and Enum-4 shown in Fig. 3.9 (they have similar numbers of correspondences), we can see that better results were achieved with a large sampling interval, regardless of whether the distribution of feature correspondences is even or not. Thus, the third criterion for feature correspondence establishment in the 3D reconstruction of close-range objects in this Chapter is: Criterion 3: Establish feature correspondences that cover as much of the object's surface as possible.
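The reconstructed radius reported in Table 3.1 can be estimated by fitting a circle to a cross-section of the reconstructed points. A minimal least-squares (Kasa) fit is sketched below, with synthetic noisy points standing in for real reconstruction output:

```python
# Sketch: estimating the reconstructed cylinder radius by a Kasa
# least-squares circle fit on one horizontal cross-section of points.
import numpy as np

def fit_circle_radius(x, z):
    """x, z: coordinates of reconstructed points in one cross-section."""
    A = np.column_stack([2 * x, 2 * z, np.ones_like(x)])
    b = x**2 + z**2
    (cx, cz, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sqrt(c + cx**2 + cz**2)    # fitted radius

# Hypothetical noisy points on a 10 mm arc, for illustration only.
theta = np.linspace(0, np.pi / 2, 20)
x = 10 * np.cos(theta) + np.random.normal(0, 0.05, 20)
z = 10 * np.sin(theta) + np.random.normal(0, 0.05, 20)
print("reconstructed radius: %.2f mm" % fit_circle_radius(x, z))
```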
3.4 Human Finger 3D Reconstruction
Human fingers are typical close-range objects. To verify the effectiveness of the proposed reconstruction model and the criteria on reconstruction accuracy for close-range objects, this Chapter took the reconstruction of
Table 3.1 Setting of experimental parameters and the corresponding reconstruction results

Experiment | Number of feature correspondences | Distribution of correspondences (sampling interval, direction along decreasing number) | Reconstructed radius (standard: 10 mm)
Enum-1 | 40 | Even (2 mm, –) | 9.91
Enum-2 | 24 | Even (4 mm, horizontal) | 9.85
Enum-3 | 24 | Even (4 mm, vertical) | 9.78
Enum-4 | 20 | Even (6 mm, vertical) | 9.80
Enum-5 | 15 | Even (6 mm, horizontal) | 2.77
Enum-6 | 12 | Even (8 mm, vertical) | 9.68
Enum-7 | 12 | Even (8 mm, horizontal) | 3.69
Enum-8 | 12 | Even (10 mm, vertical) | 9.71
Enum-9 | 10 | Even (2 mm, horizontal) | 9.08
Fig. 3.9 Established feature correspondences and the reconstruction result for (a) Enum-1, (b) Enum-2, (c) Enum-3, (d) Enum-4, (e) Enum-5, (f) Enum-6, (g) Enum-7, (h) Enum-8, (i) Enum-9, listed in Table 3.1
Fig. 3.10 Established feature correspondences with irregular distribution and the reconstruction result
finger shape as a case study. The device used to capture 2D fingerprint images was the same as the one introduced in Ref. [35].
3.4.1 Effectiveness Validation of the Proposed Reconstruction Model
As mentioned in Sect. 3.3.2, there are two frameworks for obtaining 3D results with the feature-based reconstruction technique, and this Chapter selected Framework II in the proposed reconstruction model. This subsection demonstrates the effectiveness of the proposed model by reconstructing a human finger with both frameworks. First, we manually labeled 50 representative feature correspondences on example fingerprint images by following the criteria set in Sect. 3.3.3, as shown in Fig. 3.11a. Then, pixel-to-pixel correspondences were established by estimating the transformation model between the images based on the previously labeled feature correspondences; the result is shown in Fig. 3.11b. Here, a rigid transform was selected as the model between images. After that, the 3D reconstruction results were obtained by following the procedures given in Sect. 3.3.2, as shown in Fig. 3.12. For better comparison, the depth of each reconstruction result was normalized to [0, 1] by the min-max rule. From Fig. 3.12, we can see that the result obtained by the proposed model is closer to the appearance of a human finger than the one generated by following the procedure of Framework I. Furthermore, we compared the reconstruction results with the 3D point cloud data of the same finger to verify the effectiveness of the model. The 3D point cloud data are defined as the depth information of each point on the finger. They were collected by a camera together with a projector using the Structured Light Illumination (SLI) method [36, 37]. Since this technique is well studied and proven to acquire the 3D depth of each point on the finger with high accuracy [36, 37], the 3D point cloud data obtained with it are taken as the ground truth of the human finger in this Chapter.
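The Framework II pipeline used in this case study can be sketched as follows: triangulate the sparse representative correspondences, then interpolate a dense depth surface over the image grid. The projection matrices and matched points are assumed to come from the calibration and labeling steps described above:

```python
# Sketch of Framework II: triangulate sparse correspondences, then
# interpolate a dense depth map over the left-image grid. All inputs
# (projection matrices, matched points, grid size) are assumed given.
import numpy as np
import cv2
from scipy.interpolate import griddata

def reconstruct_surface(P_l, P_r, pts_l, pts_r, grid_shape):
    """pts_l, pts_r: 2xN matched pixel coordinates; grid_shape: (H, W)."""
    V_h = cv2.triangulatePoints(P_l, P_r, pts_l, pts_r)
    V = (V_h[:3] / V_h[3]).T                      # N x 3 sparse 3D points
    H, W = grid_shape
    gx, gy = np.meshgrid(np.arange(W), np.arange(H))
    # Interpolate the sparse depths (indexed by left-image pixel position)
    # into a dense map; cubic interpolation inside the convex hull.
    depth = griddata(pts_l.T, V[:, 2], (gx, gy), method='cubic')
    return depth
```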
Fig. 3.11 Correspondence establishment results. (a) Manually labeled 50 representative feature correspondences. (b) Pixel-to-pixel correspondences (gray part in the center) after rigid transformation between fingerprint images
Fig. 3.12 3D reconstruction results. (a) reconstruction result based on the model of Framework I, (b) reconstruction result based on our proposed reconstruction model
Comparing our results in Fig. 3.12 with the ground truth shown in Fig. 3.13, it can be seen that the profile of the human finger shape reconstructed by the proposed model is similar to the 3D point cloud data, even though it is not as accurate. Meanwhile, the reconstruction result based on Framework I shown in Fig. 3.12a is quite different from the 3D point cloud data. The distances between the upper-left core point and the lower-left delta point of the reconstruction results in Fig. 3.12a, b, and of the ground truth in Fig. 3.13a, were also calculated; the corresponding values are 0.431, 0.353, and 0.386, respectively. As a result, it is concluded that the proposed model is effective, even though there is an error between the reconstruction result and the 3D point cloud data.
Fig. 3.13 Ground truth of the same finger as in Fig. 3.11, captured by the structured light illumination (SLI) technique. (a) Original fingerprint image captured by the camera when collecting the 3D point cloud. (b) 3D point cloud collected by one camera and a projector using the SLI method
3.4.2 Criteria Verification
This Chapter proposed three criteria to guide feature correspondence establishment for the 3D reconstruction of close-range objects. The effectiveness of these criteria was verified by analyzing the reconstruction accuracy based on different fingerprint feature correspondences. As studied in [38], there are two classical fingerprint features for low-resolution fingerprint images, namely ridge features and minutiae. Feature correspondences were first established automatically from the images shown in Fig. 3.11 by using the algorithms introduced in [35, 38, 39], and the reconstruction results were then generated based on the different fingerprint feature correspondences, as illustrated in Table 3.2. It can be seen that the results differ across feature matched pairs, owing to the different numbers and distributions of established fingerprint feature correspondences and the existence of false correspondences. From the results shown in Table 3.2, it can be seen that the reconstruction result based on minutiae correspondences is better than the one based on ridge features. The histograms of the error maps between the results in Table 3.2 and the ground truth in Fig. 3.13b are shown in Fig. 3.14: smaller errors were achieved between the minutiae-based reconstruction result and the ground truth. These results demonstrate the criteria proposed in this Chapter: (1) minutiae, which refer to the endings or bifurcations of ridges, satisfy the definition of representative feature points or corner-like points in this Chapter, whereas ridge features, which are samplings of lines, provide too much insignificant information; thus, it is better to select representative feature points or corner-like points for correspondence
Table 3.2 Reconstruction results from different fingerprint feature correspondences of Fig. 3.11 (rows: established correspondences and reconstructed 3D fingerprint image; columns: minutiae and ridge feature)
Fig. 3.14 Histograms of error maps between the reconstructed results in Table 3.2 and Fig. 3.13b. (a) Histogram of the error map between Fig. 3.13b and the reconstruction result using minutiae. (b) Histogram of the error map between Fig. 3.13b and the reconstruction result using ridge features
establishment; (2) poor results are achieved if correspondences are densely established along the direction with smoothly changing depth, as with ridge feature correspondences; therefore, we recommend sparsely sampling feature correspondences along the direction where depth changes smoothly; (3) the region covered by minutiae correspondences is larger than the one covered by ridge correspondences, and a better reconstruction result is achieved correspondingly; hence, this Chapter set the third criterion of establishing feature correspondences that cover as much of the object's surface as possible.
3.5 Summary
The issue of feature-based 3D reconstruction for close-range objects was investigated in this Chapter. For close-range objects, it is very hard to find pixel-to-pixel correspondences from their 2D images, so our study mainly focused on 3D modeling with limited feature correspondences. In this situation, the selection of representative feature correspondences and the number and distribution of the feature correspondences play an important role in the 3D reconstruction accuracy. Features on representative close-range objects were analyzed, and suitable features for correspondence establishment were indicated. Subsequently, the impact of the number and distribution of feature correspondences was analyzed by reconstructing a standard cylinder of radius 10 mm. Three criteria were set to guide the selection of features on close-range objects for more accurate 3D reconstruction. We finally took the reconstruction of a human finger as a case study by applying the proposed criteria. Their effectiveness was demonstrated by comparing the accuracy of finger shapes reconstructed from different fingerprint feature correspondences against the corresponding 3D point cloud data obtained by the structured light illumination (SLI) technique, which was taken as the ground truth in this Chapter.
References

1. Rusinkiewicz, S., Hall-Holt, O., Levoy, M.: Real-time 3D model acquisition. ACM Trans. Graph. (TOG) 21(3), 438–446 (2002)
2. Bradley, B.D., Chan, A.D.C., Hayes, M.J.D.: A simple, low cost, 3D scanning system using the laser light-sectioning method. In: 2008 IEEE Instrumentation and Measurement Technology Conference. IEEE (2008)
3. Wu, Q., et al.: A 3D modeling approach to complex faults with multi-source data. Comput. Geosci. 77, 126–137 (2015)
4. Wang, Y., Hassebrook, L.G., Lau, D.L.: Data acquisition and processing of 3-D fingerprints. IEEE Trans. Inf. Forensics Secur. 5(4), 750–760 (2010)
5. Stockman, G.C., et al.: Sensing and recognition of rigid objects using structured light. IEEE Control. Syst. Mag. 8(3), 14–22 (1988)
6. Hu, G., Stockman, G.: 3-D surface solution using structured light and constraint propagation. IEEE Trans. Pattern Anal. Mach. Intell. 11(4), 390–402 (1989)
7. Horn, B.K.P.: Shape from Shading: A Method for Obtaining the Shape of a Smooth Opaque Object from One View (1970)
8. Zhang, R., et al.: Shape-from-shading: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 21(8), 690–706 (1999)
9. Worthington, P.L.: Reillumination-driven shape from shading. Comput. Vis. Image Underst. 98(2), 325–343 (2005)
10. Akimoto, T., Suenaga, Y., Wallace, R.S.: Automatic creation of 3D facial models. IEEE Comput. Graph. Appl. 13(5), 16–22 (1993)
11. Lao, S., et al.: Building 3D facial models and detecting face pose in 3D space. In: Second International Conference on 3-D Digital Imaging and Modeling (Cat. No. PR00062). IEEE (1999)
12. Brodsky, T., Fermuller, C., Aloimonos, Y.: Shape from video. In: Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No. PR00149), vol. 2. IEEE (1999)
13. Strecha, C., et al.: Shape from video vs. still images. Proc. Opt. 3D Meas. Tech. 2, 168–175 (2003)
14. Woodham, R.J.: Photometric method for determining surface orientation from multiple images. Opt. Eng. 19(1), 191139 (1980)
15. Rushmeier, H., Taubin, G., Guéziec, A.: Applying shape from lighting variation to bump map capture. In: Rendering Techniques '97, pp. 35–44. Springer, Vienna (1997)
16. Malzbender, T., et al.: Surface enhancement using real-time photometric stereo and reflectance transformation. In: Rendering Techniques 2006: 17th Eurographics Symposium on Rendering (2006)
17. Grün, A.: Semi-automated approaches to site recording and modeling. IAPRS 33, 309–318 (2000)
18. Bradley, B.D., Chan, A.D.C., Hayes, M.J.D.: A simple, low cost, 3D scanning system using the laser light-sectioning method. In: 2008 IEEE Instrumentation and Measurement Technology Conference. IEEE (2008)
19. Paris, S.: Methods for 3D Reconstruction from Multiple Images. Massachusetts Institute of Technology. https://people.csail.mit.edu/sparis/talks/Paris_06_3D_Reconstruction.pdf (2006)
20. Said, A.Md., Hasbullah, H., Baharudin, B.: Image-based modeling: a review. J. Theor. Appl. Inf. Technol. 5(2) (2009)
21. Remondino, F., El-Hakim, S.: Image-based 3D modelling: a review. Photogramm. Rec. 21(115), 269–291 (2006)
22. Udayan, J.D., Kim, H.S., Kim, J.-I.: An image-based approach to the reconstruction of ancient architectures by extracting and arranging 3D spatial components. Front. Inf. Technol. Electron. Eng. 16(1), 12–27 (2015)
23. Grün, A., Remondino, F., Zhang, L.: Photogrammetric reconstruction of the great Buddha of Bamiyan, Afghanistan. Photogramm. Rec. 19(107), 177–199 (2004)
24. Zhu, S., Xia, Y.: 3D Simulation and Reconstruction of Large-Scale Ancient Architecture with Techniques of Photogrammetry and Computer Science (2005)
25. Zhang, R., et al.: Shape-from-shading: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 21(8), 690–706 (1999)
26. Poggio, G.F., Poggio, T.: The analysis of stereopsis. Annu. Rev. Neurosci. 7(1), 379–412 (1984)
27. Hartley, R.I., Mundy, J.L.: Relationship between photogrammetry and computer vision. In: Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision, vol. 1944. International Society for Optics and Photonics (1993)
28. Henrichsen, A.: 3D Reconstruction and Camera Calibration from 2D Images. Dissertation, University of Cape Town (2000)
29. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, New York (2003)
30. Zhang, Z.: A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000)
31. Bouguet, J.: First calibration example – corner extraction, calibration, additional tools. Camera Calibration Toolbox for Matlab. www.vision.caltech.edu/bouguetj/calib_doc/htmls/example.html
32. Correspondence problem. Wikipedia, Wikimedia Foundation, 8 Oct. 2019. http://en.wikipedia.org/wiki/Correspondence_problem#Basic_Method
33. Ogale, A.S., Aloimonos, Y.: Shape and the stereo correspondence problem. Int. J. Comput. Vis. 65(3), 147–162 (2005)
34. Belhumeur, P.N., Mumford, D.: A Bayesian treatment of the stereo correspondence problem using half-occluded regions. In: CVPR, pp. 506–512 (1992)
35. Liu, F., et al.: Touchless multiview fingerprint acquisition and mosaicking. IEEE Trans. Instrum. Meas. 62(9), 2492–2502 (2013)
36. Wang, Y., Hassebrook, L.G., Lau, D.L.: Data acquisition and processing of 3-D fingerprints. IEEE Trans. Inf. Forensics Secur. 5(4), 750–760 (2010)
37. Zhang, D., et al.: Robust palmprint verification using 2D and 3D features. Pattern Recogn. 43(1), 358–368 (2010)
38. Choi, H., Choi, K., Kim, J.: Mosaicing touchless and mirror-reflected fingerprint images. IEEE Trans. Inf. Forensics Secur. 5(1), 52–61 (2010)
39. Jain, A., Hong, L., Bolle, R.: On-line fingerprint verification. IEEE Trans. Pattern Anal. Mach. Intell. 19(4), 302–314 (1997)
Chapter 4
3D Fingerprint Authentication
Abstract Touchless fingerprint recognition technology is considered an alternative to touch-based systems that solves problems of hygiene, latent fingerprints, and maintenance. However, there are few studies of touchless fingerprint recognition systems, owing to the lack of a large database and the intrinsic drawback of the low ridge-valley contrast of touchless fingerprint images. This Chapter proposes an end-to-end solution for user authentication based on touchless fingerprint images, in which a multi-view strategy is adopted to collect images and a fingerprint feature that can be robustly extracted from touchless images is matched with high recognition accuracy. More specifically, a touchless multi-view fingerprint capture device is designed to generate three views of raw images, followed by preprocessing steps including region of interest (ROI) extraction and image correction. The distal interphalangeal crease (DIP)-based feature is then extracted and matched to recognize a person's identity, and view selection is introduced to improve matching efficiency. Experiments are conducted on a two-session touchless multi-view fingerprint image database of 541 fingers, with the sessions acquired about 2 weeks apart. An EER of 1.7% can be achieved by using the proposed DIP-based feature, which is much better than touchless fingerprint recognition using scale-invariant feature transformation (SIFT) and minutiae features. The reported fusion results show that it is effective to combine the DIP-based feature, minutiae, and SIFT features in touchless fingerprint recognition systems: the EER is as low as 0.5%. Keywords Competitive coding scheme · Distal interphalangeal crease (DIP) · Finger width · Multi-view images · Touchless fingerprint recognition
4.1 Introduction
Personal identification based on human fingers has been applied in forensic and civilian domains for decades. It has the largest market share and the brightest prospects among the various biometric-feature-based recognition systems [1]. Even though there are many research achievements and products in the fingerprint recognition domain, the performance still cannot reach people's expectations or theoretical estimates
Fig. 4.1 The touch-based fingerprint (left) corresponds approximately to the portion of the touchless fingerprint (right) enclosed by the polygon; extra information provided by the touchless fingerprint image is labeled by red lines
[2]. Human intervention is necessary when handling low-quality fingerprint images. Other new requirements have also emerged with the increasing introduction of fingerprint-based techniques into civilian applications, such as user convenience, template security, and hygiene. Furthermore, with the rapid development of computer information security technology, multimodal biometrics has become an inevitable trend. Nowadays, rapid advances in fingerprint sensing technology provide solutions to meet these increasing demands. Researchers find that the touchless fingerprint imaging technique has decisive advantages: it is insensitive to skin deformation (skin elasticity, nonuniform pressure) and skin conditions (dry/wet or dirty), avoids the distortions and inconsistencies caused by projecting a 3-D finger onto a 2-D flat image, is secure against latent fingerprints, is practically maintenance free, and is hygienic and robust to fake attacks [3–6]. Meanwhile, a larger fingerprint area and other finger-related information can easily be obtained by capturing images at a distance. As shown in Fig. 4.1, the effective area of the touch-based fingerprint (left) corresponds approximately to the portion of the touchless fingerprint (right) enclosed by the polygon, and the finger shape and distal interphalangeal crease (DIP) are additionally offered by touchless imaging. Thanks to such merits of touchless imaging, researchers and companies have begun to design and investigate touchless fingerprint recognition systems, although some of them are still in the prototyping phase [3–16]. For example, the group led by Kim in Korea proposed a prototype touchless fingerprint recognition system using a camera sensor, but only some preprocessing steps were applied to the captured fingerprint images [8]. In their subsequent work, they developed a fingerprint enhancement method, resolved the 3-D to 2-D image mapping problem, and applied the fingerprint verification technology to mobile handsets [5, 9]. In 2010, they renewed their device to obtain more than one view of the finger at a time to solve the finger rolling problem and proposed a mosaicking method using fingerprint minutiae [6]. Hiew et al. [11] designed their own digital-camera-based touchless fingerprint capture device and built an end-to-end fingerprint recognition system by extracting Gabor features from the cropped core-point region and using an SVM classifier. Kumar et al. [12] captured touchless fingerprint images using a webcam and proposed using their defined Level Zero Features (texture-based features) to realize
Fig. 4.2 Touch-based fingerprint (left), corresponding touchless fingerprint (right), and their pixel value cross sections (middle)
low-resolution fingerprint recognition. Parziale et al. [14] from the TBS company designed a multicamera touchless fingerprint capture device (Surround Imager™) and proposed a new representation of fingerprints, namely 3-D minutiae, for fingerprint recognition. Jain et al. [15] then proposed an unwrapping algorithm to solve the interoperability issue between the rolled images used in AFIS and the touchless fingerprint images captured by the Surround Imager™. After that, TBS went on to improve their device and realize 3-D fingerprint recognition [16]. Recently, we designed a touchless multi-view fingerprint capture device by optimizing parameters regarding the captured fingerprint image quality and device size [17]. That work also introduced a mosaicking method and compared performance between mosaicked images and flat touch-based fingerprint images. However, few of these works realized human authentication, for which three major reasons may be involved. First, there is a lack of public and sufficiently large touchless fingerprint image databases for performance evaluation, which greatly limits the development of touchless fingerprint recognition algorithms. Second, the essential drawback of low ridge-valley contrast creates unprecedented difficulties for classical fingerprint feature extraction. This disadvantage can be explained by the imaging methods of the two kinds of devices: in the case of FTIR (Frustrated Total Internal Reflection) imaging, the light that passes through the glass at valleys is totally reflected, but the light that passes through the glass at ridges is not; in the case of touchless imaging, both ridges and valleys reflect light, and the contrast of ridges and valleys results from ridges receiving and reflecting slightly more light than valleys. Figure 4.2 shows an example of a touch-based fingerprint, the corresponding touchless fingerprint, and their pixel-value cross sections. This is why lower recognition accuracy is usually obtained when touchless fingerprint recognition systems are compared with touch-based ones. Third, in real fingerprint recognition systems, the performance is degraded by the limitations of single-view touchless imaging, such as the depth of field (DoF) of the camera and the perspective distortion introduced by the camera.
Fig. 4.3 Block diagram of the proposed DIP-based user authentication system
To solve the problems mentioned above, a touchless multi-view fingerprint database with 541 fingers, captured by the device presented in [17], is first built for performance evaluation. Then, new fingerprint features which can be robustly extracted from low ridge-valley contrast images are investigated, and the corresponding algorithms for feature extraction and matching are proposed. Also, a view selection strategy is described to reduce computation complexity when multi-view images of one finger are matched. The block diagram of the proposed touchless multi-view fingerprint authentication system is shown in Fig. 4.3. An end-to-end touchless fingerprint authentication system is finally achieved with an EER as low as 0.5%, which is acceptable for practical civilian applications. The remainder of this Chapter is organized as follows. The touchless fingerprint image capture device and some preprocessing steps are briefly described in Sect. 4.2. In Sect. 4.3, the fingerprint recognition algorithm using the DIP-based feature is presented in detail. Performance analyses and experimental comparisons are shown in Sect. 4.4. This Chapter is concluded and future work is indicated in Sect. 4.5.
4.2 Touchless Fingerprint Image Preprocessing
To build the database, a touchless multi-view fingerprint acquisition device was designed by our group [17]. The schematic diagram and prototype of the device are shown in Fig. 4.4. One central camera and two side cameras were focused on the finger, with an angle of roughly 30 degrees between the central camera and each side camera. Four blue LEDs were used to light the finger and were arranged to give uniform brightness. The three channels of fingerprint images are illustrated in
Fig. 4.4 Proposed touchless multi-view fingerprint capture device. (a) Prototype of the device. (b) Schematic diagram of the device
Fig. 4.5 Images of a finger captured by our device (left, frontal, right)
Fig. 4.5. The image size of each channel was restricted to 576 × 768 pixels, and the resolution of the images was about 400 dpi. The details of the parameter setting of the device can be found in [17]. It can be seen from Fig. 4.5 that the contrast of ridges and valleys is low, the ridge frequency increases from the center part to the side parts, and a large fingerprint area with more information is captured. Obviously (see Fig. 4.5), it is necessary to preprocess the original images. First, the foreground should be separated from the background, namely ROI extraction. In the ideal situation, a simple thresholding segmentation algorithm can easily separate the ROI from the background, thanks to the fully black background of the designated finger placement area. In the real application, a more robust iterative thresholding segmentation method [20] was adopted to extract the ROI.
Fig. 4.6 Preprocessing results of the frontal image given in Fig. 4.5. (a) ROI. (b) Illustration of the angle calculation for image correction. (c) Corrected fingerprint image
This method first sets an initial threshold to separate the original image into foreground and background and calculates the mean values of both regions. A new threshold is then computed by averaging those mean values, and this step is repeated iteratively; the optimal threshold is obtained when the difference between the current threshold and the previous one is smaller than 0.005. This method was found to be effective for the captured images. Figure 4.6a shows the segmentation result of the frontal fingerprint image given in Fig. 4.5. Since some fingerprint images were tilted because volunteers placed their fingers casually during image collection, the images had to be corrected before feature extraction. This study proposed correcting the fingerprint image by the following steps: (i) find the center point of each row of the ROI through horizontal scanning (see the yellow line in Fig. 4.6b); (ii) fit those center points X from step (i) with a line Y = aX (see the blue line in Fig. 4.6b), where a is the slope of this line; (iii) calculate the angle between the fitting line and the vertical axis (see the red line in Fig. 4.6b) as θ = 90° − tan⁻¹(a); (iv) rotate the original image by the angle θ anti-clockwise. The corrected image was finally obtained, as shown in Fig. 4.6c. It is noted that whether the ROI is extracted correctly or not affects the image rectification result, since the angle is calculated based on the ROI. However, thanks to the fully black background of the designated finger placement area and the robust ROI extraction method, there are few wrongly extracted ROIs in the whole database. Figure 4.7 gives the histogram of all the rotation angles obtained by correcting the images in the whole database using the proposed method; it can be seen that the angle is usually smaller than 10 degrees. Here, we intentionally segmented an image (Fig. 4.8a) with a bad ROI (see Fig. 4.8b) and rectified it using the proposed correction method; Fig. 4.8c shows the corrected image of Fig. 4.8a. Figure 4.8e illustrates the corrected result based on the good ROI (Fig. 4.8d), which was extracted by the method introduced in this Chapter; the rotation angle is ~7.53°. The difference between the two corrected results was small and acceptable. This phenomenon reflects that the proposed correction method is robust to the ROI result to a certain degree.
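The two preprocessing steps can be sketched as follows. The iterative threshold uses the 0.005 stopping rule given above, and the rotation correction follows steps (i)–(iv); the initial threshold choice, function names, and angle formula rendering are illustrative assumptions:

```python
# Sketch: iterative-threshold ROI extraction and rotation correction.
import numpy as np
from scipy import ndimage

def iterative_threshold(img, eps=0.005):
    t = img.mean()                             # assumed initial threshold
    while True:
        fg, bg = img[img > t], img[img <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())  # average of region means
        if abs(t_new - t) < eps:               # stopping rule from the text
            return t_new
        t = t_new

def correct_rotation(img):
    roi = img > iterative_threshold(img.astype(float))
    ys, centers = [], []
    for y in range(roi.shape[0]):              # (i) centre point of each ROI row
        xs = np.flatnonzero(roi[y])
        if xs.size:
            ys.append(y)
            centers.append(xs.mean())
    a = np.polyfit(centers, ys, 1)[0]          # (ii) fit line Y = aX (slope a)
    theta = 90.0 - np.degrees(np.arctan(a))    # (iii) angle to the vertical axis
    return ndimage.rotate(img, theta, reshape=False)  # (iv) rotate anti-clockwise
```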
Fig. 4.7 Histogram of all the rotation angles obtained by correcting images on the whole database using the proposed correction method
Fig. 4.8 Image correction with bad and good ROI. (a) Original fingerprint image. (b) Intentionally extracted bad ROI. (c) Corrected fingerprint image based on (b). (d) Good ROI extracted by the method adopted in this Chapter. (e) Corrected fingerprint image based on (d)
This is because the rotation angle is calculated after fitting the center points with a line (see the red line shown in Fig. 4.8b), which alleviates the influence of wrongly calculated center points (green points shown in Fig. 4.8b) caused by the bad ROI region.
4.3 DIP-Based Fingerprint Feature Extraction and Matching

4.3.1 Feature Extraction
In general, fingerprint features are divided into three levels [21]. These three levels of fingerprint features are categorized by their relationship with fingerprint ridges. Level 1 features are the macro details of fingerprints, such as singular points and global ridge patterns. Level 2 features refer to ridge endings and bifurcations. Level 3 features are defined as the dimensional attributes of the ridges. However, it is difficult to robustly extract features that are closely related to fingerprint ridges from touchless fingerprints, due to the intrinsic drawback of low ridge-valley contrast. Thus, in [12], the authors labeled four levels of fingerprint features according to image resolution, designating as level 0 features those fingerprint features which can be observed and extracted from very low resolution images. Such level 0 features were then extracted and used for human authentication on the database they established with very low resolution images. By observing the raw images captured by our device, we find that the distal interphalangeal crease (DIP)-based and finger width features are less related to fingerprint ridges and can be used for human authentication. These features can be extracted from very low resolution fingerprint images, and they are studied in this Chapter. It should be noted that the preprocessed image was downsampled before feature extraction to reduce the computational complexity. The DIP [22] is defined as the only permanent flexion crease located between the medial and distal segments of each finger except the thumb (where it lies between the proximal and distal segments), as shown in Fig. 4.9 (cropped by the red rectangles). It can be seen that DIPs have two obvious characteristics: (I) the principal orientation is almost perpendicular to the fingertip direction; (II) they are dark, thick lines, similar to the principal lines in palmprints (see Fig. 4.10). A method is then proposed in this Chapter to extract the location of the DIP and the DIP-based feature based on these two characteristics. Firstly, the orientation field of the pre-processed fingerprint image was calculated using the classical gradient-based approach introduced in [23], represented by {O | oᵢ ∈ [0°, 180°)} in this Chapter.
Fig. 4.9 Example images to show the DIP feature (cropped by red rectangles). (a) Index finger. (b) Thumb
Fig. 4.10 Principal line on a palmprint (cropped by red rectangle)
Fig. 4.11 Orientation map and generated mask M of Fig. 4.6c. (a) Orientation map. (b) Mask M
In view of characteristic I of the DIP, points whose orientations are close to 0° or 180° predict the existence of the DIP. A mask M was then generated to forecast the location of the DIP by using Eq. (4.1); the angles of 30° and 150° in Eq. (4.1) were set by experience. Figure 4.11 gives an example of the mask of Fig. 4.6c.

$$M = \begin{cases} 1, & o_i \le 30^\circ \ \text{or} \ o_i \ge 150^\circ \\ 0, & \text{otherwise} \end{cases} \qquad (4.1)$$
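A minimal sketch of the mask generation of Eq. (4.1) is given below. The block-wise gradient-based orientation estimate is a generic implementation in the spirit of [23], with the block size as an assumed parameter:

```python
# Sketch: gradient-based orientation field and the DIP mask of Eq. (4.1).
import numpy as np
from scipy import ndimage

def orientation_field(img, block=16):
    """Block-wise ridge orientation in degrees [0, 180), cf. [23]."""
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    gxx = ndimage.uniform_filter(gx * gx, block)   # averaged squared gradients
    gyy = ndimage.uniform_filter(gy * gy, block)
    gxy = ndimage.uniform_filter(gx * gy, block)
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)   # gradient (ridge-normal) angle
    return np.degrees(theta + np.pi / 2) % 180     # ridge orientation

def dip_mask(o):
    """Eq. (4.1): 1 where the orientation is close to 0 or 180 degrees."""
    return ((o <= 30) | (o >= 150)).astype(np.uint8)
```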
Secondly, the intensity image (IM, see Fig. 4.12a), which predicts the location of the DIP on the original image, was obtained by using M. Here, the regions where M has zero values were set to 255 (the maximal gray level; the white region in Fig. 4.12a) when IM was generated. Because of characteristic II of the DIP, we projected the intensity of the pixels of IM row-wise, yielding the projection line L1 (see Fig. 4.12b). It can be seen that the location of the DIP lies at a local minimum of the projection line. Thirdly, since the DIP is similar to the principal line in palmprints, as mentioned in characteristic II, we applied the set of Gabor filters introduced in [24] to the preprocessed image to form a set of response maps. Here, the frequency of the
Fig. 4.12 The intensity image IM which predicts the location of the DIP, and its corresponding projection line L1. (a) IM. (b) L1
Gabor filters was set to 1/3.45, and the orientations ranged over [0° : 15° : 180°]. After that, a maximum response map R (see Fig. 4.13a) was obtained by extracting the maximum response of each pixel from the set of response maps. The corresponding region where the DIP exists can be extracted by M, marked as RM (see Fig. 4.13b). Local minima are formed when the intensity of the pixels of RM is projected row-wise, as the projection line L2 shown in Fig. 4.13c illustrates. It is obvious that the local minima of the projection line indicate the possible location of the DIP. After the extraction of the two projection lines, L1 and L2, a 1-dimensional Gaussian filter of length five was adopted to smooth them, and their values were normalized to [0, 1], as shown in Fig. 4.14a, b. Finally, they were combined into one line by a simple averaging strategy (see Fig. 4.14c).
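The maximum response map R can be sketched as follows. The frequency of 1/3.45 and the 15° orientation steps follow the text, while the kernel size and sigma are assumed parameters:

```python
# Sketch: bank of Gabor filters and the per-pixel maximum response map R.
import numpy as np
import cv2

def max_response_map(img, freq=1 / 3.45, sigma=4.0, ksize=31):
    responses = []
    for ang in range(0, 180, 15):              # orientations [0 : 15 : 180) degrees
        kern = cv2.getGaborKernel((ksize, ksize), sigma,
                                  np.deg2rad(ang), 1.0 / freq, 1.0)
        responses.append(cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern))
    return np.max(np.stack(responses), axis=0)  # maximum response per pixel
```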
4.3 Dip-Based Fingerprint Feature Extraction and Matching
(a)
43
(b)
2000 0
Projection Intensity
-2000 -4000 -6000 -8000 -10000 -12000 -14000
0
50
100
150
Row
200
250
300
350
(c) Fig. 4.13 The Results applied to Fig. 4.6c based on the similarity to the principle line in palm print. (a) Maximum response map R. (b) Region mask RM. (c) Projection line L2
Since there were several local minima ({Vᵢ | i = p₁, p₂, ..., p_N}, where p represents the position of a local minimum) in the projection line L, a criterion had to be made to pick out the correct one. It is prior knowledge that the positions of the DIP for the same finger type are similar. The lengths from the fingertip to the DIP of the five finger types were therefore estimated and taken as a threshold (T) to indicate the location of the DIP by Eq. (4.2). The threshold T was determined in two steps. Step one: we manually measured the lengths from the fingertip to the DIP of the five finger types of several persons and converted the lengths into image pixels by Eq. (4.3), where r denotes the image resolution, h is the corresponding length in pixels, and H represents the measured length (in millimeters). The measured value (h) was taken as a coarse threshold to compute the location of the DIP using Eq. (4.2). Step two: we statistically computed the lengths from the fingertip to the initial DIP of each finger on the whole database. The refined threshold was obtained by analyzing the histogram of the calculated lengths from the fingertip to the initial DIP (shown in Fig. 4.15); it was set to 180 in this Chapter.
Fig. 4.14 Projection lines after processing. (a) Smoothed projection line of L1. (b) Smoothed projection line of L2. (c) Final combined projection line L
Fig. 4.15 Histogram of length from fingertip to initial DIP on the whole database
Fig. 4.16 Illustration of extraction of DIP-based feature
The location of the DIP was finally determined by considering both the local minima of the projection line and the prior length, as illustrated in Eq. (4.2):

$$P_{DIP} = \arg\min_{p} \left| p - T \right| \qquad (4.2)$$

$$H = 25.4 \, h / r \qquad (4.3)$$
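A minimal sketch of the DIP localization of Eq. (4.2), assuming the combined projection line L and the refined threshold T = 180, might look like this:

```python
# Sketch: Eq. (4.2) — smooth the projection line, find its local minima,
# and pick the one closest to the prior length T.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmin

def locate_dip(L, T=180):
    """L: 1D combined projection line; returns the estimated DIP row."""
    Ls = gaussian_filter1d(np.asarray(L, float), sigma=1.0)  # 1-D Gaussian smoothing
    minima = argrelmin(Ls)[0]                                # candidate positions p
    return minima[np.argmin(np.abs(minima - T))]             # Eq. (4.2)
```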
Finally, the location of the DIP was taken as the base line, and a region of size 101 × 288 pixels centered at the base line was cropped. The DIP-based feature was then formed by coding the region using the competitive coding scheme introduced in [24], as shown in Fig. 4.16. This feature was extracted in this study because the DIP lines are similar to palm lines. Orientation information of the DIP lines was extracted by
the competitive coding scheme using multiple 2D Gabor filters; the filters and parameters are the same as those used to extract the projection line L2. Additionally, since the finger shape is fully imaged by the device, the finger width feature can be extracted by counting the non-zero values of the pre-processed image row by row. We designated the counts from the fingertip to the location of the DIP as the finger width feature, as illustrated in Fig. 4.17.
Fig. 4.17 Illustration of finger width extraction. (a) Finger width referred to in this Chapter (labeled by green lines). (b) Final extracted finger width feature
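The finger width extraction just described reduces to a per-row count, as in this minimal sketch (the binary ROI and the DIP row are assumed inputs):

```python
# Sketch: finger width as the per-row count of foreground pixels,
# kept from the fingertip down to the DIP row.
import numpy as np

def finger_width_feature(roi, dip_row):
    """roi: 2D binary image (fingertip at row 0); dip_row: DIP position."""
    widths = np.count_nonzero(roi, axis=1)   # non-zero count, row by row
    return widths[:dip_row]                  # fingertip to DIP
```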
4.3.2 View Selection
There are three views of fingerprint images (left-side, frontal, and right-side) for each finger, so nine matchings are needed to identify one finger and find the best match, which is time-consuming. This Chapter thus proposes a view selection strategy before matching to reduce the complexity while keeping the accuracy. Since the finger width feature can easily be extracted from each view, the mean finger width of each view was calculated and compared between the views of the gallery and probe images, and those pairs whose difference was smaller than a threshold were selected. Equation (4.4) gives the criterion of the proposed view selection strategy, where W_G and W_P represent the mean finger width for the gallery and probe images, respectively. In the end, the number of matchings (9 in total) could be reduced to about 3–5 after this view selection.
$$(i, j) = \arg \left\{ \left| W_{G_i} - W_{P_j} \right| < 30 \;\middle|\; i, j = 1, 2, 3 \right\}, \qquad i, j: \begin{cases} 1: \text{left side} \\ 2: \text{frontal} \\ 3: \text{right side} \end{cases} \qquad (4.4)$$
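A minimal sketch of the view selection rule of Eq. (4.4); the example width values are hypothetical:

```python
# Sketch: Eq. (4.4) — keep only gallery/probe view pairs whose mean
# finger widths differ by less than 30 pixels.
def select_view_pairs(gallery_widths, probe_widths, thresh=30):
    """Inputs: mean finger width per view, index 0/1/2 = left/frontal/right."""
    return [(i, j)
            for i, wg in enumerate(gallery_widths)
            for j, wp in enumerate(probe_widths)
            if abs(wg - wp) < thresh]

# Hypothetical example: typically keeps about 3-5 of the 9 possible pairs.
pairs = select_view_pairs([210, 260, 215], [205, 250, 240])
```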
4.3.3 Feature Matching
Since the competitive code based on the DIP was taken as the DIP-based feature, the angular matching method [24, 25] was adopted. This method was proposed to compare the orientation information stored in competitive codes effectively and efficiently; it calculates the angular distance by bit operations. In this study, 12 directions (labeled by the integers 0, 1, 2, ..., 11) were used when the DIP-based feature was computed. Four bits can fully represent each element; however, to realize bit operations, six bits should be used, since the maximum angular distance is six when 12 directions are used. Table 4.1 shows the bit representation of the competitive code in this Chapter. The DIP-based feature was finally represented by seven bits, with one additional bit labeling the mask. The angular distance was then defined as in Eq. (4.5), where G_mask and P_mask represent the masks of the DIP-based features G and P of the gallery and probe images, G_b^i (or P_b^i) denotes the ith bit plane of G (or P), ∩ is the AND operator, ⊗ is the bitwise exclusive OR, and M × N is the size of the feature matrices. The calculated angular distance was used as the match score in this Chapter. Since there were several view pairs between a registered finger and an input finger, the best match score, namely the minimum one, among the match scores of all view pairs was chosen as the final match score for authentication.
Table 4.1 Bit representation of competitive code

Element | Bit 1 | Bit 2 | Bit 3 | Bit 4 | Bit 5 | Bit 6
0 | 0 | 0 | 0 | 0 | 0 | 0
1 | 0 | 0 | 0 | 0 | 0 | 1
2 | 0 | 0 | 0 | 0 | 1 | 1
3 | 0 | 0 | 0 | 1 | 1 | 1
4 | 0 | 0 | 1 | 1 | 1 | 1
5 | 0 | 1 | 1 | 1 | 1 | 1
6 | 1 | 1 | 1 | 1 | 1 | 1
7 | 1 | 1 | 1 | 1 | 1 | 0
8 | 1 | 1 | 1 | 1 | 0 | 0
9 | 1 | 1 | 1 | 0 | 0 | 0
10 | 1 | 1 | 0 | 0 | 0 | 0
11 | 1 | 0 | 0 | 0 | 0 | 0
$$D(G, P) = \frac{\sum_{y=0}^{M-1} \sum_{x=0}^{N-1} \sum_{i=0}^{5} \left( G_{mask}(x, y) \cap P_{mask}(x, y) \right) \cap \left( G_b^i(x, y) \otimes P_b^i(x, y) \right)}{6 \sum_{y=0}^{M-1} \sum_{x=0}^{N-1} G_{mask}(x, y) \cap P_{mask}(x, y)} \qquad (4.5)$$
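A minimal sketch of the angular distance of Eq. (4.5), assuming the competitive codes are stored as six binary bit planes plus a validity mask:

```python
# Sketch: Eq. (4.5) — angular distance between two competitive codes
# computed with bitwise operations over the six bit planes.
import numpy as np

def angular_distance(G_bits, P_bits, G_mask, P_mask):
    """G_bits, P_bits: (6, M, N) binary bit planes; masks: (M, N) binary."""
    valid = G_mask & P_mask                      # overlap of the two masks
    diff = sum(np.count_nonzero(valid & (G_bits[i] ^ P_bits[i]))
               for i in range(6))                # mismatched bits in the overlap
    return diff / (6.0 * np.count_nonzero(valid))  # normalized to [0, 1]
```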
4.4 Experimental Results and Performance Analysis

4.4.1 Database and Remarks
The database was established using the touchless multi-view fingerprint imaging device introduced in [17]. It contains 541 fingers from both males and females aged 22 to 45, covering all five finger types. Two samples were collected in each of two sessions separated by a period of about 2 weeks. Each sample consists of three views of fingerprint images of size 576 by 768 pixels at a resolution of about 400 dpi. The following matches and experiments were conducted on the database. (1) Genuine matches: fingerprint images of the same finger were matched with each other, resulting in 3246 genuine match scores. (2) Imposter matches: the first fingerprint image of each finger in the first session was matched with the first fingerprint images of all the other fingers of the same finger type in the second session, resulting in 15,010 imposter match scores. Based on the obtained match scores, the equal error rates (EER) and the receiver operating characteristic (ROC) curves were calculated for performance evaluation. Since minutiae are the classical fingerprint features used in touch-based AFISs, and the Scale Invariant Feature Transformation (SIFT) [26] is one of the frequently used non-minutia features for fingerprint recognition with poor image quality, both were adopted for authentication to compare their recognition performance with the proposed DIP-based feature on the established database. The method introduced in [27] was employed to extract and match the SIFT feature. The method introduced in [28] was adopted to extract minutiae, and minutiae were matched using the method introduced in [17]. There are many minutiae-based fingerprint matching algorithms [6, 27, 29]; the one proposed in [27] was used in this study because it ranks 1st on DB3, the most difficult database in FVC2002, and outperforms the two best algorithms, PA15 and PA27, on the four databases of FVC2002. The minutiae match score between two fingerprints was defined as the percentage of matched minutiae among the complete set of minutiae on the two fingerprints, and the SIFT match score between two fingerprints was defined as the number of matched SIFT points. Fusion was implemented at score level. Score normalization was first applied to transform the match scores of the different matchers into a common domain [30]. The min-max (MMN) technique [30] was considered in the experiments. After normalization, the min (MIN), max (MAX), simple sum (SSUM), and weighted sum (WSUM) rules were used to combine the match scores of the individual matchers into a
single final score for the input fingerprint. The MIN and MAX rules respectively select the minimum and the maximum of the match scores of all individual matchers as the final score, whereas the SSUM rule takes the sum of the match scores as the final score [30]. The WSUM rule tests different weights from 0 to 1, with an interval of 0.1, to find the best weight for forming the final score [31].
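A minimal sketch of the score-level fusion just described; the normalization bounds and weights are assumed inputs (in the WSUM rule the weights would be searched over 0 : 0.1 : 1):

```python
# Sketch: min-max normalization followed by the four fusion rules.
import numpy as np

def min_max(scores, lo, hi):
    """MMN [30]: map raw matcher scores into a common [0, 1] domain."""
    return (np.asarray(scores, float) - lo) / (hi - lo)

def fuse(norm_scores, rule="SSUM", weights=None):
    s = np.asarray(norm_scores, float)   # one normalized score per matcher
    if rule == "MIN":
        return s.min()
    if rule == "MAX":
        return s.max()
    if rule == "SSUM":
        return s.sum()
    if rule == "WSUM":                   # weights: assumed, found by grid search
        return float(np.dot(weights, s))
    raise ValueError(rule)
```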
4.4.2 Recognition Performance Using DIP-Based Feature
To evaluate the performance of the proposed method using the DIP-based feature, the genuine and imposter match score distributions and the ROC curve are given in Fig. 4.18, and the EER was calculated. It can be seen that the match scores range from 0 to 0.4614: match scores of genuine pairs are between 0.1 and 0.3, while match scores of imposter pairs are centered around 0.41, so genuine and imposter pairs are well separated. An EER of ~1.7% was obtained when the DIP-based feature was used for recognition, which shows the effectiveness of user authentication using the DIP-based feature for multi-view touchless fingerprint images.
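For reference, the EER reported throughout this section can be computed from the genuine and imposter score lists as in this minimal sketch (scores here are angular distances, so smaller means a more genuine match):

```python
# Sketch: EER from genuine and imposter score lists (smaller = better match).
import numpy as np

def eer(genuine, imposter):
    g, i = np.asarray(genuine, float), np.asarray(imposter, float)
    ts = np.unique(np.concatenate([g, i]))           # candidate thresholds
    far = np.array([(i <= t).mean() for t in ts])    # accepted imposters
    frr = np.array([(g > t).mean() for t in ts])     # rejected genuines
    k = np.argmin(np.abs(far - frr))                 # crossing point
    return 0.5 * (far[k] + frr[k])
```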
4.4.3 Effectiveness Validation of the Proposed View Selection Scheme
Since three views of fingerprint images are captured at one time for one finger, the recognition results differ depending on which view pairs are used and which fusion strategies are adopted. This study took DIP-based feature matching as an example to validate the effectiveness of the proposed view selection scheme. Figure 4.19 shows the ROC curves obtained by matching single-view fingerprint images, the ROC curve obtained by matching multi-view fingerprint images after view selection, and the ROC curves based on four fusion strategies using the DIP-based feature. It can be seen that the matching result after view selection outperforms both single-view matching and matching using the simple fusion strategies (all four fusion rules mentioned). This is because the best matching candidates were selected from the overall nine matchings by the view selection process, whereas single-view matching and single-view fusion could not guarantee the best match, as the example in Table 4.2 shows. The best match score (0.2641, labeled red in Table 4.2) was not among the single-view pairs (labeled blue in Table 4.2) but was among the view candidates retained by the view selection process. Thus, the best match score (0.2641) could be obtained by minimizing over the results after view selection, whereas a better but not the best match score (0.2812) was obtained after score-level fusion using the MIN rule.
Fig. 4.18 Verification results using the DIP-based feature. (a) Genuine and imposter distributions. (b) The ROC curve using the DIP-based feature (EER = 1.6496%)
4.4.4 Comparison of Recognition Performance Based on Different Fingerprint Features
As mentioned in subsection 4.4.1, minutiae and SIFT features were both considered for human authentication and implemented on the database to investigate their performance for touchless fingerprint images. First, examples of matching results using the DIP-based feature, minutiae, and the SIFT feature for one genuine pair and one imposter pair are shown in Fig. 4.20. It can be seen that there were only a few
Fig. 4.19 The ROC curves based on different-view fingerprint images and various fusion strategies. (a) The ROC curves of single-view fingerprint images compared with the ROC curve of multi-view fingerprint images after view selection (EER = 1.6496% after view selection, 4.5712% for frontal images, 4.8818% for left-side images, 3.9483% for right-side images). (b) The ROC curves based on four kinds of fusion strategies (EER = 5.485% with the MAX rule, 3.0512% with the MIN rule, 3.2369% with the SSUM rule, 3.2066% with the WSUM rule)
matched pairs for minutiae matching (7 for the genuine pair and 0 for the imposter pair). There were 315 matched SIFT feature points for the genuine fingerprint image pair and 7 for the imposter pair. The angular distances of the DIP-based feature matching were 0.214 and 0.417 for the genuine and imposter pairs, respectively. The ROC curves and EERs are given in Fig. 4.21. The best EER of minutiae-based matching was ~7.7%, while the best EER of SIFT feature-based matching was ~3%; both are larger than the best EER obtained by matching the DIP-based feature (see Fig. 4.19a). It can be concluded that the proposed DIP-based
Table 4.2 Scores of matching DIP-based feature extracted from Gallery images (left-side, frontal, right-side) and Probe images (left-side, frontal, right-side)
feature is more effective than both minutiae and the SIFT feature when touchless multi-view fingerprint images are matched. Generally, non-minutia features, which are less sensitive to the clarity of ridges, are more suitable for touchless fingerprint recognition when only one feature is used for authentication. Figure 4.22 provides the ROC curves and EERs of the fusion results at the match score level. It can be seen that performance improved after fusion, and the DIP-based feature plays an important role in improving recognition accuracy. An EER as low as ~0.5% was achieved when all three features were fused.
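The four score-level fusion rules referred to throughout this section (MAX, MIN, simple sum, and weighted sum) are standard. The following sketch shows one plausible formulation; the function name and signature are illustrative, not taken from the book.

```python
import numpy as np

def fuse_scores(scores, rule="wsum", weights=None):
    """Combine per-view (or per-feature) match scores at the score level."""
    s = np.asarray(scores, dtype=float)
    if rule == "max":
        return float(s.max())
    if rule == "min":
        return float(s.min())
    if rule == "ssum":                      # simple sum (equal weights)
        return float(s.sum())
    if rule == "wsum":                      # weighted sum
        w = np.asarray(weights, dtype=float)
        return float(np.dot(w, s) / w.sum())
    raise ValueError(f"unknown fusion rule: {rule}")
```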
4.5 Summary
This study presented an end-to-end user authentication system using the DIP-based feature for touchless fingerprint images. A multi-view strategy was used in the capturing device to alleviate the performance degradation caused by single-view imaging. The DIP-based feature, which can be robustly extracted from touchless fingerprint images with low ridge-valley contrast, was presented and used for authentication. The corresponding feature extraction and matching algorithms were also introduced in this chapter. A view selection scheme was adopted before matching to reduce computational complexity when multi-view images of one finger are matched. Experiments were implemented on the touchless multi-view fingerprint image database established by us, with 541 fingers acquired in two sessions separated by about 2 weeks (two samples each). Classical and frequently used fingerprint features (e.g., minutiae and the SIFT feature), as well as the newly proposed DIP-based feature,
Fig. 4.20 Example matching results by minutiae, SIFT, and DIP-based features for a genuine fingerprint image pair and an imposter pair. (a) Original genuine fingerprint image pair (right ring finger). (b) Original imposter fingerprint image pair (right ring finger vs left ring finger of the same person). (c) Minutiae matching result of (a). (d) Minutiae matching result of (b). (e) SIFT feature matching result of (a). (f) SIFT feature matching result of (b). (g) DIP-based feature comparison of (a) (angular distance is 0.214). (h) DIP-based feature comparison of (b) (angular distance is 0.417)
Fig. 4.21 The ROC curves based on different features, using both single-view images and fusion strategies. (a) The ROC curves for single-view fingerprint images when only minutiae are used (EER = 10.2422% for frontal, 13.5896% for left-side, and 15.1503% for right-side images). (b) The ROC curves based on four kinds of fusion strategies when only minutiae are used (EER = 9.4083% with the MAX rule, 13.1486% with the MIN rule, 7.7453% with the SSUM rule, and 7.7149% with the WSUM rule). (c) The ROC curves for single-view fingerprint images when only the SIFT feature is used. (d) The ROC curves based on four kinds of fusion strategies when only the SIFT feature is used
Fig. 4.21 (continued) (c) SIFT-based matching for single views (EER = 5.5291% for frontal, 8.8122% for left-side, and 8.1662% for right-side images). (d) SIFT-based matching fusing three views (EER = 6.3577% with the MAX rule, 3.4105% with the MIN rule, 3.0761% with the SSUM rule, and 4.3879% with the WSUM rule)
were used and evaluated on the established database. It was found that: (i) DIP-based fingerprint recognition outperforms the other compared features when only one feature is used; (ii) the view selection strategy is more effective than other commonly used fusion strategies; (iii) it is hard to obtain high recognition accuracy with traditional minutiae-based systems on touchless fingerprint images (the EER was around 10%), but it is effective to combine non-minutia and minutiae features in touchless fingerprint recognition systems (a best EER of ~0.5% was achieved). The experimental results presented in this chapter are promising. It is believed that the performance of touchless fingerprint recognition systems can be further improved, given that the area of touchless fingerprints is generally
Fig. 4.22 The ROC curves of different feature fusions, all fused by the WSUM rule (EER = 0.5339% for Minutiae+SIFT+DIP, 0.94807% for Minutiae+DIP, 0.88609% for Minutiae+SIFT, and 0.59295% for SIFT+DIP)
larger than that of touch-based fingerprints, which enables us to extract more distinctive information from touchless fingerprints than from touch-based ones. There are also broad prospects for proposing effective minutiae extraction and matching methods specific to touchless fingerprint images, since minutiae are the most widely used feature in current AFISs.
Chapter 5
Applications of 3D Fingerprints
Abstract Human fingers are 3D objects. More information is available when three-dimensional (3D) fingerprints are used instead of two-dimensional (2D) fingerprints. Thus, this chapter first collects 3D finger point cloud data by the Structured-light Illumination (SLI) method. Additional features from 3D fingerprint images are then studied and extracted, and the applications of these features are finally discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%, and it is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is realized by combining this feature with 2D features for fingerprint recognition, which indicates the prospect of 3D fingerprint recognition. Keywords Distinctive 3D shape ridge feature · 3D fingerprint recognition · Fingerprint coarse alignment
5.1 Introduction
As one of the most widely used biometrics, fingerprints have been investigated for more than a century [1]. Meanwhile, with the rapid development of fingerprint acquisition devices and the advent of advanced fingerprint recognition algorithms, effective Automated Fingerprint Recognition Systems (AFRSs) are available in the market. However, they are almost all based on 2D fingerprint features, even though human fingers are 3D objects. Distortions and deformations are introduced, and 3D information is lost, when 2D fingerprint images are used, which degrades the performance of AFRSs. 3D fingerprints have come into researchers' view in recent years with the development of acquisition technology [2–13]. Most of the work focused on the acquisition and preprocessing of 3D fingerprints, or at most extended two-dimensional
fingerprint features into 3D space without reporting recognition results [2–10]. Only in 2015 did researchers from the Biometric Research Centre, the Hong Kong Polytechnic University [10, 13] begin to investigate the utility of 3D fingerprint features and report experimental results of user authentication using the acquired biometric information. In [13], new low-level fingerprint features were proposed and used as additional features for fingerprint recognition. In [10], the authors reported the result of 3D minutiae matching, as well as the matching result of 3D curvature features. In both of those studies, rather high Equal Error Rates (EERs) (from 13% to 35%) were obtained when features from 3D fingerprint images were used for recognition. The EER was not low either for 3D minutiae matching (around 10% when unknown users were added). There may be two reasons why those features are not so effective. One is that both of the mentioned 3D fingerprint images were reconstructed by 3D reconstruction techniques, and the accuracy of the 3D finger shape is degraded by the reconstruction algorithms. The other is that the distinctiveness of features extracted from the 3D finger shape is lower than that of the traditional three levels of features on 2D fingerprint images. However, the EER was greatly improved when those features were combined with traditional 2D features extracted from 2D fingerprint images for matching. Thus, there is good reason to expect higher accuracy when 3D fingerprint images are used for recognition. This chapter is motivated by analyzing the advantages and disadvantages of current 3D fingerprint recognition techniques. The contributions of this chapter include two parts: (i) more accurate 3D finger shape was obtained by using the Structured-light Illumination (SLI) method, which is widely used for 3D imaging because of its high accuracy, speed, and stability; (ii) features on 3D fingerprint images were studied thoroughly and in detail. By analyzing their distinctiveness, 3D fingerprint features can be applied in different ways. It was found that: (1) an EER of ~15% can be achieved if the whole 3D finger point cloud is taken as the 3D shape feature for fingerprint recognition; however, this feature is very effective for aligning two fingerprints in a short time; (2) the proposed distinctive 3D shape ridge feature is suitable for assisting fingerprint recognition and is also helpful for removing false core points. A fusion strategy is employed to combine 2D and 3D fingerprint matching results to assess how much recognition accuracy improves when 3D fingerprint features are included. An EER of ~8.3% can be obtained when only the distinctive 3D shape ridge feature is used for fingerprint recognition, while an EER of ~1.3% is achieved by combining this 3D feature with 2D fingerprint features. The organization of this chapter is as follows. In Sect. 5.2, the acquisition and preprocessing of 3D finger point cloud data are introduced. 3D fingerprint features are investigated in Sect. 5.3, together with the corresponding feature extraction and matching algorithms. The applications and experimental results are shown in Sect. 5.4. Section 5.5 finally concludes this work and suggests future research.
5.2 3D Finger Point Cloud Data Acquisition and Preprocessing
Currently, there are mainly two ways to generate 3D fingerprints: 3D reconstruction techniques and structured-light scanning [6, 14, 15]. Generally speaking, 3D reconstruction has the advantage of low cost but the disadvantage of low accuracy, because it is difficult to find and match corresponding point pairs in two or more images [10, 16]. Structured-light imaging offers high accuracy at a moderate cost [6, 14, 15]. Considering the accuracy requirements of biometric authentication, we chose structured-light scanning to acquire the finger depth information.

Structured-light imaging is widely used in 3D imaging for its high accuracy, speed, and stability. The structure diagram of the collection device is shown in Fig. 5.1. A projector casts 13 structured-light stripes onto the finger surface, the structured light is modulated by the surface shape, and the modulated stripes are captured by a CCD camera at a constant distance from the projector. The distance from the measured surface to the reference plane can then be calculated from the modulated stripe images and the geometric relationship between the measured surface, the projector, and the CCD camera.

Fig. 5.1 Schematic diagram of the device used to capture 3D point cloud data of human fingers

Figure 5.2 illustrates the principle of the adopted structured-light imaging technique. In Fig. 5.2, the relative height of point D at spatial position (x, y) on the 3D object surface can be calculated by Eq. (5.1) [15], where the height of the reference surface is defined as 0, $P_0$ is the wavelength of the projected light on the reference surface, $Q_0$ is the projecting angle, $Q_n$ is the angle between the reference surface and the line passing through the current point and the CCD center, and $\phi_{CD}$ is the phase difference between points C and D:

$$h(x, y) = \overline{BD} = \frac{P_0 \tan Q_0}{2\pi \left(1 + \tan Q_0 / \tan Q_n\right)} \, \phi_{CD} \qquad (5.1)$$

Since the phase of point D is equal to the phase of point A on the reference surface, $\phi_{CD}$ can be calculated as Eq. (5.2):

$$\phi_{CD} = \phi_{CA} = \phi_{OC} - \phi_{OA} \qquad (5.2)$$

Fig. 5.2 The principle of structured-light imaging

Fig. 5.3 An example of 3D point cloud data of a human finger shown by (a) 3D display software, (b) MATLAB display tool

The depth information of the 3D object surface can be retrieved by using Eq. (5.1) together with the phase shifting and unwrapping technique in [15]; interested readers can refer to [14, 15] for more details. With this processing, the relative height h(x, y) of each point can be calculated, and the range data of the finger surface can then be obtained. Figure 5.3 shows an example of the finger point cloud data we obtained. In the captured data, the size of the 3D image is 640 × 480 at about 380 dpi resolution, i.e., 307,200 cloud points in total represent the 3D fingerprint information.
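For illustration, the phase-to-height conversion of Eq. (5.1) can be written as a few lines of code. This is a minimal sketch assuming an already-unwrapped phase-difference map; the function name and arguments are illustrative.

```python
import numpy as np

def phase_to_height(phi_cd, P0, Q0, Qn):
    """Eq. (5.1): convert the unwrapped phase difference phi_CD (radians)
    to relative height above the reference plane.
    P0: stripe wavelength on the reference surface;
    Q0: projecting angle (radians);
    Qn: angle at the CCD center (radians), scalar or per-pixel array."""
    return (P0 * np.tan(Q0) * phi_cd) / (2.0 * np.pi * (1.0 + np.tan(Q0) / np.tan(Qn)))
```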
Fig. 5.4 Preprocessing procedure. (a) Original 3D finger point cloud data, (b) Extracted ROI in 2D plane, (c) Pre-processed 3D finger point cloud data
Since noise is inevitably introduced when the data are collected (e.g., falsely estimated depth near the edge of the Region of Interest (ROI), missing or misestimated depth inside the ROI), it is necessary to pre-process the original point cloud data. In this chapter, the point cloud data were first projected onto a 2D plane, i.e., a 3D point (x, y, z) corresponds to a point (row, column, and intensity) in an image, as shown in Fig. 5.4a. Then, morphological operators (e.g., interpolation of inner missing points, edge erosion) provided by the MATLAB toolbox were adopted to remove edge points and fill in the inner points. As shown in Fig. 5.4b, inner missing points and edge points are removed compared with Fig. 5.4a. The area with the largest size in the image was chosen as the ROI of the finger. Then, Gaussian smoothing with a window size of 5 × 5 pixels was used to process the missing or misestimated depth points in the ROI. The simple MAX-MIN rule [40] was finally employed to normalize the depth values into [0, 1]; the preprocessed 3D point cloud data of a fingerprint are shown in Fig. 5.4c. The normalized depth value is calculated as $W_{norm} = (W - \min(W)) / (\max(W) - \min(W))$, where W is the set of depth values.
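A compact sketch of this preprocessing step is given below, assuming the depth map has already been cropped to the ROI. The Gaussian sigma is an assumption chosen to roughly match the 5 × 5 window; the book specifies only the window size.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_depth(depth_roi):
    """Smooth missing/misestimated depth values, then MAX-MIN normalize
    the depth map to [0, 1] as described in the text."""
    smoothed = gaussian_filter(depth_roi, sigma=1.0)   # ~5 x 5 window
    w_min, w_max = smoothed.min(), smoothed.max()
    return (smoothed - w_min) / (w_max - w_min)        # MAX-MIN rule
```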
5.3 Investigation of 3D Fingerprint Features
After preprocessing, stable and unique features are expected to be extracted for further application. Through observation (Fig. 5.5), we found that the 3D depth information reflects the overall structure of the human finger, although there are many invalid points in the whole 3D finger shape due to the structure of the finger. As shown in Fig. 5.5b, the horizontal sectional profiles are almost parabola-like curves at different sections (the real data are labeled by green dashed lines and the fitted data by red solid lines in Fig. 5.5b). The vertical profiles were also fitted by parabolic equations (labeled in red in Fig. 5.5c). It can be seen that most of the
64
5 Applications of 3D Fingerprints
Fig. 5.5 A fingerprint image in 3D space. (a) Fingerprint image with texture in 3D space, (b) 3D finger point cloud data and its horizontal sectional profiles, (c) 3D finger point cloud data and its vertical sectional profiles, (d) Distinctive 3D shape ridge feature defined in this chapter
lines can be well fitted by a parabola; however, the closer a line is to the highest point, the less the real line resembles a parabola. Thus, we extracted this line and defined it as the distinctive 3D shape ridge feature. The detailed method to extract the distinctive 3D shape ridge feature is as follows: (i) fit each horizontal sectional profile by a parabolic equation; (ii) find the maximum value of each fitted line; (iii) connect all maximum values to form a vertical line, namely the distinctive 3D shape ridge feature. Figure 5.5d shows the distinctive 3D shape ridge feature extracted using the proposed method. The extraction can be formulated as Eq. (5.3), where x is the row-coordinate variable of the image, y is the column-coordinate variable, $N_{row} \times M_{column}$ is the size of the image, and $a_x$, $b_x$, and $c_x$ are the coefficients of the parabola fitted to row x:

$$F = \left\{ f_{x,y} \;\middle|\; x = 1, 2, \ldots, N_{row} \right\}, \quad f_{x,y} = f\!\left(x, \frac{4 a_x c_x - b_x^2}{4 a_x}\right),$$
$$f(x, y) = a_x y^2 + b_x y + c_x, \quad y = 1, 2, \ldots, M_{column} \qquad (5.3)$$

In addition, we introduce here the matching method used in this chapter, since it is simple and classical. Intuitively, the iterative closest point (ICP) algorithm is suitable for solving such a matching problem; the ICP method [14] is widely used in many 2D image and 3D object recognition systems. In this chapter, we slightly modified the ICP method to measure the distances between two sets of points. The flowchart of the algorithm is given in Fig. 5.6.
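Returning to Eq. (5.3), the per-row parabola fit is straightforward to prototype. The sketch below uses a least-squares polynomial fit per horizontal profile and keeps the fitted vertex (maximum) of each row; it is an illustration of the procedure, not the authors' code.

```python
import numpy as np

def distinctive_ridge_feature(depth):
    """Fit a parabola a*y^2 + b*y + c to every horizontal sectional
    profile of the depth map and connect the fitted maxima into one
    vertical line (the distinctive 3D shape ridge feature)."""
    n_rows, n_cols = depth.shape
    y = np.arange(n_cols)
    feature = np.empty(n_rows)
    for x in range(n_rows):
        a, b, c = np.polyfit(y, depth[x], 2)        # row-wise parabola fit
        feature[x] = (4 * a * c - b * b) / (4 * a)  # vertex (maximum) value
    return feature
```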
5.4 Applications and Experimental Results

5.4.1 3D Fingerprint Applications and the Experimental Database
As described in detail previously [17, 18], a minutia feature in a 2D fingerprint image can be extended to 3D space (e.g., in 2D: location {x, y} and orientation θ; in 3D: {x, y, z, θ, φ}, where x, y, and z are the spatial coordinates and θ and φ give the orientation of the ridge). Thus, fingerprint recognition with higher security can be achieved by matching features in 3D space (e.g., 3D minutia matching [10, 19]). This chapter tries to discover additional 3D fingerprint features and their applications to fingerprint recognition. Experiments were implemented on our own established 3D fingerprint database. The database contains 440 samples from 22 volunteers, including 12 males and 10 females between 20 and 40 years old. The 3D fingerprint samples were collected in two separate sessions; in each session, each finger of a person was taken as one subject and was collected once, i.e., the database contains 440 samples from 220 fingers, with 2 images per finger. The average time interval between the two sessions was about 2 weeks. Note that the data were collected under the guidance of the collector to ensure that the frontal view of each finger was captured. Three cases were summarized and studied in this chapter, as shown in the following subsections.
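As a concrete illustration of the 2D-to-3D minutia extension mentioned above, the two representations can be encoded as simple records; these type names are hypothetical, not from the book.

```python
from dataclasses import dataclass

@dataclass
class Minutia2D:
    x: float          # location in the image plane
    y: float
    theta: float      # ridge direction

@dataclass
class Minutia3D:
    x: float          # spatial coordinates on the finger surface
    y: float
    z: float
    theta: float      # ridge orientation in 3D requires two angles
    phi: float
```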
5.4.2 Case 1: Coarse Alignment by 3D Finger Point Cloud Data
Coarse alignment is a crucial step in fingerprint recognition, especially for fingerprint matching. In 2D fingerprint images (Fig. 5.7a), the finger skin is forced onto a plane.
Fig. 5.6 The flowchart of the ICP algorithm
Level-1 fingerprint features (e.g., the orientation map or core point) are usually extracted first for coarse alignment in fingerprint recognition [20–24]. Obviously, extracting these features takes time, and the alignment accuracy is also affected by the quality of the feature extraction. In 3D fingerprint images, by contrast, as shown in Fig. 5.7b, two images can be aligned quickly by aligning their corresponding finger shapes, because the center part of the human finger is higher than the side parts in 3D space when viewed frontally. By making
Fig. 5.7 Comparison of 2D fingerprint image and 3D fingerprint image in fingerprint coarse alignment. (a) Two 2D images of the same finger, (b) Two 3D images of the same finger. (Cited from our earlier work in [17])
full use of this characteristic, 3D fingerprint images can be quickly coarse-aligned before matching in fingerprint recognition. Figure 5.8 shows an example of a coarse alignment result obtained from the corresponding 3D finger point cloud data in our database. We also tried to match this feature directly to assess its discriminative power. Being 3D point cloud data, the feature was matched by the 3D ICP method introduced in Sect. 5.3. The mean distance between matched pairs (Mdist) and the percentage of matched points (Pm = number of matched pairs / total number of pairs to be matched) are taken as the match scores. The EERs were obtained from 220 genuine scores and 48,180 imposter scores (generated from 220 fingers, 2 images of each finger). Figure 5.9 shows the Receiver Operating Characteristic curves (ROCs) of the different match score indexes evaluated on the established database.
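The two match-score indexes, Mdist and Pm, can be computed directly from the ICP correspondences. In the sketch below, dist_threshold is an assumed acceptance radius for declaring a pair matched; the book does not specify this value.

```python
import numpy as np

def match_scores(nn_distances, total_pairs, dist_threshold):
    """nn_distances: nearest-neighbor distances from ICP correspondences.
    Returns (Mdist, Pm): mean distance between matched pairs and the
    fraction of matched points."""
    matched = nn_distances[nn_distances < dist_threshold]
    m_dist = matched.mean() if matched.size else np.inf
    p_m = matched.size / total_pairs
    return m_dist, p_m
```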
Fig. 5.8 An example of coarse alignment results in our database. (a) Texture and 3D finger point cloud data of a template image and a test image, (b) Coarse alignment result of the template image and the test image ((b) is cited from our earlier work in [17])
Fig. 5.9 ROCs by matching the 3D finger point cloud data under different match score indexes (EER = 16.7373% for the percentage of matched points, 16.8503% for the mean distance between matched pairs, and 14.9718% for the fusion of percentage and mean distance)
It can be seen from the results that the EERs are all very high (best: ~15%), which shows that the identifiability of this feature is not very high. However, there is no doubt that this feature can be used for coarse alignment.
5.4.3 Case 2: Core Point Detection Under the Guidance of Distinctive 3D Shape Ridge Feature
By observing human fingers, it is interesting to find that core points in fingerprint images are almost always located at the center part of the finger, where the curvature is highest. Thus, the core point is more likely to be located near the distinctive 3D shape ridge feature defined in this chapter, and false core points that are far away from this feature can be easily removed. Figure 5.10a shows an example of core point detection results obtained with the Poincare Index introduced in [1]; many false core points are detected by this method. We then used the extracted distinctive 3D shape ridge feature (Fig. 5.10b) to guide the location of the true core point. The final result, given in Fig. 5.10c, shows how the distinctive 3D shape ridge feature helps in detecting the true core point. The statistical result of true core point detection from the original core point set detected by the Poincare Index on our established database with 220 samples (one image selected per finger) is shown in Fig. 5.11. It can be seen that most of the false core points are removed from the original core point set, even though the true core point is not always the only point detected, which fully demonstrates the effectiveness of using the distinctive 3D shape ridge feature for true core point detection.
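The false-core-point removal just described amounts to a distance test against the extracted ridge feature line. A minimal sketch follows; max_dist is an assumed threshold, not a value given in the chapter.

```python
import numpy as np

def filter_core_points(candidates, ridge_line, max_dist=20.0):
    """Keep only core-point candidates lying close to the distinctive
    3D shape ridge feature line.
    candidates: (K, 2) array of (row, col) detections (e.g., Poincare Index);
    ridge_line: (N, 2) array of points on the extracted ridge feature."""
    kept = [c for c in candidates
            if np.min(np.linalg.norm(ridge_line - c, axis=1)) <= max_dist]
    return np.asarray(kept)
```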
Fig. 5.10 An example of core point detection. (a) Original core point set detected by Poincare Index (red points), (b) The extracted distinctive 3D shape ridge feature, (c) Location of the true core point under the guidance of the distinctive 3D shape ridge feature (red circled). ((a) and (c) are cited from our earlier work in [17])
Fig. 5.11 The statistical result of true core point detection from the original core point set detected by Poincare Index on the established database with 220 samples (x-axis: sample number; y-axis: detected core point number; the two curves show the original core point set detected by Poincare Index and the true core point detection result)
5.4.4 Case 3: User Authentication Using Distinctive 3D Shape Ridge Feature
As mentioned in Sect. 5.3, the defined distinctive 3D shape ridge feature is in fact a representative vertical sectional profile of the 3D finger point cloud data. The dimension of this feature is thus reduced from 3 to 2, and it can be matched by 2D ICP. In this study, the Mdist and Pm indexes mentioned in subsection 5.4.2 are again taken as the match scores. Figure 5.12 shows the ROCs of the different match score indexes evaluated on the established database. From the results, we can see that the mean distance between matched pairs is a better index than the percentage of matched points. A best EER of around 8.3% can be obtained when the distinctive 3D shape ridge feature is matched with the simple ICP algorithm, which demonstrates that this feature helps to distinguish different fingers even though it is not as accurate as higher-level fingerprint features. Since 3D fingerprints provide both 2D fingerprint images and 3D finger point cloud data of the same finger simultaneously, we studied whether improved performance can be achieved by combining 2D and 3D fingerprint features. Here, for the 2D fingerprint feature, the Distal Interphalangeal Crease based (DIP-based) feature was selected due to its effectiveness on our database compared with minutiae and Scale Invariant Feature Transformation (SIFT) features [25]. The angular distance was used as the match score (MS2D). The DIP is defined as the only permanent flexion crease located between the medial and distal segments of each finger except the thumb (between the proximal and distal segments for the thumb), and the DIP-based feature was formed by coding this region using the competitive coding scheme introduced in [26]. The detailed DIP feature extraction and matching algorithms can be found in [25], which was attached as supporting information.
Fig. 5.12 ROCs by matching the distinctive 3D shape ridge feature under different match score indexes (EER = 10% for the percentage of matched points, 9.7175% for the mean distance between matched pairs, and 8.3051% for the fusion of percentage and mean distance)
Meanwhile, the mean distance between matched pairs obtained by matching the defined distinctive 3D shape ridge feature was taken as the 3D match score (MS3D). A simple adaptive weighted sum rule is used to combine the 2D and 3D match scores. The combined score can be expressed as:

$$MS_{2D+3D} = w \cdot MS_{2D} + (1 - w) \cdot MS_{3D}, \quad w \in [0, 1] \qquad (5.4)$$

The weight w is adaptively tuned at a step length of 0.01 to provide the best verification results. Figure 5.13 shows the ROCs achieved by matching the DIP-based feature and the distinctive 3D shape ridge feature separately, as well as their combination. Notably, the DIP-based feature clearly outperforms the distinctive 3D shape ridge feature in terms of accuracy. However, the best result is achieved when combining the two features, where an EER of ~1.3% is reached. This experiment fully demonstrates that higher accuracy can be achieved by adding the distinctive 3D shape ridge feature. Thus, we conclude that this feature can, at least, be taken as an additional feature to the current fingerprint feature set for user authentication.
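The adaptive weight search of Eq. (5.4) is a simple grid search. The sketch below assumes an external eer_fn helper that computes an EER from fused genuine and imposter score arrays; both the helper and the array layout are illustrative assumptions.

```python
import numpy as np

def fuse_2d_3d(ms_2d, ms_3d, w):
    """Weighted sum rule of Eq. (5.4)."""
    return w * ms_2d + (1.0 - w) * ms_3d

def tune_weight(genuine, imposter, eer_fn, step=0.01):
    """Grid-search w in [0, 1] at the chapter's step length of 0.01.
    genuine, imposter: (N, 2) arrays of (MS2D, MS3D) score pairs."""
    best_w, best_eer = 0.0, np.inf
    for w in np.arange(0.0, 1.0 + step, step):
        eer = eer_fn(fuse_2d_3d(genuine[:, 0], genuine[:, 1], w),
                     fuse_2d_3d(imposter[:, 0], imposter[:, 1], w))
        if eer < best_eer:
            best_w, best_eer = w, eer
    return best_w, best_eer
```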
Fig. 5.13 ROCs by matching 2D, 3D, and 2D + 3D fingerprint features mentioned in this chapter (EER = 2.2846% for the 2D DIP-based feature, 9.7175% for the 3D distinctive 3D shape ridge feature, and 1.2672% for the 2D + 3D fusion result)

Table 5.1 Comparison of matching performance using the additional features extracted from 3D fingerprint images and fusion results with the DIP feature extracted from 2D images

Experiments                                                                   Equal error rate
Curve-skeleton mentioned in [20]                                              13.87%
3D curvature mentioned in [21]                                                11.95%
3D finger point cloud data used in this chapter                               16.85%
Distinctive 3D shape ridge feature used in this chapter                       9.72%
DIP-based feature + curve-skeleton mentioned in [20]                          1.88%
DIP-based feature + 3D curvature mentioned in [21]                            1.65%
DIP-based feature + 3D finger point cloud data used in this chapter           2.96%
DIP-based feature + distinctive 3D shape ridge feature used in this chapter   1.27%

5.4.5 Comparisons with the State-of-the-Art 3D Fingerprint Matching Performance
Due to the lack of public 3D fingerprint databases, few studies have reported matching performance using 3D fingerprint images. Currently, the state-of-the-art 3D fingerprint recognition results are those given in [10, 13]. Note that different features, as well as different feature extraction and matching methods, were adopted in those papers, and their results were obtained on their own established databases. In this chapter, we summarized and compared the matching results using the different features extracted from our own established 3D fingerprint database. The fusion results of these additional features with the DIP-based feature extracted from 2D fingerprint images are also given. Table 5.1 shows the EERs obtained by matching the features mentioned in [10, 13] and in this chapter.
From the results, it can be seen that the distinctiveness of all these features is poor; they are not suitable for identifying a person on their own. However, the accuracy is greatly improved when they are combined with 2D fingerprint features. Thus, there is no doubt that improved recognition accuracy can be achieved when 2D fingerprint features are combined with these additional features extracted from 3D fingerprint images. From Table 5.1, it can also be seen that matching the distinctive 3D shape ridge feature outperforms the other features used in [10, 13], which further demonstrates the usefulness of the proposed distinctive 3D shape ridge feature.
5.5 Summary
This chapter investigated possible applications of 3D fingerprints. Thanks to the availability of 3D fingerprint images, more features can be extracted. The 3D finger shape feature and the newly defined distinctive 3D shape ridge feature were studied and extracted, and the applications of these features were discussed. Results showed that coarse alignment can be easily achieved when 3D finger point cloud data are available, and an EER of ~8.3% can be obtained when the distinctive 3D shape ridge feature is used for personal authentication. This feature was also found to be helpful for removing false core points. Furthermore, a promising EER was realized by combining this feature with 2D features for fingerprint recognition, which indicates the prospect of 3D fingerprint recognition. In this chapter, only simple feature extraction and matching methods were used; we believe that higher accuracy can be achieved with better methods. Thus, discovering more effective 3D feature extraction and matching methods (e.g., the 3D Gabor features, SSLBP features, improved SVM classifiers, sparse representation schemes, and deep learning schemes introduced in [27–37]) is our further work. Meanwhile, our future work includes exploring the relationship between different levels of fingerprint features and proposing more powerful fusion strategies. Information security is of great importance in the current electronic age. 3D fingerprints provide more complicated features and increase the dimensionality of 2D fingerprint features, which makes them more suitable for embedding into schemes that prevent attacks on current devices (e.g., 3D printers, multi-cloud servers) [38–43]. These will be our future directions for extending 3D fingerprint applications.
References

1. Maltoni, D., et al.: Handbook of Fingerprint Recognition. Springer, New York (2009)
2. Parziale, G., Diaz-Santana, E., Hauke, R.: The surround imager tm: A multi-camera touchless device to acquire 3D rolled-equivalent fingerprints. International Conference on Biometrics. Springer, Berlin/Heidelberg (2006)
3. Fatehpuria, A., Lau, D.L., Hassebrook, L.G.: Acquiring a 2D rolled equivalent fingerprint image from a non-contact 3D finger scan. In: Biometric Technology for Human Identification III, vol. 6202. International Society for Optics and Photonics (2006)
4. Xie, W., Song, Z., Zhang, X.: A novel photometric method for real-time 3D reconstruction of fingerprint. International Symposium on Visual Computing. Springer, Berlin/Heidelberg (2010)
5. Wang, Y., Lau, D.L., Hassebrook, L.G.: Fit-sphere unwrapping and performance analysis of 3D fingerprints. Appl. Opt. 49(4), 592–600 (2010)
6. Wang, Y., Hassebrook, L.G., Lau, D.L.: Data acquisition and processing of 3-D fingerprints. IEEE Trans. Inf. Forensics Secur. 5(4), 750–760 (2010)
7. Troy, M., et al.: Non-contact 3D fingerprint scanner using structured light illumination. In: Emerging Digital Micromirror Device Based Systems and Applications III, vol. 7932. International Society for Optics and Photonics (2011)
8. TBS.: http://www.tbs-biometrics.com. Accessed Nov 2013
9. FlashScan.: http://www.FlashScan3D.co. Accessed Nov 2013
10. Kumar, A., Kwong, C.: Towards contactless, low-cost and accurate 3D fingerprint identification. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 37(3), 681–696 (2013)
11. Pang, X., Song, Z., Xie, W.: Extracting valley-ridge lines from point-cloud-based 3D fingerprint models. IEEE Comput. Graph. Appl. 33(4), 73–81 (2012)
12. Huang, S., et al.: 3D fingerprint imaging system based on full-field fringe projection profilometry. Opt. Lasers Eng. 52, 123–130 (2014)
13. Liu, F., Zhang, D., Shen, L.: Study on novel curvature features for 3D fingerprint recognition. Neurocomputing 168, 599–608 (2015)
14. Zhang, D., et al.: Robust palmprint verification using 2D and 3D features. Pattern Recogn. 43(1), 358–368 (2010)
15. Srinivasan, V., Liu, H.-C., Halioua, M.: Automated phase-measuring profilometry of 3-D diffuse objects. Appl. Opt. 23(18), 3105–3108 (1984)
16. Liu, F., Zhang, D.: 3D fingerprint reconstruction system using feature correspondences and prior estimated finger model. Pattern Recogn. 47(1), 178–193 (2014)
17. Zhang, D., et al.: 3D Biometrics, 1st edn, pp. 171–230. Springer, New York (2013)
18. Liu, F.: New Generation of Automated Fingerprint Recognition System. Dissertation, The Hong Kong Polytechnic University (2014)
19. Parziale, G., Niel, A.: A fingerprint matching using minutiae triangulation. International Conference on Biometric Authentication. Springer, Berlin/Heidelberg (2004)
20. Liu, L.-M.: Fingerprint orientation alignment and similarity measurement. Imaging Sci. J. 55(2), 114–125 (2007)
21. Lindoso, A., et al.: Correlation-based fingerprint matching with orientation field alignment. International Conference on Biometrics. Springer, Berlin/Heidelberg (2007)
22. Wang, L., Bhattacharjee, N., Srinivasan, B.: A method for fingerprint alignment and matching. In: Proceedings of the 10th International Conference on Advances in Mobile Computing & Multimedia (2012)
23. Yager, N., Amin, A.: Fingerprint alignment using a two stage optimization. Pattern Recogn. Lett. 27(5), 317–324 (2006)
24. Zhao, Q., et al.: High resolution partial fingerprint alignment using pore-valley descriptors. Pattern Recogn. 43(3), 1050–1061 (2010)
25. Liu, F., Zhang, D., Guo, Z.: Distal-interphalangeal-crease-based user authentication system. IEEE Trans. Inf. Forensics Secur. 8(9), 1446–1455 (2013)
26. Kong, A.W.K., Zhang, D.: Competitive coding scheme for palmprint verification. In: Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004, vol. 1. IEEE (2004)
27. Wong, W.K., et al.: Joint tensor feature analysis for visual object recognition. IEEE Trans. Cybern. 45(11), 2425–2436 (2014)
28. Zhu, Z., et al.: Three-dimensional Gabor feature extraction for hyperspectral imagery classification using a memetic framework. Inf. Sci. 298, 274–287 (2015)
29. Shi, X., et al.: Face recognition by sparse discriminant analysis via joint L2,1-norm minimization. Pattern Recogn. 47(7), 2447–2453 (2014)
30. Shen, L., Bai, L., Ji, Z.: FPCODE: An efficient approach for multi-modal biometrics. Int. J. Pattern Recognit. Artif. Intell. 25(02), 273–286 (2011)
31. Lu, X., Yuan, Y., Zheng, X.: Joint dictionary learning for multispectral change detection. IEEE Trans. Cybern. 47(4), 884–897 (2016)
32. Yuan, Y., Mou, L., Lu, X.: Scene recognition by manifold regularized deep learning architecture. IEEE Trans. Neural Netw. Learn. Syst. 26(10), 2222–2233 (2015)
33. Guo, Z., et al.: Robust texture image representation by scale selective local binary patterns. IEEE Trans. Image Process. 25(2), 687–699 (2015)
34. Shi, X., et al.: A framework of joint graph embedding and sparse regression for dimensionality reduction. IEEE Trans. Image Process. 24(4), 1341–1355 (2015)
35. Gu, B., Sheng, V.S.: A robust regularization path algorithm for v-support vector classification. IEEE Trans. Neural Netw. Learn. Syst. 28(5), 1241–1248 (2016)
36. Gu, B., et al.: Incremental support vector learning for ordinal regression. IEEE Trans. Neural Netw. Learn. Syst. 26(7), 1403–1416 (2014)
37. Gu, B., et al.: Incremental learning for v-support vector regression. Neural Netw. 67, 140–150 (2015)
38. Do, Q., Martini, B., Choo, K.-K.R.: A data exfiltration and remote exploitation attack on consumer 3D printers. IEEE Trans. Inf. Forensics Secur. 11(10), 2174–2186 (2016)
39. Kumari, S., et al.: Design of a provably secure biometrics-based multi-cloud-server authentication scheme. Futur. Gener. Comput. Syst. 68, 320–330 (2017)
40. Peng, J., Choo, K.-K.R., Ashman, H.: User profiling in intrusion detection: a review. J. Netw. Comput. Appl. 72, 14–27 (2016)
41. Yuan, C., Sun, X., Lv, R.: Fingerprint liveness detection based on multi-scale LPQ and PCA. China Commun. 13(7), 60–65 (2016)
42. Zhou, Z., et al.: Effective and efficient global context verification for image copy detection. IEEE Trans. Inf. Forensics Secur. 12(1), 48–63 (2016)
43. Fu, Z., et al.: Toward efficient multi-keyword fuzzy search over encrypted outsourced data with accuracy improvement. IEEE Trans. Inf. Forensics Secur. 11(12), 2706–2716 (2016)
Chapter 6
Overview: High Resolution Fingerprints
Abstract This chapter provides an overview of Part II, with a focus on the background and development of fingerprint recognition using high resolution images. We first discuss the significance of high resolution fingerprint recognition in the context of fingerprint recognition history, and then introduce fingerprint features, particularly the features available in high resolution fingerprint images. Some high resolution fingerprint recognition systems are then discussed, followed by benchmarks of high resolution fingerprint images in the literature and the recent development of deep learning based fingerprint recognition methods. Finally, a brief summary of the chapters in Part II is given. Keywords Automated Fingerprint Recognition Systems (AFRSs) · High resolution fingerprint images · Level-3 features · Extended fingerprint feature set · Ridge detail
6.1 Introduction
Fingerprints are fully formed at about 7 months of fetal development and do not change throughout the life of an individual, except due to accidents such as bruises and cuts on the fingertips. Although the overall fingerprint structure is determined by genes, some factors during the skin differentiation process, such as the flow of the amniotic fluid around the fetus and its position in the uterus, produce minor deformations that lead to irregularities in the fingerprint ridges. These factors compose the microenvironment in which the fingerprints are formed. The finer details of the fingerprints are determined by this microenvironment, which is unique and varies across hands and fingers. Thus it is widely believed that fingerprints are unique across individuals and across fingers of the same individual [30]. Some researchers have theoretically studied the uniqueness or individuality of fingerprints and proposed bounds on the probability of two identical fingerprints [7, 11, 33]. Their results prove the strong uniqueness of fingerprints and demonstrate that it would be virtually impossible for two fingerprints (even two fingerprints of identical twins) to be exactly alike. Thanks to their permanence and uniqueness, as well as their good universality and acceptability, fingerprints have been utilized by people for
identifying persons since long ago [2, 14, 30, 37]. In recent years, the emergence of small and comfortable fingerprint sensors and the development of advanced automatic fingerprint recognition algorithms have further broadened the applications of fingerprint biometrics from forensic to civilian fields. Fingerprint recognition compares two fingerprints to judge whether or not they are from the same finger. In the early years of fingerprint biometrics, the comparison of fingerprints was done manually by human fingerprint experts [2]. However, such manual comparison is very time-consuming, and as the number of fingerprints in the database increases, it becomes prohibitive to manually compare a query fingerprint with all the fingerprint templates in the database. In order to improve the efficiency of fingerprint recognition, many researchers have endeavored to develop automatic fingerprint recognition systems (AFRSs) since the 1960s [30, 31, 35, 37, 51]. The original idea of automatic fingerprint recognition methods was simply based on observations of how human fingerprint experts perform fingerprint recognition [30]. So far, many different methods have been proposed in the literature for the various problems involved in AFRSs, and some of them have already been widely used in civilian applications [4, 5, 9, 12, 15, 17, 19, 21–23, 28, 29, 38, 40–42, 55, 61]. Yet, as we will discuss in the following sections, while human examiners usually rely on both minutiae and fine ridge details when matching fingerprints, contemporary AFRSs deployed in civilian applications mostly ignore the fine ridge details [2, 20, 56, 62].
6.2 High Resolution Fingerprint Features
To achieve good fingerprint recognition accuracy, it is very important to base the recognition on discriminative fingerprint features. As a kind of oriented pattern composed of alternating ridges and valleys, fingerprints have their features mainly defined by ridges, either globally or locally. Roughly, the features available in 2D fingerprint images can be divided into three levels [6, 30]. Features on the first level are defined by fingerprint ridge flow and general morphological information, e.g., singular points (cores and deltas) and fingerprint classes (arch, tented arch, whorl, left loop, right loop, and twin loop). Figure 6.1 illustrates these features. The level-1 features are not very distinctive; therefore, they are often used for fingerprint classification rather than fingerprint recognition. Level-2 features refer to individual fingerprint ridge paths and fingerprint ridge events, e.g., minutiae, dots, and incipient ridges. Minutiae, also known as Galton points, describe various ridge path deviations where single or multiple ridges form abrupt stops, splits, spurs, enclosures, etc. The two basic types of minutiae are ridge endings and ridge bifurcations, and they are characterized by their locations, directions, and types. They are very distinctive and also very stable, and have thus become the basis of most existing AFRSs. Figure 6.2 shows some example level-2 features. Level-3 features, referring to fine ridge details [49, 62], are defined as fingerprint ridge dimensional attributes, such as pores and ridge edge shapes, as shown in Fig. 6.3. Level-3 features are also quite
Fig. 6.1 Example level-1 fingerprint features, i.e. fingerprint classes and singular points. The classes of the fingerprints from (a–f) are respectively arch, tented arch, left loop, right loop, twin loop, and whorl. The cores and deltas on them are marked by circles and triangles
Fig. 6.2 Example level-2 fingerprint features, i.e. minutiae (ridge endings and bifurcations), dots, and incipient ridges
discriminative, but high resolution (1000 dpi or more) fingerprint images are required to reliably extract them. Because most existing AFRSs are equipped with fingerprint sensors of around 500 dpi, level-3 features are often ignored by them. Although existing minutia-based AFRSs can achieve good recognition accuracy, the limitation of using only minutiae features is becoming more and more obvious and
Fig. 6.3 Example level-3 fingerprint features, i.e. pores (closed and open), ridge edge indentations and protrusions
stringent as the population using AFRSs explodes and the preference for small fingerprint sensors (e.g., those equipped on mobile devices) increases. This is because the number of usable minutiae on a small fingerprint segment is often insufficient, and the limited number of reliable minutiae then seriously degrades the reliability of AFRSs in recognizing individuals among a large population of users. To overcome this limitation, more and more attention has recently been paid to other fingerprint features, for instance, the ridges surrounding the minutiae. While many researchers focus on designing more elaborate descriptors for minutiae by sampling neighboring ridges or by exploring inter-minutiae structures [8, 15, 16, 24, 25], other researchers consider new fingerprint features as supplements to minutiae [10, 20, 46]. Examples include pores, ridge contours, dots, and incipient ridges. These features have long been utilized by forensic workers in comparing fingerprints [2], and their effectiveness in enhancing the performance of AFRSs has been proved by recent studies [26, 27, 34, 39, 50, 53]. Being aware of the usefulness and significance of these non-minutiae features for fingerprint recognition, experts from the Scientific Working Group on Friction Ridge Analysis, Study, and Technology (SWGFAST) have set up the Committee to Define an Extended Fingerprint Feature Set (CDEFFS) for the purpose of identifying, defining, and providing guidance on additional fingerprint features beyond the traditional minutiae features defined in the ANSI/NIST-ITL-2000 standard [6]. In the latest standard drafted by the CDEFFS, a set of additional fingerprint features is specified with their attributes, including dots (DOT), incipient ridges (INR), creases and linear discontinuities (CLD), ridge edge features (REF), and pores (POR). Towards a better understanding of these additional features, we quote below the definitions by the CDEFFS. Pores, as a kind of level-3 feature, are located on fingerprint ridges. Although many different attributes can be defined for pores, e.g., their shapes and sizes [3],
Fig. 6.4 Two fingerprint images captured from a same finger at different resolutions (left: 500 dpi; right: 1200 dpi)
the locations of pores are believed to be the most reliable attributes considering the varying skin conditions and deformations that occur to fingerprints during image acquisition [2]. Therefore, in the standard by the CDEFFS, each pore is marked by its center point and represented by the location of its center. A dot is a single or partial ridge unit that is shorter than the local ridge width. It is marked by its center point; elongated dots may optionally have their length marked along the longest dimension. Thus, a dot is characterized by the location of its center and an optional attribute defined as its length along the longest dimension. An incipient ridge is a thin ridge, substantially thinner than the local ridge width, and is marked with its two endpoints along its longest dimension. If the incipient ridge is a series of clearly separate (thin) dots, they should be marked as separate incipient ridges, and if an unbroken incipient ridge curves, it should be marked as a series of adjoining line segments. The attributes of an incipient ridge consist of the locations of its two endpoints. From the above discussion, we can see that the additional fingerprint features include a variety of level-2 and level-3 features. Some of them, e.g., pores, require high resolution fingerprint images for reliable detection. Even for other additional features like dots and incipient ridges, using high resolution fingerprint images greatly facilitates their extraction. Figure 6.4 shows two fingerprint images captured from the same finger at 500 dpi and 1200 dpi, respectively. From these two images, we can clearly see that compared with low resolution fingerprint images, high resolution fingerprint images not only have better image quality in general, but also show the additional features much more clearly. In the next section, the study of high resolution fingerprint recognition is briefly reviewed.
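To make the attribute sets just quoted concrete, they can be encoded as simple records, one per feature type. The type names below are hypothetical and do not come from the CDEFFS draft itself.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]        # (x, y) image coordinates

@dataclass
class Pore:
    center: Point                  # a pore is represented by its center only

@dataclass
class Dot:
    center: Point                  # center of the dot
    length: Optional[float] = None # optional: length along longest dimension

@dataclass
class IncipientRidge:
    endpoints: Tuple[Point, Point] # two endpoints along longest dimension
```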
6.3 High Resolution Fingerprint Recognition
With the advent of high resolution imaging technology, it is now possible to conveniently collect high resolution fingerprint images, and an increasing effort has been made to develop devices and algorithms for high resolution fingerprint
recognition. The first modern high resolution AFRS was developed by Stosz and Alyea in 1994 [46]. They used a custom-built fingerprint scanner to capture fingerprint images at a maximal resolution of 2400 dpi. In their AFRS, they combined pores and minutiae to recognize fingerprints and obtained a great improvement in recognition accuracy compared with an AFRS based merely on minutiae. The distinctiveness of pores was later theoretically analyzed by Roddy and Stosz for predicting the performance of their pore-based automated fingerprint matching routine [39]. Kryszczuk and his colleagues [26, 27] experimentally investigated the contribution of level-3 features (i.e., pores) to fingerprint recognition and concluded that level-3 features can offer at least a comparable recognition potential on a small fingerprint fragment as level-2 features (i.e., minutiae) offer on fragments of larger area, and that the contribution of level-3 features becomes more significant as the area of the fingerprint fragment decreases. Parsons et al. [34] examined the evidence from the pores in fingerprints using rotationally invariant statistics of pore configurations, and again proved the effectiveness of using pores for fingerprint recognition. Chen and Jain [11] extended the traditional fingerprint individuality model by incorporating pores into the model and reported a very small theoretical probability of random correspondence (PRC), which is more consistent with that based on empirical matching. This demonstrates the effectiveness and feasibility of using pores for fingerprint recognition. One state-of-the-art prototype system using high resolution fingerprint images was developed by Jain et al. [20]. The system captures 1000 dpi fingerprint images using a commercial high resolution fingerprint scanner, and recognizes fingerprints based on fingerprint features from level 1 to level 3 (i.e., ridge orientation field, minutiae, pores, and ridge contours). By combining the features in either a parallel or a hierarchical way, the high resolution fingerprint recognition system achieves an equal error rate (EER) that is about 20% lower than that of a system using minutiae alone. Recently, the International Biometric Group (IBG) has also conducted a study on the use of level-3 features extracted from high resolution fingerprint images [18]. They implemented the algorithms proposed by Jain et al. [20] with some minor adaptations and carried out a series of experiments on a set of 2000 dpi fingerprint images collected in their project. In the study, they considered minutiae, pores, ridge contours, and edgeoscopic features, and investigated the recognition performance of both individual and combined features. As claimed in their conclusion, fusing level-2 and level-3 fingerprint features provides considerable potential for increased fingerprint recognition accuracy. Some other additional features, i.e., dots and incipient ridges, have also been studied by Chen and Jain [10] at both 1000 dpi and 500 dpi in the context of partial fingerprint recognition. They used the NIST Special Database 30 [32], which contains a set of dual resolution ten-prints (500 dpi and 1000 dpi), to evaluate the performance of their proposed automatic methods for extracting and matching dots and incipient ridges. In the experiments, significantly higher recognition accuracy was observed on both 1000 dpi and 500 dpi fingerprint images when dots and incipient ridges were utilized together with minutiae.
Although relatively little research has been done so far on high resolution fingerprint recognition, the studies discussed above have already demonstrated the advantages of using high resolution fingerprint images, thanks to both the relatively high quality of the images and the new distinctive features (e.g. pores, dots, and incipient ridges) available in them. The high image quality facilitates the extraction of various features from the fingerprint images. The new features extracted from high resolution fingerprint images, as important supplements to the traditional minutia features, help to enable high-confidence and more accurate matching, especially when small fingerprint fragments with insufficient minutiae are used for recognition [26, 63]. Therefore, owing to the increasing demand for more accurate fingerprint recognition systems with smaller fingerprint sensors, high resolution fingerprint recognition has been attracting more and more attention from both researchers and engineers in the fingerprint recognition and biometrics community.
6.4 Benchmarks of High Resolution Fingerprint Images
NIST Special Database 30 [32], one of the widely used databases for research on level-3 fingerprint features [60], consists of dual resolution (500dpi and 1000dpi) inked fingerprint images from 36 ten-print paired cards with both rolled and plain fingerprints. It is, however, not a live-scan data set, and it does not provide ground truth for the additional features on the fingerprint images. This makes it inconvenient for evaluating algorithms for high resolution fingerprint recognition or fingerprint additional features.

Another benchmark of live-scan high resolution fingerprint images was established by the Hong Kong Polytechnic University [36]. It includes two high resolution (1200dpi) fingerprint image databases collected using a custom-built fingerprint image acquisition device. The first database (DBI) consists of two parts, a training set and a test set. The training set contains 210 partial fingerprint images from 35 fingers, whereas the test set contains 1480 partial fingerprint images from 148 fingers (including the fingers in the training set). These fingerprint images were collected in two sessions separated by about 2 weeks. For the training set, three images were captured from each finger in each session; for the test set, five images were captured from each finger in each of the two sessions. The spatial size of the images in DBI is 320 pixels in width and 240 pixels in height. The fingerprint images in the second database (DBII) were collected from the same 148 fingers but with a larger spatial size of 640 by 480 pixels. The collection was done in two sessions (about 2 weeks apart), with five images captured from each finger in each session; in total, DBII has 1480 fingerprint images. Figure 6.5 shows two example fingerprint images from the two databases. To better support the evaluation of algorithms for fingerprint additional features, the pores, dots, and incipient ridges on some of the fingerprint images in DBI have been manually marked as ground truth.
Fig. 6.5 Example fingerprint images in the PolyU high resolution fingerprint database (left: DBI; right: DBII)
6.5 Recent Development Based on Deep Learning
Deep learning (DL) has proven superior in many image processing, pattern recognition, and computer vision tasks thanks to its ability to learn effective feature representations in an end-to-end manner, which has inspired fingerprint researchers to seek DL-based solutions for processing fingerprint images. Wang et al. [52] proposed a DL model for fingerprint anti-spoofing that classifies patches of fingerprint images as real or fake with a deep convolutional neural network (DCNN). Dai et al. [13] and Yan et al. [54] used convolutional neural networks to segment fingerprint images and to assess fingerprint image quality, respectively. Tang et al. [47, 48] proposed deep networks for extracting minutiae from fingerprint images. Song et al. [43–45] improved fingerprint indexing performance by learning minutia-centred deep convolutional features. Zhang et al. [57–59] used similar techniques to learn feature representations for minutiae, particularly for partial high resolution fingerprint matching on mobile devices. Recently, Anand and Kanhangad [1] also successfully applied deep networks to pore detection in high resolution fingerprint images and achieved promising results.
6.6 Chapters Overview
The remaining chapters in this part cover different aspects of high resolution fingerprint recognition. Chapter 7 introduces the principle of acquiring high resolution fingerprint images and experimentally analyzes the selection of an appropriate resolution for practical high resolution fingerprint recognition systems. Chapters 8, 9, and 10 then focus on fingerprint pores, a typical kind of level-3 feature available in high resolution fingerprint images. Chapter 8 first introduces methods for extracting pores. Chapter 9 then presents a method for applying pores to partial fingerprint alignment, and Chap. 10 introduces methods for matching pores on pairs of fingerprints. The last two chapters in this part discuss, respectively, the quality assessment problem in the utilization of high resolution fingerprints for identity recognition and the potential of fusing various extended fingerprint features including minutiae, pores, dots, and incipient ridges.
6.7 Summary
This chapter introduces the background and development of fingerprint recognition using high resolution images. The significance of high resolution fingerprint recognition is first discussed. Fingerprint features, particularly those available in high resolution fingerprint images, are then introduced in detail. Typical high resolution fingerprint recognition systems are presented along with the findings of relevant studies. Afterwards, benchmarks of high resolution fingerprint images in the literature and the latest developments in fingerprint recognition since the emergence of deep learning are reviewed. Finally, a brief overview of the chapters in this part of the book is given.
References
1. Anand, V., Kanhangad, V.: Pore detection in high-resolution fingerprint images using deep residual network. J. Electron. Imaging 28(2), 1–4 (2019)
2. Ashbaugh, D.R.: Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology. CRC Press LLC, Boca Raton (1999)
3. Bindra, B., Jasuja, O.P., Singla, A.K.: Poroscopy: a method of personal identification revisited. Internet J. Forensic Med. Toxicol. 1 (2000)
4. Cao, K., Jain, A.K.: Automated latent fingerprint recognition. IEEE Trans. Pattern Anal. Mach. Intell. 41(4), 788–800 (2019)
5. Cappelli, R., Maio, D., Maltoni, D.: Fingerprint classification by directional image partitioning. IEEE Trans. Pattern Anal. Mach. Intell. 21(5), 402–421 (1999)
6. CDEFFS: Data Format for the Interchange of Extended Fingerprint and Palmprint Features. Draft Version 0.4, available at http://fingerprint.nist.gov/standard/cdeffs/index.html (2009)
7. Chen, J., Moon, Y.S.: The statistical modeling of fingerprint minutiae distribution with implications for fingerprint individuality studies. In: Proceedings of 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–7 (2008)
8. Chen, X., Tian, J., Yang, X., Zhang, Y.: An algorithm for distorted fingerprint matching based on local triangle feature set. IEEE Trans. Inf. Forensics Secur. 1, 169–177 (2006)
9. Chen, Y., Dass, S., Jain, A.K.: Fingerprint quality indices for predicting authentication performance. In: Proceedings of Audio- and Video-based Biometric Person Authentication, pp. 160–170 (2005)
10. Chen, Y., Jain, A.K.: Dots and incipients: extended features for partial fingerprint matching. Presented at Biometric Symposium, BCC, Baltimore (2007)
11. Chen, Y., Jain, A.K.: Beyond minutiae: a fingerprint individuality model with pattern, ridge and pore features. In: Proceedings of the 3rd International Conference on Biometrics, pp. 523–533 (2009)
12. Chugh, T., Cao, K., Zhou, J., Tabassi, E., Jain, A.K.: Latent fingerprint value prediction: crowd-based learning. IEEE Trans. Inf. Forensics Secur. 13(1), 20–34 (2018)
13. Dai, X., Liang, J., Zhao, Q., Liu, F.: Fingerprint segmentation via convolutional neural networks. In: Proceedings of Chinese Conference on Biometric Recognition, pp. 324–333 (2017)
14. FBI (Federal Bureau of Investigation): The Science of Fingerprints: Classification and Uses. U.S. Government Printing Office, Washington, DC (1984)
15. Feng, J.: Combining minutiae descriptors for fingerprint matching. Pattern Recogn. 41(1), 342–352 (2008)
16. Feng, J., Ouyang, Z., Cai, A.: Fingerprint matching using ridges. Pattern Recogn. 39(11), 2131–2140 (2006)
17. Hong, L., Wan, Y., Jain, A.K.: Fingerprint image enhancement: algorithms and performance evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 20, 777–789 (1998)
18. IBG (International Biometric Group): Analysis of level 3 features at high resolutions. Phase II – Final Report (2008)
19. Jain, A.K., Arora, S.S., Cao, K., Best-Rowden, L., Bhatnagar, A.: Fingerprint recognition of young children. IEEE Trans. Inf. Forensics Secur. 12(7), 1501–1514 (2017)
20. Jain, A.K., Chen, Y., Demirkus, M.: Pores and ridges: fingerprint matching using level 3 features. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 15–27 (2007)
21. Jain, A.K., Feng, J.: Latent fingerprint matching. IEEE Trans. Pattern Anal. Mach. Intell. 33(1), 88–100 (2011)
22. Jain, A.K., Hong, L., Bolle, R.: On-line fingerprint verification. IEEE Trans. Pattern Anal. Mach. Intell. 19, 302–314 (1997)
23. Jain, A.K., Prabhakar, S., Hong, L., Pankanti, S.: Filterbank-based fingerprint matching. IEEE Trans. Image Process. 9, 846–859 (2000)
24. Jea, T.Y., Govindaraju, V.: A minutia-based partial fingerprint recognition system. Pattern Recogn. 38, 1672–1684 (2005)
25. Kovacs-Vajna, Z.M.: A fingerprint verification system based on triangular matching and dynamic time warping. IEEE Trans. Pattern Anal. Mach. Intell. 22, 1266–1276 (2000)
26. Kryszczuk, K., Drygajlo, A., Morier, P.: Extraction of level 2 and level 3 features for fragmentary fingerprints. In: Proceedings of the 2nd COST Action 275 Workshop, pp. 83–88. Vigo, Spain (2004)
27. Kryszczuk, K., Morier, P., Drygajlo, A.: Study of the distinctiveness of level 2 and level 3 features in fragmentary fingerprint comparison. In: Proceedings of Biometric Authentication, ECCV 2004 International Workshop, pp. 124–133 (2004)
28. Liu, E., Cao, K.: Minutiae extraction from level 1 features of fingerprint. IEEE Trans. Inf. Forensics Secur. 11(9), 1893–1902 (2016)
29. Liu, L., Jiang, T., Yang, J., Zhu, C.: Fingerprint registration by maximization of mutual information. IEEE Trans. Image Process. 15, 1100–1110 (2006)
30. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition, 2nd edn. Springer, New York (2009)
31. Mehtre, B., Murthy, M.: A minutiae based fingerprint identification system. In: Proceedings of the 2nd International Conference on Advances in Pattern Recognition and Digital Techniques (1986)
32. NIST Special Database 30, available at http://www.nist.gov/srd/nistsd30.htm
33. Pankanti, S., Prabhakar, S., Jain, A.K.: On the individuality of fingerprints. IEEE Trans. Pattern Anal. Mach. Intell. 24(8), 1010–1025 (2002)
34. Parsons, N.R., Smith, J.Q., Thonnes, E., Wang, L., Wilson, R.G.: Rotationally invariant statistics for examining the evidence from the pores in fingerprints. Law Probab. Risk 7, 1–14 (2008)
35. Pernus, F., Kovacic, S., Gyergyek, L.: Minutiae-based fingerprint recognition. In: Proceedings of the 5th International Conference on Pattern Recognition, pp. 1380–1382 (1980)
36. The PolyU High Resolution Fingerprint Database. http://www4.comp.polyu.edu.hk/~biometrics/HRF/HRF_old.htm (2008)
37. Ratha, N., Bolle, R.: Automatic Fingerprint Recognition Systems. Springer, New York (2004)
38. Ratha, N.K., Connell, J.H., Bolle, R.M.: Image mosaicing for rolled fingerprint construction. In: Proceedings of the 14th International Conference on Pattern Recognition, vol. 2, pp. 1651–1653 (1998)
39. Roddy, A., Stosz, J.: Fingerprint features – statistical analysis and system performance estimates. Proc. IEEE 85(9), 1390–1421 (1997)
40. Ross, A., Dass, S., Jain, A.K.: A deformable model for fingerprint matching. Pattern Recogn. 38, 95–103 (2005)
41. Ross, A., Jain, A.K., Reisman, J.: A hybrid fingerprint matcher. Pattern Recogn. 36, 1661–1673 (2003)
42. Sherlock, B.G., Monro, D.M., Millard, K.: Fingerprint enhancement by directional Fourier filtering. IEE Proc. Vision Image Signal Process. 141(2), 87–94 (1994)
43. Song, D., Feng, J.: Fingerprint indexing based on pyramid deep convolutional feature. In: Proceedings of International Joint Conference on Biometrics, pp. 200–207 (2017)
44. Song, D., Tang, Y., Feng, J.: Fingerprint indexing based on minutia-centred deep convolutional features. In: Proceedings of Asian Conference on Pattern Recognition, pp. 770–775 (2017)
45. Song, D., Tang, Y., Feng, J.: Aggregating minutia-centred deep convolutional features for fingerprint indexing. Pattern Recogn. 88, 397–408 (2019)
46. Stosz, J.D., Alyea, L.A.: Automated system for fingerprint authentication using pores and ridge structure. In: Proceedings of SPIE Conference on Automatic Systems for the Identification and Inspection of Humans, San Diego, vol. 2277, pp. 210–223 (1994)
47. Tang, Y., Gao, F., Feng, J.: Latent fingerprint minutia extraction using fully convolutional network. In: Proceedings of International Joint Conference on Biometrics, pp. 117–123 (2017)
48. Tang, Y., Gao, F., Feng, J., Liu, Y.: FingerNet: an unified deep network for fingerprint minutiae extraction. In: Proceedings of International Joint Conference on Biometrics, pp. 108–116 (2017)
49. Teixeira, R.F.S., Leite, N.J.: Improving pore extraction in high resolution fingerprint images using spatial analysis. In: Proceedings of IEEE International Conference on Image Processing, pp. 4962–4966 (2014)
50. Teixeira, R.F.S., Leite, N.J.: A new framework for quality assessment of high-resolution fingerprint images. IEEE Trans. Pattern Anal. Mach. Intell. 39(10), 1905–1917 (2017)
51. Trauring, M.: Automatic comparison of finger-ridge patterns. Nature 197, 938–940 (1963)
52. Wang, C., Li, K., Wu, Z., Zhao, Q.: A DCNN based fingerprint liveness detection algorithm with voting strategy. In: Proceedings of Chinese Conference on Biometric Recognition, pp. 241–249 (2015)
53. Xu, Y., Lu, G., Lu, Y., Zhang, D.: High resolution fingerprint recognition using pore and edge descriptors. Pattern Recogn. Lett. 125, 773–779 (2019)
54. Yan, J., Dai, X., Zhao, Q., Liu, F.: A CNN-based fingerprint image quality assessment method. In: Proceedings of Chinese Conference on Biometric Recognition, pp. 344–352 (2017)
55. Yang, X., Feng, J., Zhou, J.: Localized dictionaries based orientation field estimation for latent fingerprints. IEEE Trans. Pattern Anal. Mach. Intell. 36(5), 955–969 (2014)
56. Zhang, D., Liu, F., Zhao, Q., Lu, G., Luo, N.: Selecting a reference high resolution for fingerprint recognition using minutiae and pores. IEEE Trans. Instrum. Meas. 60(3), 863–871 (2011)
57. Zhang, F., Feng, J.: High-resolution mobile fingerprint matching via deep joint KNN-triplet embedding. In: Proceedings of AAAI Conference on Artificial Intelligence, pp. 5019–5020 (2017)
58. Zhang, F., Xin, S., Feng, J.: Deep dense multi-level feature for partial high-resolution fingerprint matching. In: Proceedings of International Joint Conference on Biometrics, pp. 397–405 (2017)
59. Zhang, F., Xin, S., Feng, J.: Combining global and minutia deep features for partial high-resolution fingerprint matching. Pattern Recogn. Lett. 119, 139–147 (2019)
60. Zhao, Q., Jain, A.K.: On the utility of extended fingerprint features: a study on pores. In: Proceedings of CVPR Workshop on Biometrics, pp. 9–16 (2010)
61. Zhao, Q., Jain, A.K.: Model based separation of overlapping latent fingerprints. IEEE Trans. Inf. Forensics Secur. 7(3), 904–918 (2012)
62. Zhao, Q., Zhang, D., Zhang, L., Luo, N.: Adaptive fingerprint pore modeling and extraction. Pattern Recogn. 43(8), 2833–2844 (2010)
63. Zhao, Q., Zhang, D., Zhang, L., Luo, N.: High resolution partial fingerprint alignment using pore-valley descriptors. Pattern Recogn. 43(3), 1050–1061 (2010)
Chapter 7
High Resolution Fingerprint Acquisition
Abstract High-resolution automated fingerprint recognition systems (AFRS) offer higher security because they are able to make use of level 3 features, such as pores, that are not available in lower-resolution images.

$$P_0(i,j) = e^{-j^2/(2\sigma^2)}\,\cos\!\left(\frac{\pi}{3\sigma}\, i\right), \qquad -3\sigma \le i,\ j \le 3\sigma \tag{8.1}$$

$$P_\theta(i,j) = \mathrm{Rot}(P_0, \theta) = e^{-\hat{j}^2/(2\sigma^2)}\,\cos\!\left(\frac{\pi}{3\sigma}\, \hat{i}\right), \quad \hat{i} = i\cos\theta - j\sin\theta, \;\; \hat{j} = i\sin\theta + j\cos\theta, \qquad -3\sigma \le i,\ j \le 3\sigma \tag{8.2}$$
Equation (8.1) is the reference model (i.e. the zero-degree model) and Eq. (8.2) is the rotated model. Here, σ is the scale parameter, which controls the pore size and can be determined from the local ridge frequency; θ is the orientation parameter, which controls the direction of the pore model and can be estimated from the local ridge orientation. Figure 8.4 shows some example instances of the proposed DAPM. With the proposed DAPM, we next present an adaptive pore extraction method.
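To make the model concrete, here is a minimal NumPy sketch of how the DAPM of Eqs. (8.1) and (8.2) could be instantiated as a filter kernel; it follows the reconstruction of the equations given above and is illustrative rather than the authors' reference implementation:

```python
import numpy as np

def dapm_filter(sigma, theta):
    """Instantiate the dynamic anisotropic pore model (Eqs. 8.1-8.2).

    The model is a Gaussian profile across the ridge direction modulated
    by a cosine along the ridge direction; rotating the coordinates by
    theta aligns the filter with the local ridge orientation.
    """
    half = int(np.ceil(3 * sigma))
    i, j = np.meshgrid(np.arange(-half, half + 1),
                       np.arange(-half, half + 1), indexing="ij")
    # Rotated coordinates from Eq. (8.2)
    i_hat = i * np.cos(theta) - j * np.sin(theta)
    j_hat = i * np.sin(theta) + j * np.cos(theta)
    return np.exp(-j_hat**2 / (2 * sigma**2)) * np.cos(np.pi * i_hat / (3 * sigma))
```

For example, dapm_filter(2.5, np.pi / 4) yields a kernel tuned to pores of scale σ = 2.5 on ridges oriented at 45 degrees.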
8.2.3 Adaptive Pore Extraction
Pore extraction is essentially a problem of object detection. Generally, given a model of an object, we can detect the object by using the model as a matched filter. When convolving an image with a matched filter describing the desired object, strong responses are obtained at the locations of the object in the image. Matched-filter techniques have been successfully used in many applications, for example, vessel detection in retinal images [34]. Here, we first estimate the parameters of the DAPM to instantiate the pore model, then discuss the important implementation issues in using the instantiated pore models to extract pores, and finally present the adaptive pore extraction algorithm.
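As a simple illustration of the matched-filter idea (the function name and threshold are our own, not the chapter's implementation):

```python
import numpy as np
from scipy import ndimage

def matched_filter_detect(image, kernel, threshold):
    """Convolve the image with a matched filter (e.g., a pore model
    instance from dapm_filter above) and threshold the response to
    obtain a binary map of candidate object locations."""
    response = ndimage.convolve(image.astype(float), kernel)
    return response > threshold
```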
Fig. 8.4 Examples of the dynamic anisotropic pore model. (a) θ = 0°, (b) θ = 45°, and (c) θ = 90°
8.2.3.1 DAPM Parameter Estimation
The matched filters for pore extraction can be generated by instantiating the pore models. In order to instantiate the DAPM in Eqs. (8.1) and (8.2), it is necessary to initialize two parameters, the orientation θ and the scale σ. As for the orientation parameter θ, an intuitive way is to set it to the local fingerprint ridge orientation. To estimate the ridge orientation field of the fingerprint, we first smooth the fingerprint image with a smoothing kernel and then calculate the gradients along the x and y directions using a derivative operator (e.g., the Sobel operator). Let G_x(i, j) and G_y(i, j) be the gradients at the pixel (i, j), and define the squared gradients as G_xx(i, j) = G_x(i, j)G_x(i, j), G_xy(i, j) = G_x(i, j)G_y(i, j), and G_yy(i, j) = G_y(i, j)G_y(i, j). The squared gradients are then smoothed with a Gaussian kernel, resulting in the smoothed versions Ḡ_xx, Ḡ_xy, and Ḡ_yy. The ridge orientation at (i, j) is estimated by

$$O(i,j) = \frac{\pi}{2} + \frac{1}{2}\arctan\!\left(\frac{\bar{G}_{xx}(i,j) - \bar{G}_{yy}(i,j)}{2\,\bar{G}_{xy}(i,j)}\right) \tag{8.3}$$
which is in the range of [0, π]. For more details on fingerprint ridge orientation field estimation, please refer to [35]. With regard to the scale parameters, if we can estimate the range of pore scales, we can then use a bank of multi-scale matched filters to detect the pores; however,
this is very time-consuming. Therefore, we estimate and use the maximum valid pore scale when designing the matched filters in this chapter. As shown in Sect. 8.2.1, pores are located on ridges; consequently, the pore scales are restricted by the ridge widths. This motivates us to relate the maximum pore scale to the local fingerprint ridge period by a ratio, i.e., σ = τ/k, where τ is the local ridge period (the reciprocal of the local ridge frequency) and k is a positive constant, here empirically set to k = 12. The local ridge frequency is estimated in a local window using the projection-based method in [36].
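The two parameter estimates might be sketched as follows, assuming a Sobel operator for the gradients and an arbitrarily chosen Gaussian smoothing scale:

```python
import numpy as np
from scipy import ndimage

def ridge_orientation(image, smooth_sigma=3.0):
    """Pixel-wise ridge orientation via smoothed squared gradients (Eq. 8.3)."""
    img = ndimage.gaussian_filter(image.astype(float), 1.0)  # pre-smoothing
    gx = ndimage.sobel(img, axis=1)   # gradient along x
    gy = ndimage.sobel(img, axis=0)   # gradient along y
    gxx = ndimage.gaussian_filter(gx * gx, smooth_sigma)
    gyy = ndimage.gaussian_filter(gy * gy, smooth_sigma)
    gxy = ndimage.gaussian_filter(gx * gy, smooth_sigma)
    # arctan2 keeps the quotient well-defined when 2*Gxy is near zero;
    # the result lies in the required range [0, pi].
    return np.pi / 2 + 0.5 * np.arctan2(gxx - gyy, 2 * gxy)

def pore_scale(ridge_period, k=12):
    """Maximum valid pore scale tied to the local ridge period: sigma = tau/k."""
    return ridge_period / k
```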
8.2.3.2 Implementation Issues
With the parameters θ and σ estimated as in Sect. 8.2.3.1, an adaptive pore model can be instantiated for each pixel and applied as a matched filter to extract pores from the fingerprint image. However, two problems arise if the matched filters are applied directly in a pixel-wise way. Next, we discuss these issues in detail and present practical solutions.

The first problem is the computational cost. Obviously, it would be very expensive to calculate the DAPM pixel by pixel. Noting that within a local region of a fingerprint the ridges run nearly parallel to each other and the intervals between them vary only slightly, we can instead calculate a common DAPM for each local region to detect pores.

The second problem is that on some parts of a fingerprint image it is difficult to obtain an accurate estimate of the local ridge orientation and frequency, which is needed to initialize an accurate instance of the DAPM. For example, in the image shown in Fig. 8.5a, the region highlighted by the red circle is smudged and no dominant orientation can be obtained. The sharp change of ridge orientation at the singular points of a fingerprint also makes it difficult to estimate the ridge orientation and frequency around the singular points. To deal with these issues, we propose a block-wise approach to implementing the matched filters for pore extraction.
Fig. 8.5 (a) A fingerprint image. The ridge orientation and frequency cannot be accurately estimated on the region marked by the red circle. (b) The partition result. The dominant ridge orientations of the well-defined blocks are shown by the green lines
This approach defines three kinds of blocks on fingerprint images: well-defined blocks, ill-posed blocks, and background blocks. Well-defined and ill-posed blocks are both foreground fingerprint regions. On a well-defined block, a dominant ridge orientation and a ridge frequency can be estimated directly. On an ill-posed block, there is no dominant ridge orientation, but the ridge frequency can be estimated by interpolating the frequencies of its neighboring blocks. The block partition and classification are performed in a hierarchical way. First, a large block size is applied to the image. For each block B, the following structure tensor is calculated:

$$J = \frac{1}{N_B} \sum_{i \in B} \nabla B_i \, \nabla B_i^{T} = \begin{pmatrix} j_{11} & j_{12} \\ j_{21} & j_{22} \end{pmatrix} \tag{8.4}$$
where N_B denotes the number of pixels in the block, ∇B_i = (∂B_i/∂x, ∂B_i/∂y)^T is the gradient vector at pixel i, and 'T' denotes the transpose operator. The structure tensor J contains information about the ridge orientation in the block, and the eigenvalues of J can be used to measure the consistency of the ridge orientation. Specifically, we use the orientation certainty (OC) defined as follows [37]:

$$OC = \frac{(\lambda_1 - \lambda_2)^2}{(\lambda_1 + \lambda_2)^2} = \frac{(j_{11} - j_{22})^2 + 4 j_{12}^2}{(j_{11} + j_{22})^2} \tag{8.5}$$
where λ1 and λ2 are the two eigenvalues of the 2 × 2 structure tensor J, and we assume λ1 ≥ λ2. The quantity OC indicates how strongly the energy is concentrated along the ridge orientation. If there is a dominant ridge orientation, then λ1 ≫ λ2 and OC will be close to 1. Otherwise, λ1 and λ2 will not differ much and consequently OC will be close to 0. We also calculate a measurement related to the intensity contrast (IC) of the block:

$$IC = \mathrm{std}(B) \tag{8.6}$$
where std denotes the standard deviation. The purpose of IC is to distinguish the background from the foreground fingerprint. The two measurements, OC and IC, are compared with pre-specified thresholds. If both are above their thresholds, the block is recorded as a well-defined block and is not partitioned further. Otherwise, a block larger than the minimum size is evenly partitioned into four equal sub-blocks, each of which is examined in turn. Once a block has reached the minimum size, it is marked as a well-defined block if both its OC and IC measures are above the thresholds; as an ill-posed block if its OC measure is below the threshold but its IC measure is above the threshold; and as a background block otherwise. A sketch of this classification rule is given below. Figure 8.5b shows the partition result for the image in Fig. 8.5a; the dominant ridge orientations of the well-defined blocks are shown by the green lines.
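A compact sketch of the OC/IC test for a single block, with hypothetical threshold values; the hierarchical subdivision into sub-blocks is omitted for brevity:

```python
import numpy as np

def classify_block(block, oc_thresh=0.3, ic_thresh=0.05):
    """Classify one block as 'well-defined', 'ill-posed', or 'background'
    using the orientation certainty (Eq. 8.5) and intensity contrast (Eq. 8.6)."""
    gy, gx = np.gradient(block.astype(float))
    # Block-averaged structure tensor entries (Eq. 8.4)
    j11, j22, j12 = np.mean(gx * gx), np.mean(gy * gy), np.mean(gx * gy)
    oc = ((j11 - j22) ** 2 + 4 * j12 ** 2) / ((j11 + j22) ** 2 + 1e-12)
    ic = np.std(block)  # intensity contrast (Eq. 8.6)
    if ic <= ic_thresh:
        return "background"
    return "well-defined" if oc >= oc_thresh else "ill-posed"
```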
After partitioning the fingerprint image into the three kinds of blocks, pores can be extracted from each of the foreground (well-defined or ill-posed) blocks. For a well-defined block, the dominant ridge orientation and the mean ridge frequency can be calculated directly, and the DAPM can consequently be instantiated. For an ill-posed block, there is no dominant ridge orientation, but the mean ridge frequency can be calculated by interpolating the mean ridge frequencies of its neighboring blocks. Hence, as a compromise, we apply the adaptive DoG based pore models [32] to the ill-posed blocks. Next we discuss how to calculate the dominant ridge orientation of a well-defined block and the mean ridge frequency of a foreground block.

The dominant orientation of a well-defined block is defined as the average orientation of the ridge orientation field on the block. To average the orientation field of block B, denoted by B_OF, we first multiply the orientation angle at each pixel by two, and then calculate its cosine and sine values. Finally, the dominant orientation of the block is calculated as

$$B_{DO} = \frac{1}{2} \arctan\!\left(\frac{\mathrm{aver}(\sin(2 \cdot B_{OF}))}{\mathrm{aver}(\cos(2 \cdot B_{OF}))}\right) \tag{8.7}$$
where aver(F) denotes the average of the elements in F. For each well-defined block, the average ridge frequency on the block is calculated using the method in [36]. The ridge frequencies of the ill-posed blocks are estimated by interpolating those of surrounding blocks whose ridge frequencies have already been calculated. Specifically, after the ridge frequencies of the well-defined blocks have been calculated, we iteratively scan the fingerprint image until the ridge frequencies of all foreground blocks have been computed: whenever a foreground block has no ridge frequency yet, we take the mean of the ridge frequencies of its neighboring blocks as its ridge frequency. In the end, every foreground fingerprint block, with or without a dominant orientation, is assigned a ridge frequency. A sketch of the orientation averaging is given below.
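A sketch of the doubled-angle averaging in Eq. (8.7):

```python
import numpy as np

def dominant_orientation(orientation_field):
    """Average a block's ridge orientation field (Eq. 8.7).

    Angles are doubled before averaging so that orientations near 0 and pi,
    which represent the same ridge direction, reinforce rather than cancel.
    """
    s = np.mean(np.sin(2 * np.asarray(orientation_field)))
    c = np.mean(np.cos(2 * np.asarray(orientation_field)))
    return (0.5 * np.arctan2(s, c)) % np.pi
```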
8.2.3.3 The Pore Extraction Algorithm
We now summarize the complete adaptive fingerprint pore extraction algorithm. As shown in Fig. 8.6, the proposed pore extraction algorithm consists of five main steps. Take the fingerprint fragment in Fig. 8.7a, which is part of Fig. 8.5a, as an example. The first step partitions the fingerprint image into a number of blocks, each being a well-defined block, an ill-posed block, or a background block (see Fig. 8.7b). In the second step, the ridge orientation field of the fingerprint image is calculated; meanwhile, the mean ridge frequencies of all foreground blocks are estimated, which form the ridge frequency map of the fingerprint image (see Fig. 8.7c). The algorithm then proceeds to the third step, in which the binary ridge map of the fingerprint image is calculated as follows.
Fig. 8.6 The main steps of the proposed pore extraction method
Fig. 8.7 (a) is a fingerprint fragment, (b) shows the blocks on it (the value in red is the orientation certainty and the value in green is the intensity contrast), (c) displays the estimated dominant ridge orientations and periods (the green lines denote the orientations on well-defined blocks, and if there is no orientation shown on a block, the block is an ill-posed block), (d) is the ridge map, (e) is the initial pore map, and (f) shows the final detected pores (marked by circles)
Based on the estimated ridge orientation field and ridge frequency map, the fingerprint image is first enhanced by a bank of Gabor filters [36], which enhance the bright valleys and suppress the dark ridges. To extract the ridges, we binarize the enhanced image and take its complement, in which the ridge pixels have value '1'. From this complement image, we readily obtain the binary ridge map by setting the corresponding ridge pixels in the foreground fingerprint blocks to '1' and all other pixels to '0'. This binary ridge map (see Fig. 8.7d) is used in the post-processing step to remove spurious pores, because pores can only be located on ridges.

It is worth mentioning that the first three steps can be performed on a down-sampled version of the original high resolution fingerprint image, because they do not depend on the level 3 features. In our experiments, we down-sampled the images to half of their original resolution and then carried out steps A, B, and C. Afterwards, the obtained image partition result, ridge orientation field, ridge frequency map, and ridge map were all up-sampled to the original resolution for use in the subsequent pore detection and post-processing. Working on the down-sampled images greatly reduces the computational cost. The pore detection and post-processing are performed on the original fingerprint images, because the level 3 pore features can hardly be extracted reliably from down-sampled low resolution fingerprint images.

In the pore detection step, the foreground fingerprint blocks are processed one by one. A local instantiation of the DAPM is established for each well-defined block based on the local ridge orientation and frequency of that block, and a local instantiation of the adaptive DoG based pore model is established for each ill-posed block based on the local ridge frequency of that block. Applying the adaptively instantiated pore model to the block as a matched filter enhances the pores while suppressing valleys and noise. A threshold is then applied to the filtering response to segment out the candidate pores on the block. After processing all the blocks, we obtain a binary image in which the candidate pores have value '1' and all other pixels have value '0'. This binary image gives the initial pore extraction result (pore map); Fig. 8.7e shows an example. As can be seen, this map may still contain some spurious and false pores.

The last step removes the spurious and false pores from the initial pore extraction result. In previous work, most methods remove false pores by applying the constraints that pores reside only on ridges [5, 7, 16, 23] and that the size of pores lies within a valid range [5, 7, 16]. Some researchers also propose to refine the pore extraction result based on the intensity contrast [12]. In [12], a PCA method is applied to a set of extracted putative pores to estimate a model of the gray level distribution over pores; this model is then used to exclude falsely detected pores by minimizing the squared error. That method is, however, greatly affected by the chosen putative pores. In this chapter, we take the following steps to post-process the extracted candidate pores. First, we use the binary ridge map as a mask to filter the pore map; in this step, the pixels that are not on ridges are removed.
Second, we sort the remaining candidate pore pixels by their gray level values in descending order and discard the last 5% of the pixels, because they are most probably caused by noise. Third, we identify all the connected components in the pore map and take each component as a candidate pore. We check the size of each component, i.e., the number of pixels it contains; if the size is outside the pre-specified range of valid pore sizes (from 3 to 30 in our experiments), the candidate pore is removed from the pore map. The final pore map is obtained after this refinement. We record the extracted pores by the coordinates of their mass centers; see Fig. 8.7f for an example. More examples on different fingerprint fragments are given in Fig. 8.8. A sketch of this post-processing is given below.

Fig. 8.8 Some example pore extraction results obtained by using the proposed method
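A minimal sketch of the three post-processing steps under the stated size range (3–30 pixels); the function name and array conventions are our own assumptions, not the chapter's implementation:

```python
import numpy as np
from scipy import ndimage

def postprocess_pores(pore_map, ridge_map, image, min_size=3, max_size=30):
    """Post-process a binary candidate pore map (Sect. 8.2.3.3):
    1) keep only candidates on ridges, 2) drop the darkest 5% of the
    remaining candidate pixels, 3) keep connected components of valid
    size and return their mass centers as the final pores."""
    candidates = pore_map.astype(bool) & ridge_map.astype(bool)
    ys, xs = np.nonzero(candidates)
    if len(ys) == 0:
        return []
    cutoff = np.percentile(image[ys, xs], 5)   # 5th percentile of gray level
    candidates[image <= cutoff] = False        # discard darkest 5% of candidates
    labels, n = ndimage.label(candidates)      # connected components
    sizes = ndimage.sum(candidates, labels, range(1, n + 1))
    valid = [k + 1 for k, s in enumerate(sizes) if min_size <= s <= max_size]
    return ndimage.center_of_mass(candidates, labels, valid)  # pore centers
```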
8.2.4 Experiments and Performance Evaluation
To evaluate the proposed fingerprint pore extraction method, a high resolution fingerprint image dataset is required. It is well accepted that the fingerprint image resolution should be at least 1000 dpi to reliably capture level 3 features such as pores [10]. We built an optical fingerprint scanner ourselves, which collects fingerprint images at a resolution of about 1200 dpi. Figure 8.9 shows the scanner we developed. It uses a CCD camera (Lumenera Camera LU135M) to capture the fingerprint image when the finger touches the prism of the scanner. Two databases have been established with this scanner. The first database (denoted as DBI) is a partial fingerprint image database in which the image size is 320 pixels in width and 240 pixels in height. The second database (denoted as DBII) contains full-size fingerprint images of 640 pixels in width and 480 pixels in height. Both databases have 1480 fingerprint images taken from 148 fingers, with each finger having 10 samples collected in two sessions: five images were captured for each finger in each of the two sessions, which were about 2 weeks apart. Using the established databases, we have conducted extensive experiments to evaluate the proposed pore extraction method in comparison with three state-of-the-art methods (Jain's method [7, 16], Ray's method [23], and the adaptive DoG based method [32]). Three types of experiments were conducted. First, we compared the proposed method with its counterparts in terms of pore detection accuracy using a set of fingerprint images chosen from DBI. Second, using the partial fingerprint image database DBI and a minutia-pore-based fingerprint matcher, we evaluated the fingerprint recognition performance using the pores extracted by the proposed method and by the other three methods.
Fig. 8.9 (a) The high-resolution fingerprint scanner we developed and (b) its inner structure
Third, we evaluated the fingerprint recognition performance of the four methods on the full-size fingerprint image database DBII. In the following, we present these experiments in detail.
8.2.4.1 Pore Detection Accuracy
We first assess the pore detection accuracy of the proposed method. For this purpose, we chose a set of 24 fingerprint images from DBI. The chosen images have relatively good quality so that the pores on them can be easily marked. Figure 8.10 shows two example fingerprint images. We manually marked the pores on these fingerprint images as the ground truth for our experiments. We then used the proposed method, Jain's method, Ray's method, and the adaptive DoG based method to extract pores from them. Figure 8.11 shows the pore extraction results of the four methods on the fingerprint image in Fig. 8.10b. On this fingerprint fragment, the ridges on the left-hand side are thinner than those on the right-hand side, and both open and closed pores can be observed. From Fig. 8.11a and b, we can see that, due to the unitary scale they use, Ray's method and Jain's method cannot work consistently well on both the left and right parts of the fingerprint image because of the varying ridge widths and pore sizes. In addition, all three comparison methods miss many open pores because their isotropic pore models cannot accurately handle open pores. In contrast, the proposed method successfully detects most of the pores on both the left and right parts of the fingerprint image, whether they are open or closed. This demonstrates that the proposed DAPM can better adapt to varying ridge widths and pore sizes, and can better cope with both closed and open pores.

In addition to the visual evaluation of the pore detection results, we calculated the average detection accuracy on the 24 fingerprint images using two metrics: RT (true detection rate) and RF (false detection rate). RT is defined as the ratio of the number of detected true pores to the number of all true pores, while RF is defined as the ratio of the number of falsely detected pores to the total number of detected pores. A good pore extraction algorithm should have a high RT and a low RF simultaneously.
Fig. 8.10 Two example fingerprint images used in the experiments
Fig. 8.11 Example pore extraction results of (a) Ray’s method, (b) Jain’s method, (c) the adaptive DoG based method, and (d) the proposed method. The pores are manually marked by bright dots and the detected pores are marked by red circles
Table 8.1 lists the average detection accuracy and the standard deviation of the detection accuracy of the four methods.

Table 8.1 The average pore detection accuracy (%) and the standard deviation (in parentheses) of the four methods on the 24 fingerprint images

      Ray's method   Jain's method   Adaptive DoG based method   The proposed method
RT    60.6 (11.9)    75.9 (7.5)      80.8 (6.5)                  84.8 (4.5)
RF    30.5 (10.9)    23.0 (8.2)      22.2 (9.0)                  17.6 (6.3)

According to the average detection accuracy listed in Table 8.1, the proposed method achieves not only the
highest true detection rate but also the lowest false detection rate. With regard to the standard deviation, as shown in Table 8.1, the proposed method again achieves the smallest deviation over the whole image set for both the true and false detection rates. As for the other three methods, none beats its counterparts in all cases. From these results, we can see that the proposed method detects pores on fingerprint images more accurately and more robustly.
8.2.4.2 Pore Based Partial Fingerprint Recognition
Since the purpose of pore extraction is to introduce new features for fingerprint recognition, it is necessary to test how the extracted pores contribute to a fingerprint recognition system. According to [14, 15, 32], fingerprint recognition benefits more from pores when the fingerprint images cover a small fingerprint area. Therefore, in order to emphasize the contribution of pores, we evaluated in this subsection the improvement in fingerprint recognition accuracy brought by the extracted pores on the partial fingerprint image database DBI. We implemented an AFRS like the one in [38], which is based on minutiae and pores. The block diagram of the AFRS is shown in Fig. 8.12. It consists of five main modules: minutia extraction, pore extraction, minutia matching, pore matching, and match score fusion. We use the methods in [8] for the minutia extraction and matching modules. The pore matching is accomplished by the direct pore matching method in [38]. It first establishes initial correspondences between the pores on two fingerprints based on their local features, then uses the RANSAC (Random Sample Consensus) algorithm [39] to refine the pore correspondences, and finally calculates a pore-based similarity score between the two fingerprints based on the number of corresponding pores. In this method, the pore matching is independent of the minutia matching, which makes it very suitable for small partial fingerprint recognition, where minutia matching results are often unreliable due to the limited number of minutiae on small fingerprint fragments [38]. The pore match score and the minutia match score are finally fused by a simple weighted summation to give the final match score between two fingerprint images (before fusion, both match scores are normalized to the range between 0 and 1):

$$MS = \omega \, MS_{minu} + (1 - \omega) \, MS_{pore} \tag{8.8}$$

where ω is the weight of minutiae with respect to pores.
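A minimal sketch of the fusion rule in Eq. (8.8); min-max normalization is one common way to bring both scores into [0, 1], though the chapter does not specify which normalization scheme was used:

```python
def min_max_normalize(score, low, high):
    """Map a raw match score into [0, 1] given the observed score range."""
    return (score - low) / (high - low)

def fuse_scores(ms_minu, ms_pore, omega=0.8):
    """Weighted-sum fusion of normalized minutia and pore scores (Eq. 8.8)."""
    return omega * ms_minu + (1 - omega) * ms_pore
```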
Fig. 8.12 Block diagram of the AFRS used in the partial fingerprint recognition experiments
By using database DBI and the AFRS described above, we evaluated the fingerprint recognition performance of the four pore extraction methods. Considering the expensive computational cost, the following matches were carried out: (1) genuine matches: each fingerprint image in the second session was matched with all the fingerprint images in the first session, leading to 3700 genuine matches; and (2) imposter matches: the first fingerprint image of each finger in the second session was matched with the first fingerprint image of all the other fingers in the first session, resulting in 21,756 imposter matches. Figure 8.13 shows the equal error rates (EER) obtained by the four methods on DBI under different weights. By using only minutiae, the EER is 17.67%. The EERs when using only pores (i.e., ω = 0) are, respectively, 21.96% for Ray's method, 21.53% for Jain's method, 22.99% for the adaptive DoG based method, and 20.49% for the proposed method. By fusing minutiae and pores, the best results are 12.41% (ω = 0.9), 12.4% (ω = 0.8), 14.18% (ω = 0.9), and 11.51% (ω = 0.8) for the four methods, respectively. Figure 8.14 shows their receiver operating characteristic (ROC) curves when the best EERs are obtained. It can be seen that the proposed method leads to the best recognition results. The improvements in recognition accuracy achieved by fusing the pore features over using only minutia features are 29.77%, 29.82%, 19.75%, and 34.86%, respectively, for the four methods.
8.2.4.3 Pore Based Full-Size Fingerprint Recognition
The experiments in this subsection evaluate the contribution of the extracted pores to full-size fingerprint recognition. We compared the four pore extraction methods by using a different AFRS, as in [16], which is appropriate for full-size fingerprint images, and using the full-size fingerprint image database DBII. Figure 8.15 shows the block diagram of this AFRS. We used the same minutia extraction and matching modules and the same score fusion module as in the last subsection, but implemented the pore matching module with the ICP (iterative closest point) based method of [7, 16]. This is because fingerprint images covering a large fingerprint area contain sufficient minutiae to provide reliable minutia match results, so the pores can be compared locally in the neighborhoods of matched minutiae; in this way, the pores can be matched much more efficiently. Specifically, after matching the minutiae on two fingerprints, the pores lying in the neighborhoods of each pair of matched minutiae are matched by the ICP algorithm [7, 16], resulting in N match scores (N is the number of pairs of matched minutiae), each defined as the summation of two terms: the mean distance between all matched pores and the percentage of unmatched pores. The pore match score between the two fingerprints is finally defined as the average of the three smallest match scores.
Fig. 8.13 The EERs of the four methods on DBI with different fusing weights
Fig. 8.14 The ROC curves of the four methods on DBI when the lowest EERs are obtained
Fig. 8.15 Block diagram of the AFRS used in full-size fingerprint recognition experiments
By using the above AFRS, we matched all the fingerprint images in DBII pairwise (avoiding symmetric matches), generating 6660 genuine match scores and 1,087,800 imposter match scores. Figure 8.16 presents the EERs obtained by the four methods on DBII. Because the fingerprint images in DBII are full-size images with more minutiae, the EER of using only minutiae is 0.61%, much better than that obtained on DBI (17.67%, see Sect. 8.2.4.2). When only pores are used, the EER of Ray's method is 9.45%, Jain's method 8.82%, the adaptive DoG based method 10.85%, and the proposed method 7.81%. The best results of these methods after fusion with minutia scores are 0.59% (ω = 0.9), 0.6% (ω = 0.9), 0.56% (ω = 0.8), and 0.53% (ω = 0.7). Figure 8.17 shows the corresponding ROC curves when the best results are obtained. The proposed method improves on the best EERs of Ray's method, Jain's method, and the adaptive DoG based method by 10.17%, 11.67%, and 5.36%, respectively.
8.2.4.4 Computational Complexity Analysis
We now briefly analyse the computational complexity of the methods. As shown in Fig. 8.6, the proposed pore extraction method has five main steps: (A) partition, (B) ridge orientation and frequency estimation, (C) ridge map extraction, (D) pore detection, and (E) post-processing. Among these, steps (B) and (C) are common to most automatic fingerprint recognition systems. Step (A) needs to calculate the two measurements, OC and IC, for each block, which can be done very efficiently. On a PC with a 2.13 GHz Intel(R) Core(TM)2 6400 CPU and 2 GB of RAM, the Matlab implementation of the method took about 0.05 s on average to conduct step (A) for one partial fingerprint image used in the experiments (note that down-sampling was applied in the experiments). In the pore detection step (D), the main operation is the convolution of the pore models with the fingerprint image. This is essentially common to all filtering-based pore extraction methods, including the three counterpart methods considered in the experiments here.
Fig. 8.16 The EERs of the four methods on DBII when different weights are used. (b) is the zoom-in of (a) when the weight ranges from 0.6 to 1
However, because Jain's and Ray's methods both apply a single pore filter to the whole fingerprint image, they are more efficient than the adaptive DoG based method and the method proposed here, which apply different pore filters to different blocks. Specifically, Jain's and Ray's methods took less than 0.1 s to detect the pores on one partial fingerprint image, whereas the other two methods took about 0.5 s. The last post-processing step (E) is also common to all pore extraction methods.
Fig. 8.17 The ROC curves of the four methods on DBII when the best results are obtained
Using our Matlab implementation, it took about 0.4 s to carry out all the post-processing operations defined in Sect. 8.2.3.3. From the above analysis, we can see that the proposed pore extraction method is a little more complex than Jain's and Ray's methods due to the more elaborate pore models on which it is based. However, considering the accuracy gained by the proposed method, the increased computational complexity is justified. More importantly, its computational cost is still acceptable: with the Matlab implementation, it took about 1–2 s to extract the pores on one partial fingerprint image, and we expect that the computational cost can be much reduced by using languages like C/C++ and after optimization.
8.3 Learning-Based Pore Extraction
8.3.1 Methodology Statement
The flowchart of the learning-based pore extraction method is shown in Fig. 8.18. The method can be roughly divided into three steps: (i) coarse detection of pore candidates using logical operations; (ii) CNN judgement of the pore candidates; (iii) refinement by logical and morphological operations after the judgement. These steps are introduced in detail in the following subsections.
Fig. 8.18 The flowchart of the learning-based pore extraction method
8.3.1.1 Coarse Detection of Pore Candidates
The CNN can be used for sweat pore extraction after training. However, running the CNN on every pixel would be computationally expensive, so a coarse detection step is first applied to the test images to select candidate pore positions. Coarse detection of pore candidates consists of a series of logical operations. As shown in Fig. 8.19, binary and ridge images of the test fingerprint images are required. First, Bernsen's binarization method [40], which is based on local adaptive thresholds, is used to obtain binary images; adaptive thresholds achieve better performance than a global threshold in fingerprint image binarization [41]. Then, Lin's enhancement method [42] is used to obtain ridge images, and the candidate positions of the test image are obtained by applying an XOR operation to the ridge image and the binary image pixel by pixel. In the test step, the CNN is applied only at these candidate pixels. This detection step reduces the amount of computation and also helps to improve recognition accuracy, because it removes most of the impossible pore positions. A sketch is given below.
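Assuming the binary and ridge maps have been produced upstream (by Bernsen's binarization and Lin's enhancement, respectively) with ridges as foreground, the coarse detection itself reduces to a single logical operation; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def coarse_pore_candidates(binary_img, ridge_img):
    """Coarse detection of pore candidates (Sect. 8.3.1.1).

    Pores appear as valley-colored pixels inside ridge regions, so a
    pixel-wise XOR between the ridge map and the binary map highlights
    exactly the positions where the two maps disagree.
    """
    return np.logical_xor(ridge_img.astype(bool), binary_img.astype(bool))
```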
8.3.1.2 CNN Architecture and Implementation Details
The "Judge-CNN" architecture is inspired by VGGNet [43]. The purpose of this network is to judge whether or not there is a pore in the center area of the input image; the CNN is therefore used to solve a binary classification problem in this study. When labeling the training data, if a pore's centroid is located in the center of the input patch, the patch is labeled as a positive sample; otherwise, it is labeled as a negative sample.
Fig. 8.19 The flow chart of detecting pore candidates
Fig. 8.20 "Judge-CNN" architecture for candidate pore judgement
The architecture of the judge-net is illustrated in Fig. 8.20. The network is composed of 5 convolutional layers and 2 fully-connected layers, followed by a softmax output layer. All layers apart from the output layer are followed by a rectified linear unit (ReLU) [31], and all of them share a standard set of hyper-parameters. The input image size is set to 11 × 11. As in VGGNet [43], the convolutional filter size is set to 3 × 3 with a stride of 1. The number of filters in the convolutional layers is set to 2^(⌈l/2⌉+4), where l is the layer number. The two fully-connected layers consist of 128 and 32 nodes, respectively; to prevent overfitting, both of them use dropout [44] with a ratio of 0.5. One inferred drawback of this architecture is that, because of the input image size limit, pores located near the borders of the image may not be detected very well. A sketch of the architecture is given after this paragraph.

In order for the CNN to achieve a better judgement capacity, the training data must be selected carefully. The training data selection areas are produced as shown in Fig. 8.21. Given a fingerprint image whose pore centroids are labeled as ground truth, we use distance thresholds to decide whether each pixel is sufficiently near a pore. The Euclidean distance between each pixel and its nearest pore centroid is calculated. Let dp and dn denote the positive and negative distance thresholds, respectively. As shown in Fig. 8.21, areas in which every pixel's distance is higher than dn are labeled as negative areas; areas in which every pixel's distance is lower than dp are labeled as positive areas. dp and dn are empirically set to 3 and 5, respectively. The remaining areas, besides the positive and negative areas, are labeled as ignored areas.
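The following PyTorch sketch mirrors the description above (the original was implemented in Caffe); the 'same' padding and the resulting flatten size are our assumptions, since the text does not state how the spatial size evolves through the network:

```python
import torch
import torch.nn as nn

class JudgeCNN(nn.Module):
    """Sketch of the Judge-CNN: five 3x3 convolutions (stride 1) whose
    filter counts follow 2^(ceil(l/2)+4) = 32, 32, 64, 64, 128, then two
    fully-connected layers (128 and 32 nodes) with dropout 0.5, and a
    2-way output for the pore / non-pore decision on an 11x11 patch."""

    def __init__(self):
        super().__init__()
        layers, in_ch = [], 1
        for l in range(1, 6):                  # five conv layers
            out_ch = 2 ** (-(-l // 2) + 4)     # -(-l//2) == ceil(l/2)
            # 'same' padding keeps the 11x11 spatial size (an assumption)
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                       nn.ReLU()]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_ch * 11 * 11, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, 32), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(32, 2),                  # softmax is applied in the loss
        )

    def forward(self, x):                      # x has shape (N, 1, 11, 11)
        return self.classifier(self.features(x))
```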
Fig. 8.21 Training data selection in an automatic way
Then, to gather enough positive and negative samples, we randomly select locations (pixels) in the fingerprint image from the corresponding areas. Each training sample is a patch of size 11 × 11 cropped from the fingerprint image, centered at a selected pixel. To prevent ambiguous training labels, it is important not to pick a patch whose center is located in the ignored area. To improve the robustness of the trained model, we use data augmentation: a random selection of grayscale transformation, contrast variation, and noise degradation is applied to every patch once it is picked from the fingerprint image, with the parameters of the augmentation also randomly selected within limits. Furthermore, random rotations and flips are applied to the patches. Millions of training samples were collected in this way.

The training implementation details are as follows. The weights in each layer are initialized from a zero-mean Gaussian distribution with standard deviation 0.01. Training adopts stochastic gradient descent with momentum 0.9 as the optimization method. The mini-batch size is set to 256; the base learning rate is set to 0.005 and reduced every 10,000 iterations; the maximum number of iterations is set to 50,000. When the loss has converged for a sufficient period, training is stopped manually. The Caffe [45] framework was used to implement the network training and testing. The workstation has a 2.8 GHz CPU with 32 GB RAM and an NVIDIA Quadro M5000M GPU with 8 GB of memory. All methods were implemented using MATLAB. A possible training configuration is sketched below.
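Reusing the JudgeCNN sketch above, the stated training hyper-parameters might be wired up as follows in PyTorch (the learning-rate decay factor is not given in the text, so the 0.1 below is an assumption):

```python
import torch

model = JudgeCNN()  # the sketch from Sect. 8.3.1.2 above
# Zero-mean Gaussian initialization with standard deviation 0.01
for p in model.parameters():
    if p.dim() > 1:
        torch.nn.init.normal_(p, mean=0.0, std=0.01)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
# Reduce the learning rate every 10,000 iterations (decay factor assumed)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10_000, gamma=0.1)
criterion = torch.nn.CrossEntropyLoss()  # softmax + log-loss over the 2 classes
```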
8.3.1.3 Refinement
After obtaining the judged map by applying the CNN to the candidate pore map, a refinement step is needed to improve the result. As shown in Fig. 8.22, the refinement step includes two substeps: an elimination step and a locating step. Like the coarse detection step, the refinement step is implemented through logical and morphological operations.
Fig. 8.22 The flow chart of refinement step
The elimination step aims mainly to eliminate falsely detected results. There are many falsely detected pores in the judge map; many of them are located in the valley areas of the original test fingerprint images, because the ridge image has wider ridges than the binary image and the CNN judge net cannot remove them completely. We therefore apply morphological erosion to the ridge image to obtain a thinner ridge image, which has a smaller ridge width than the original one. Then, the eliminated judge map is obtained by combining the judge map and the thinner ridge map pixel by pixel with an XOR operation. In this way, many falsely detected pores are removed; at the same time, however, this operation also removes some truly detected pores whose centers are located far from the ridge skeleton (most of them small open pores).

The last step obtains the pore map by computing the centroids of all the connected components in the eliminated judge map. The coordinates of each centroid pixel in the eliminated judge map give the location of a detected pore. We discard the white regions whose area is equal to or less than an empirical threshold before computing the centroids. A sketch of this step is given below.
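The text describes the elimination as a pixel-wise XOR between the judge map and the thinner ridge map; with a white-ridge convention, the described effect of keeping only detections on the thinner ridges corresponds to a pixel-wise AND, which the following hedged sketch uses (the area threshold is hypothetical):

```python
import numpy as np
from scipy import ndimage

def refine_judged_map(judged_map, ridge_img, min_area=4):
    """Refinement (Sect. 8.3.1.3): erode the ridge map, keep only judged
    pore pixels lying on the thinner ridges, then locate pores as
    centroids of the surviving connected components above an area threshold."""
    thin_ridge = ndimage.binary_erosion(ridge_img.astype(bool))
    kept = judged_map.astype(bool) & thin_ridge   # drop detections off the ridges
    labels, n = ndimage.label(kept)
    sizes = ndimage.sum(kept, labels, range(1, n + 1))
    valid = [k + 1 for k, s in enumerate(sizes) if s > min_area]
    return ndimage.center_of_mass(kept, labels, valid)
```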
8.3.2 Experimental Results and Analysis
8.3.2.1 Databases and Protocols
In the experiments, the High-Resolution-Fingerprint (HRF) database [17] from PolyU was used to evaluate the pore extraction performance. The HRF database contains 30 fingerprint images with manually labeled ground truth pores. Each image has a spatial size of 320 pixels by 240 pixels and a resolution of 1200 dpi. The labels indicate that there are 12,767 pores in the database in total. A k-fold cross validation strategy [18] is used for evaluating the performance; we choose k = 5, with three folds for training, one fold for validation, and one fold for testing. Each fold is composed of six different images, and the folds are mutually exclusive. The performance of the method is calculated by averaging the accuracy over the five testing folds.
Two figures of merit are used for evaluating the performance: the true detection rate (RT) and the false detection rate (RF) [19]. RT represents the ratio of the number of detected real pores to the number of all true pores present in the image, while RF indicates the ratio of the number of falsely detected pores to the total number of detected pores. If the Euclidean distance between the coordinates of a detected pore and a ground truth label is less than d pixels, the pore is considered to be detected correctly. We set d = rw/2, where rw is the average ridge width in the image. Obviously, the algorithm performs best when RT and RF are one and zero, respectively, which indicates that all pores are detected correctly. A sketch of these metrics is given below.
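A possible implementation of the two metrics, assuming a simple greedy nearest-neighbor matching between detections and ground truth (the exact matching procedure is not specified in the text):

```python
import numpy as np

def detection_rates(detected, ground_truth, d):
    """Compute RT and RF. A detected pore counts as a true detection if it
    lies within d pixels (Euclidean) of some ground-truth pore."""
    detected = np.asarray(detected, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    matched, false_det = set(), 0
    for p in detected:
        dists = np.linalg.norm(ground_truth - p, axis=1)
        nearest = int(np.argmin(dists))
        if dists[nearest] < d:
            matched.add(nearest)   # this ground-truth pore is detected
        else:
            false_det += 1         # no ground-truth pore close enough
    rt = len(matched) / len(ground_truth)
    rf = false_det / max(len(detected), 1)
    return rt, rf
```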
8.3.2.2 Validation of the Learning-Based Method
A technique for reducing the number of tested pore positions was introduced in Sect. 8.3.1.1. Its compression rate is determined by the size of the window used during the binarization. Let w denote the window size parameter (the window is of size 2w + 1); the results for different w are shown in Table 8.2.

Table 8.2 Average performance metrics in percentage and standard deviation (in parentheses)

w          0              1              2              3              4
RT         93.92 (1.09)   94.27 (0.72)   93.70 (0.85)   92.75 (0.98)   91.61 (1.12)
RF         14.52 (3.51)   8.62 (0.83)    10.45 (1.37)   10.54 (1.48)   10.28 (1.48)
Compress   0%             74.47%         80.69%         82.82%         83.53%

It is obvious that the compression rate grows with the window size, indicating that the amount of CNN computation is greatly reduced. Comparing w = 0 (testing all 230 by 310 pixels) with w = 1 shows that testing only the candidate positions both reduces the computation and yields better performance: RF is reduced markedly because many invalid regions are removed when choosing the candidate positions, and RT is improved slightly, perhaps because when all positions are tested, the regions of two or more pores may become connected, which leads to mistakes when picking the centroid of a candidate region and thus lowers RT. With further growth of w, however, more candidate positions are removed, so RT decreases; shrinking the candidate regions also tends to raise RF, because the total number of detected pores decreases. Based on this performance, we choose w = 1 for the following experiments. Before computing the centroid of each white region, the refinement step introduced in Sect. 8.3.1.3 is applied; the result after refinement is shown in Table 8.3.
Table 8.3 Average performance metrics with the parameter w = 1

Refinement     RT            RF
Non-refined    94.27 (0.72)  8.62 (0.83)
After refine   93.14 (1.43)  4.39 (1.44)

It is obvious that after refinement RF is reduced to nearly half, while RT is slightly decreased but remains at the same level as the non-refined result.
8.3.2.3 Comparison with the State-of-the-Art Methods
Below, we compare the presented learning-based method with existing methods in terms of RT and RF. The counterpart methods include model-based methods such as Jain's model [16], Adapt. DoG and DAPM [22], morphology-based methods such as Xu's method [27], and learning-based methods such as Labati's model [28] and DeepPore [30]. The comparison results are shown in Table 8.4. It can be seen that the proposed method achieves a lower RF and a smaller standard deviation than the other methods. As for RT, the proposed method achieves results similar to DeepPore and better than the other methods. Some extraction results are shown in Fig. 8.23, which demonstrates the effectiveness of the proposed method: most of the pores were detected correctly. As explained in Sect. 8.3.1.2, due to the limits of the input image, most of the pores located along the image boundaries are missed.
8.4 Summary
This chapter reviews the methods for extracting pores on fingerprint images. We divide pore extraction methods into different categories. While earlier methods detect pores via tracking ridge skeletons of fingerprint images, we focus on the other two categories, i.e., model-based and learning-based pore extraction methods. For each category, we introduce in detail one state-of-the-art method together with extensive evaluation results.
Table 8.4 Average performance metrics in percentage and standard deviation (in parentheses) of six state-of-the-art methods and the proposed method

Method              RT            RF
Gabor Filter [16]   75.90 (7.5)   23.00 (8.2)
Adapt. DoG [22]     80.80 (6.5)   22.20 (9.0)
DAPM [22]           84.80 (4.5)   17.60 (6.3)
Xu et al. [27]      85.70         11.90
Labati et al. [28]  84.96 (7.81)  15.31 (6.2)
DeepPore [30]       93.09 (4.63)  8.64 (4.15)
Proposed method     93.14 (1.43)  4.39 (1.44)
Fig. 8.23 Examples of the pores detected by the proposed method. True predictions are denoted by green dots, while missed pores and false predictions are marked in yellow and red, respectively
References

1. Yoon, S., Jain, A.K.: Longitudinal study of fingerprint recognition. Proc. Natl. Acad. Sci. U.S.A. 112(28), 8555 (2015)
2. Xu, P., Yan, Y.: HPTLC fingerprint identification of commercial ginseng drugs – reinvestigation of HPTLC of ginsenosides. J. High Resolut. Chromatogr. 10(11), 607–613 (2015)
3. Liu, F., Zhang, D., Shen, L.: Study on novel curvature features for 3D fingerprint recognition. Neurocomputing 168(C), 599–608 (2015)
4. Ratha, N., Bolle, R.: Automatic Fingerprint Recognition Systems. Springer, New York (2004)
5. Ratha, N.K., Karu, K., Chen, S., Jain, A.K.: A real-time matching system for large fingerprint databases. IEEE Trans. Pattern Anal. Mach. Intell. 18, 799–813 (1996)
6. Jain, A., Hong, L., Bolle, R.: On-line fingerprint verification. IEEE Trans. Pattern Anal. Mach. Intell. 19(4), 302–314 (1997)
7. Jain, A.K., Chen, Y., Demirkus, M.: Pores and ridges: fingerprint matching using level 3 features. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 15–27 (2007)
8. Ross, A., Jain, A., Reisman, J.: A hybrid fingerprint matcher. Pattern Recogn. 36, 1661–1673 (2003)
9. He, Y., Tian, J., Li, L., Chen, H., Yang, X.: Fingerprint matching based on global comprehensive similarity. IEEE Trans. Pattern Anal. Mach. Intell. 28(6), 850–862 (2006)
10. CDEFFS: Data Format for the Interchange of Extended Fingerprint and Palmprint Features. Working Draft Version 0.4 (2009). Available at http://fingerprint.nist.gov/standard/cdeffs/index.html
11. Roddy, A., Stosz, J.: Fingerprint features – statistical analysis and system performance estimates. Proc. IEEE 85, 1390–1421 (1997)
12. Parsons, N.R., Smith, J.Q., Thonnes, E., Wang, L., Wilson, R.G.: Rotationally invariant statistics for examining the evidence from the pores in fingerprints. Law Probab. Risk 7, 1–14 (2008)
13. Stosz, J.D., Alyea, L.A.: Automated system for fingerprint authentication using pores and ridge structure. In: Proceedings of the SPIE Conference on Automatic Systems for the Identification and Inspection of Humans, pp. 210–223, San Diego (1994)
14. Kryszczuk, K., Drygajlo, A., Morier, P.: Extraction of level 2 and level 3 features for fragmentary fingerprints. In: Proceedings of the 2nd COST Action 275 Workshop, pp. 83–88 (2004)
15. Kryszczuk, K., Morier, P., Drygajlo, A.: Study of the distinctiveness of level 2 and level 3 features in fragmentary fingerprint comparison. In: BioAW 2004, LNCS 3087, pp. 124–133 (2004)
16. Jain, A.K., Chen, Y., Demirkus, M.: Pores and ridges: fingerprint matching using level 3 features. In: Proceedings of ICPR 2006, pp. 477–480 (2006)
17. Zhao, Q., Zhang, L., Zhang, D., Luo, N.: Direct pore matching for fingerprint recognition. Adv. Biometrics ICB 5558, 597–606 (2009)
18. Jain, A.K., Chen, Y., Demirkus, M.: Pores and ridges: high-resolution fingerprint matching using level 3 features. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 15–27 (2007)
19. Liu, F., Zhao, Q., Zhang, L., Zhang, D.: Fingerprint pore matching based on sparse representation. In: Proceedings of the 20th International Conference on Pattern Recognition (2010)
20. Liu, F., Zhao, Q., Zhang, D.: A novel hierarchical fingerprint matching approach. Pattern Recogn. 44(8), 1604–1613 (2011)
21. Liu, F., Zhao, Y., Shen, L.: Feature guided fingerprint pore matching. In: Zhou, J., et al. (eds.) CCBR 2017. LNCS, vol. 10568, pp. 334–343. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69923-3_36
22. Zhao, Q., Zhang, D., Zhang, L., Luo, N.: Adaptive fingerprint pore modeling and extraction. Pattern Recogn. 43(8), 2833–2844 (2010)
23. Ray, M., Meenen, P., Adhami, R.: A novel approach to fingerprint pore extraction. In: Proceedings of the 37th Southeastern Symposium on System Theory, pp. 282–286 (2005)
24. Abhyankar, A., Schuckers, S.: Towards integrating level-3 features with perspiration pattern for robust fingerprint recognition. In: IEEE International Conference on Image Processing, pp. 3085–3088. IEEE (2010)
25. Malathi, S., Maheswari, S., Meena, C.: Fingerprint pore extraction based on marker controlled watershed segmentation. In: The International Conference on Computer and Automation Engineering, vol. 3, pp. 337–340. IEEE (2010)
26. Da Silva Teixeira, R.F., Leite, N.J.: On adaptive fingerprint pore extraction. In: Kamel, M., Campilho, A. (eds.) ICIAR 2013. LNCS, vol. 7950, pp. 72–79. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39094-4_9
27. Xu, Y., Lu, G., Liu, F., Li, Y.: Fingerprint pore extraction based on multi-scale morphology. In: Zhou, J., et al. (eds.) CCBR 2017. LNCS, vol. 10568, pp. 288–295. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69923-3_31
28. Labati, R., Genovese, A., Muñoz, E., Piuri, V., Scotti, F.: A novel pore extraction method for heterogeneous fingerprint images using convolutional neural networks. Pattern Recogn. Lett. (2017)
29. Wang, H., Yang, X., Ma, L., Liang, R.: Fingerprint pore extraction using U-Net based fully convolutional network. In: Zhou, J., et al. (eds.) CCBR 2017. LNCS, vol. 10568, pp. 279–287. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69923-3_30
30. Jang, H., Kim, D., Mun, S., Choi, S., Lee, H.: DeepPore: fingerprint pore extraction using deep convolutional neural networks. IEEE Signal Process. Lett. 24(12), 1808–1812 (2017)
31. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, pp. 1097–1105 (2012)
32. Zhao, Q., Zhang, D., Zhang, L., Luo, N.: High resolution partial fingerprint alignment using pore–valley descriptors. Pattern Recogn. 43(3), 1050–1061 (2010)
33. Ashbaugh, D.R.: Quantitative–Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology. CRC Press LLC (1999)
34. Sofka, M., Stewart, C.V.: Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures. IEEE Trans. Med. Imaging 25, 1531–1546 (2006)
35. Bazen, A.M., Gerez, S.H.: Systematic methods for the computation of the directional fields and singular points of fingerprints. IEEE Trans. Pattern Anal. Mach. Intell. 24, 905–919 (2002)
36. Hong, L., Wan, Y., Jain, A.K.: Fingerprint image enhancement: algorithms and performance evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 777–789 (1998)
37. Chen, Y., Dass, S., Jain, A.: Fingerprint quality indices for predicting authentication performance. In: Proceedings of AVBPA, pp. 160–170 (2005)
38. Zhao, Q., Zhang, L., Zhang, D., Luo, N.: Direct pore matching for fingerprint recognition. In: Proceedings of ICB, pp. 597–606 (2009)
39. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press (2006)
40. Bernsen, J.: Dynamic thresholding of grey-level images. In: International Conference on Pattern Recognition (1986)
41. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition. Springer, London (2009). https://doi.org/10.1007/978-1-84882-254-2
42. Lin, H., Wan, Y., Jain, A.K.: Fingerprint image enhancement: algorithm and performance evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 777–789 (1998)
43. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint (2014)
44. Hinton, G., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint (2012)
45. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J.: Caffe: convolutional architecture for fast feature embedding, pp. 675–678 (2014)
Chapter 9
Pore-Based Partial Fingerprint Alignment
Abstract This chapter shows an application of pores in the alignment of high resolution partial fingerprints, which is a crucial step in partial fingerprint recognition. Previously developed fingerprint alignment methods, including minutia-based and non-minutia feature based ones, are unsuitable for partial fingerprints because small fingerprint fragments often do not have enough features required by these methods. In this chapter, we introduce an approach to aligning high resolution partial fingerprints based on pores, a type of high resolution fingerprint features that are abundant on even small fingerprint areas. Pores are first extracted from the fingerprint images by using a difference of Gaussian filtering approach. After pore detection, a novel pore–valley descriptor (PVD) is proposed to characterize pores based on their locations and orientations, as well as the ridge orientation fields and valley structures around them. A PVD-based coarse-to-fine pore matching algorithm is then developed to locate pore correspondences. Once the corresponding pores are determined, the alignment transformation between two partial fingerprints can be estimated. The proposed method is compared with representative minutia based and orientation field based methods using a high resolution partial fingerprint dataset and two fingerprint matchers. Experimental results show that the PVD-based method can more accurately locate corresponding feature points, estimate the alignment transformations, and hence significantly improve the accuracy of high resolution partial fingerprint recognition. Keywords Fingerprint alignment · Partial fingerprints · High resolution fingerprints · Pores
9.1 Introduction
Automatic fingerprint recognition systems (AFRS) are nowadays widely used in personal identification applications such as access control [1, 2]. Roughly speaking, there are three types of fingerprint matching methods: minutia-based, correlation-based, and image-based [2, 3]. In minutia-based approaches, minutiae (i.e. endings and bifurcations of fingerprint ridges) are extracted and matched to
Fig. 9.1 An example of high resolution partial fingerprint. It has only five minutiae as marked in (a), but hundreds of pores as marked in (b)
measure the similarity between fingerprints [4–8]. These minutia-based methods are now the most widely used ones [1, 2]. Different from the minutia-based approaches, both correlation-based and image-based methods compare fingerprints in a holistic way. The correlation-based methods spatially correlate two fingerprint images to compute the similarity between them [9], while the image-based methods first generate a feature vector from each fingerprint image and then compute their similarity based on the feature vectors [3, 10–13]. No matter what kind of fingerprint matcher is used, the fingerprint images usually have to be aligned before matching. Later in this section, we will discuss the fingerprint alignment methods in more detail. In order to further improve the accuracy of AFRS, people are now exploring more features in addition to minutiae on fingerprints. The recently developed high resolution fingerprint scanners make it possible to reliably extract level-3 features such as pores. Pores have long been used as useful supplementary features in forensic applications [14, 15]. Researchers have also studied the benefit of including pores in AFRS and validated the feasibility of pore based AFRS [16–21]. Using pores in AFRS has two advantages. First, pores are more difficult to damage or mimic than minutiae [21]. Second, pores are abundant on fingerprints; even a small fingerprint fragment can contain a number of pores (refer to Fig. 9.1). Therefore, pores are particularly useful in high resolution partial fingerprint recognition, where the number of minutiae is very limited. In this chapter, we focus on the alignment of high resolution partial fingerprints and investigate methods for high resolution fingerprint image processing.
9.1.1 High Resolution Partial Fingerprint
In a live-scan AFRS, a user puts his/her finger against the prism and the contact fingerprint region is captured in the resulting image. A small contact region between the finger and the prism leads to a small partial fingerprint image, on which there could be very limited minutiae available for recognition. A natural way to solve the partial fingerprint recognition problem is to make full use of other fine fingerprint features that are abundant on small fingerprint fragments. Sweat pores are such features, and high resolution fingerprint imaging makes it possible to reliably extract them [15]. Most existing high resolution fingerprint recognition methods use full-size fingerprint images which capture large fingerprint areas. However, to capture full fingerprints, high resolution fingerprint images must have much bigger sizes than conventional low resolution fingerprint images, and consequently much more computational resource is required to process them. With the increasing demand for AFRS on mobile devices and other small portable devices, small fingerprint scanners and limited computational resources are very common [22]. Consequently, algorithms for aligning and matching partial fingerprint images are becoming important. Therefore, unlike previous studies of high resolution fingerprint recognition, this chapter uses high resolution partial fingerprint images to study the partial fingerprint alignment problem, and a feasible algorithm will be proposed. Although some methods have been proposed to construct full fingerprint templates from a number of partial fingerprint images [23], it is expensive or even impossible to collect sufficient fingerprint fragments to construct a reliable full fingerprint template. Moreover, some errors (e.g. spurious features) could be introduced in the construction process. Thus, it is meaningful and very useful to develop algorithms for aligning and matching partial fingerprints against partial fingerprints. Some researchers have studied the problem of matching a partial fingerprint to full template fingerprints. In [22], Jea and Govindaraju proposed a minutia-based approach to matching incomplete or partial fingerprints with full fingerprint templates. Their approach uses brute-force matching when the input fingerprints are small and few minutiae are present, and secondary feature matching otherwise. Since this approach is based on minutiae, it is very likely to produce false matches when there are very few minutiae, and it is not applicable when there are no minutiae on the fingerprint fragments. Kryszczuk et al. [16, 17] proposed to utilize pore locations to match fingerprint fragments. Using high resolution fingerprint images (approx. 2000 dpi in [16, 17]), they studied how pores might be used in matching partial fingerprints and showed that the smaller the fingerprint fragments, the greater the benefit of using pores. In their method, Kryszczuk et al. aligned fingerprints by searching for the transformation parameters which maximize the correlation between the input fingerprint fragment and the candidate part of the full fingerprint template. Recently, Chen and Jain [24] employed minutiae, dots, and
incipient ridges to align and match partial fingerprints with full template fingerprints. One drawback of most of the above approaches to aligning fragmentary fingerprints is that they are mainly based on features which could be very few (e.g. minutiae) or even absent (e.g. dots and incipient ridges) on small fingerprint fragments (refer to Fig. 9.1). When the template fingerprints are also small fragments, it becomes difficult to get correct results due to the lack of features. In [16, 17], Kryszczuk et al. proposed a correlation based blind searching approach to fragmentary fingerprint alignment. As we will show later, however, this method has limited accuracy because it has to discretize the transformation parameter space.
9.1.2 Fingerprint Alignment
Fingerprint alignment or registration is a crucial step in fingerprint recognition. Its goal is to retrieve the transformation parameters between fingerprint images and then align them for matching. Some non-rigid deformation or distortion can occur during fingerprint image acquisition. It is very costly to model and remedy such distortions in fingerprint registration, and they can be compensated to some extent in subsequent fingerprint matching. Thus, the majority of existing fingerprint alignment methods consider only translation and rotation, although some deformable models [25, 26] have been proposed. According to the features used, existing fingerprint alignment methods can be divided into two categories: minutia based and non-minutia feature based methods. Minutia based methods are now the most widely used ones [3–8, 27–31]. Non-minutia feature based methods [9, 10, 32–36] include those using image intensity values, orientation fields, cores, etc. One problem in applying these methods to partial fingerprints is that the features they require could be very few on the fragments; consequently, they will either lead to incorrect results or not be applicable at all. There are roughly two kinds of methods for estimating alignment transformations. The first kind quantizes the transformation parameters into finite sets of discrete values and searches for the best solution in the quantized parameter space [9, 16, 17, 28, 29, 33–36]. The alignment accuracy of these methods is thus limited by the quantization. The second kind first detects corresponding feature points (or reference points) on fingerprints and then estimates the alignment transformation based on the detected corresponding points [4–8, 10, 27, 30–32]. Most such methods use minutiae as the feature points. As discussed before, however, it is problematic to align partial fingerprints based on minutiae because of the lack of such features on the fingerprint fragments.
9.1.3 Partial Fingerprint Alignment Based on Pores
Following the second kind of alignment methods, we need to find reference points other than minutiae on fingerprints for the purpose of aligning partial fingerprints. One possible solution is to use sufficiently densely sampled points on ridges as reference points. However, it is hard, or even impossible, to ensure that identical points are sampled on different fingerprint images, and too dense a sampling of points makes the matching computationally prohibitive. In contrast, sweat pores (like minutiae) are unique biological characteristics and persist on a finger throughout life. Compared with minutiae, they are much more abundant on small partial fingerprints. Therefore, pores can serve as reliable reference points for aligning partial fingerprint images. Although pore shapes and sizes are also important and biophysically distinctive features [14], they cannot be reliably captured on fingerprint images because they are greatly affected by the pressure of the fingertip against the scanner. Moreover, the status of a pore can change between open and closed from time to time. Therefore, in general only the locations of pores are used in recognizing the pores and the fingerprints [15]. Considering the abundance of pores on partial fingerprints, in this chapter we introduce, to the best of our knowledge for the first time, an approach to aligning partial fingerprints based on pores reliably extracted from high resolution partial fingerprint images. This approach, by making use of the pores on fingerprints as reference feature points, can effectively align partial fingerprints and estimate the transformation between them even when there is small overlap and large translation and rotation. We first propose an efficient method to extract pores, and then present a descriptor of pores, namely the pore–valley descriptor (PVD), to determine the correspondences between them. The PVD describes a pore using its location and orientation, the ridge orientation inconsistency in its neighborhood, and the structure of the valleys surrounding it. The partial fingerprints are first matched based on their PVDs, and the obtained pore correspondences are further refined using the global geometrical relationship between the pores. The transformation parameters are then calculated from the best matched pores. The experiments demonstrate that the proposed PVD-based alignment method can effectively detect corresponding pores and then accurately estimate the transformation between partial fingerprints. It is also shown that the proposed alignment method can significantly improve the accuracy of partial fingerprint recognition. The rest of this chapter is organized as follows. Section 9.2 presents methods for extracting pores and valleys and defines the pore–valley descriptor (PVD). Section 9.3 presents the PVD-based fingerprint alignment method in detail. Section 9.4 performs extensive experiments to verify the effectiveness of the proposed method. Section 9.5 concludes the chapter.
9.2 Feature Extraction
The fingerprint features, including pores, ridges, and valleys, will be used in the proposed method. The extraction of ridge orientations, frequencies, and ridge maps has been well studied in the literature [1, 2]. In this chapter, we use the classical methods proposed by Jain et al. [4, 37] to extract ridge orientations, frequencies, and ridge maps. Because ridges and valleys are complementary on fingerprints, it is a simple matter to get skeleton valley maps by thinning the valleys on the complement of ridge maps. To extract pores, we divide the fingerprint into blocks and use Gaussian matched filters to extract them block by block. The scales of the Gaussian filters are adaptively determined according to the ridge frequencies of the blocks. After extracting orientation fields, valleys, and pores, we can then generate the pore–valley descriptor for each pore. Next we describe the feature extraction methods in detail.
9.2.1 Ridge and Valley Extraction
The partial fingerprint images considered here have a higher resolution (approx. 1200 dpi in this chapter) than conventional fingerprints (about 500 dpi), so that level-3 features such as pores can be reliably extracted from them. To extract ridges and valleys, it is not necessary to work directly on images of such a high resolution. In order to save computational cost, we smooth the image and down-sample it to half of its original resolution, and use the method in [37] to calculate the ridge orientations and frequencies. Based on local ridge orientations and frequencies, a bank of Gabor filters is used to enhance the ridges on the fingerprint. The enhanced fingerprint image is then binarized to obtain the binary ridge map. On fingerprints, valleys and ridges are complementary to each other; therefore, we can easily get the binary valley map as the complement of the binary ridge map. In order to exclude the effect of the background on the complement calculation, the fingerprint region mask [37] is employed to filter out the background, if any. The binary valley map is then thinned to make all valleys single-pixel-wide lines. On the resulting skeleton valley map, there can be some false and broken valleys due to scars and noise. Thus we post-process it by connecting valley endings if they are very close and have opposite directions, and by removing valley segments between valley endings and/or valley bifurcations if they are very short or their orientations differ much from the local ridge orientations. Finally, we up-sample the obtained ridge orientation and frequency images, binary ridge map, and skeleton valley map to the original resolution. Figure 9.2b shows the skeleton valley map extracted from the original fingerprint fragment in Fig. 9.2a.
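As a rough illustration of this pipeline, the sketch below applies a single Gabor kernel in place of the block-wise oriented filter bank of [37]; all parameter values and the fixed orientation are assumptions, and the function name is illustrative.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def extract_valley_skeleton(img, ridge_period=9.0, orientation=0.0):
    """Simplified ridge/valley extraction for one fingerprint image.

    The chapter uses a bank of Gabor filters tuned to the local ridge
    orientation and frequency; a single kernel is applied here for brevity.
    """
    # Down-sample to half resolution to save computation.
    half = cv2.resize(img, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)

    # Gabor enhancement tuned to the (assumed) local period/orientation.
    gabor = cv2.getGaborKernel(ksize=(17, 17), sigma=4.0,
                               theta=orientation, lambd=ridge_period,
                               gamma=0.5, psi=0)
    enhanced = cv2.filter2D(half.astype(np.float32), -1, gabor)

    # Binarize: on contact-sensor images ridges are dark, valleys bright.
    _, ridge_bin = cv2.threshold(enhanced, enhanced.mean(), 1,
                                 cv2.THRESH_BINARY_INV)

    valley_bin = 1 - ridge_bin                 # valleys = complement of ridges
    valley_skel = skeletonize(valley_bin > 0)  # single-pixel-wide valleys

    # Up-sample back to the original resolution.
    return cv2.resize(valley_skel.astype(np.uint8),
                      (img.shape[1], img.shape[0]),
                      interpolation=cv2.INTER_NEAREST)
```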
Fig. 9.2 (a) Original fingerprint image; (b) extracted skeleton valley map; Gaussian filtering output (c) at a small scale and (d) at a large scale; (e) difference between (c) and (d); (f) extracted pores after post-processing (pores are marked by red circles)
9.2.2 Pore Extraction
Referring to Figs. 9.1 and 9.2a, on the fingerprint images captured using an optical contact fingerprint sensor, ridges (valleys) appear as dark (bright) lines, whereas pores are bright blobs on ridges, either isolated (i.e. closed pores) or connected with valleys (i.e. open pores). In general pores are circle-like structures and their spatial distributions are similar to 2-D Gaussian functions. Meanwhile, the cross sections of valleys are 1-D Gaussian-like functions with different scales. To be specific, valleys usually have bigger scales than pores. Based on this observation, we use two 2-D Gaussian filters, one with a small scale and the other with a large scale, to enhance the image. The difference between their outputs can then give an initial pore extraction result. This procedure is basically the DoG (difference of Gaussian) filtering, which is a classic blob detection approach. The difficulty here is how to estimate the scales of the Gaussian filters. Considering that the scale of either pores or valleys is usually not uniform across a fingerprint image and different fingerprints could have different ridge/valley frequencies, we partition the fingerprint into a number of blocks and estimate adaptively the scales of Gaussian filters for each block. Take a block image IB as an example. Suppose the mean ridge period over this block is p. It is a good measure of
the scale in its corresponding fingerprint block. Thus, we set the standard deviations of the two Gaussian filters to k_1 p and k_2 p, respectively (0 < k_1 < k_2 are two constants). Their outputs are

F_1 = G_{k_1 p} \ast I_B, \quad F_2 = G_{k_2 p} \ast I_B    (9.1)

G_\sigma(x, y) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(x^2 + y^2)/(2\sigma^2)} - m_G, \quad |x|, |y| \le 3\sigma    (9.2)

where '\ast' denotes convolution and m_G is used to normalize the Gaussian filter to be zero-mean. Note that the settings of k_1 and k_2 should take into consideration the ridge and valley widths and the size of pores. In our experiments, we empirically chose their values based on the fingerprint database we used. The filtering outputs F_1 and F_2 are further normalized to [0, 1] and binarized, resulting in B_1 and B_2. The small-scale Gaussian filter G_{k_1 p} enhances both pores and valleys, whereas the large-scale filter G_{k_2 p} enhances valleys only. Therefore, subtracting B_2 from B_1, we obtain the initial pore extraction result: P_B = B_1 - B_2. To remove possible spurious pores from the initial result P_B, we post-process it with the following constraints. (i) Pores should reside on ridges only; to implement this constraint, we use the binary ridge map as a mask to filter the extracted pores. (ii) Pores are circle-like features; we require that for a true pore, the eccentricity of its region be less than a threshold. From Figs. 9.2e and f, it can be seen that this operation successfully removes the spurious pores caused by valley contours, i.e. the line-shaped features in Fig. 9.2e. (iii) Pores should be within a range of valid sizes; we measure the size of a pore by counting the pixels inside its region, and in our experiments we set the size between 3 and 30. (iv) The mean intensity of a true pore region should be large enough and its variance small; otherwise, the detected pores are regarded as false ones caused by noise. Finally, we get the extracted pore image. Figures 9.2c–f illustrate the pore extraction process for the fingerprint in Fig. 9.2a. It is worth mentioning that other methods based on a similar assumption (i.e. pores are circle-like features) have also been proposed in the literature [20, 38]. Compared with those methods, the pore extraction method proposed here takes the varying pore scales into consideration and thus, according to our experiments, achieves better pore extraction accuracy.
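A hedged sketch of this block-wise DoG extraction is given below; k1, k2, the binarization threshold, and the eccentricity bound are assumed values, and the intensity constraint (iv) is omitted for brevity.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.measure import label, regionprops

def zero_mean_gaussian(sigma):
    """Zero-mean Gaussian kernel truncated at 3*sigma, per Eq. (9.2)."""
    r = int(np.ceil(3 * sigma))
    x, y = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    return g - g.mean()                        # subtract m_G

def extract_pores_block(block, ridge_mask, period, k1=0.5, k2=1.0):
    """DoG pore extraction on one block (k1, k2 are assumed constants)."""
    f1 = convolve(block.astype(float), zero_mean_gaussian(k1 * period))
    f2 = convolve(block.astype(float), zero_mean_gaussian(k2 * period))

    def binarize(f):                           # normalize to [0, 1], threshold
        f = (f - f.min()) / (np.ptp(f) + 1e-9)
        return f > 0.5                         # assumed threshold
    pb = binarize(f1) & ~binarize(f2)          # initial result P_B = B_1 - B_2

    pores = []
    # Constraints (i)-(iii): on-ridge, circle-like, valid size.
    for region in regionprops(label(pb & (ridge_mask > 0))):
        if 3 <= region.area <= 30 and region.eccentricity < 0.9:
            pores.append(region.centroid)      # (row, col) of each pore
    return pores
```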
9.2.3 Pore–Valley Descriptors
In order to use pores to align fingerprints, a descriptor is needed to describe the pore features so that the correspondences between pores can be accurately determined. A good descriptor should be invariant to the deformations of rotation and translation, which are very common when capturing fingerprints. Most previous studies on pore based fingerprint recognition [16, 17, 19, 20] describe a pore simply by its location because they compare the pores on two fingerprints with the alignment between the
Fig. 9.3 Illustration of a pore–valley descriptor with k_n = 4 and θ_s = 22.5°
two fingerprints known or estimated beforehand. However, if the alignment is not given, it is not sufficient to tell one individual pore from others by using only the location feature. Thus, it is necessary to employ some other information which can be useful in distinguishing pores. According to recent work on minutia-based fingerprint recognition methods, the ridge and valley structures and the ridge orientation field surrounding minutiae are also very important in minutia matching [5, 8]. Thus in this section we describe pores by using the neighboring valley structures and ridge orientation field. We call the resulting descriptor the pore–valley descriptor (PVD). The basic attribute of a pore is its location (X, Y), which is defined as the column and row coordinates of the center of its mass. In this chapter, for the purpose of alignment, we introduce the orientation feature θ for a pore, defined as the ridge orientation at (X, Y). Referring to Fig. 9.3, in order to sample the valley structures in the pore's neighborhood, we establish a local polar coordinate system by setting the pore's location as origin and the pore's orientation as the polar axis pointing to the right/bottom side. The polar angle is set as the counterclockwise angle from the polar axis. A circular neighborhood, denoted by N_p, is then chosen. It is centered at the origin with radius R_n = k_n p_max, where p_max is the maximum ridge period on the fingerprint and k_n is a parameter to control the neighborhood size. Radial lines are drawn starting from φ_1 = 0° with an angular step θ_s until φ_m = m·θ_s, where m = ⌊360°/θ_s⌋ is the total number of radial lines. For each line, we find where it intersects with valleys in the neighborhood. These intersections together with the pore give rise to a number of line segments. We number these segments from inside to outside and calculate their lengths. As shown in Fig. 9.3, a step of 22.5° is taken and hence 16 lines are employed. Taking the 0° and 180° lines as examples, the former has two segments and the latter
has five segments. The ridge orientation field in the pore's neighborhood is another important feature. We define the ridge orientation inconsistency (OIC) in N_p as follows to exploit this information:

OIC(N_p) = \frac{1}{|N_p|} \sum_{(i,j) \in N_p} \Big\{ [\cos(2 \cdot OF(i,j)) - m_{\cos}]^2 + [\sin(2 \cdot OF(i,j)) - m_{\sin}]^2 \Big\}    (9.3)

where OF is the ridge orientation field, |N_p| denotes the number of pixels in N_p, m_{\cos} = \sum_{(i,j) \in N_p} \cos(2 \cdot OF(i,j)) / |N_p|, and m_{\sin} = \sum_{(i,j) \in N_p} \sin(2 \cdot OF(i,j)) / |N_p|. With the above-mentioned features, we define the PVD as the following feature vector Θ:

\Theta = \big[ X, Y, \theta, OIC(N_p), \vec{S}_1, \vec{S}_2, \ldots, \vec{S}_m \big]    (9.4)

\vec{S}_k = [n_k, L_{k,1}, L_{k,2}, \ldots, L_{k,n_k}], \quad k = 1, 2, \ldots, m    (9.5)

where n_k is the number of line segments (1 ≤ n ≤ n_k) along the kth line. The OIC component and the sampled valley structure features in the proposed PVD are invariant to rotation and translation because they are calculated in a circular neighborhood of the pore, which is intrinsically rotation-invariant, and they are defined with respect to the local coordinate system of the pore. The OIC component is a coarse feature which captures the overall ridge flow pattern information in the neighborhood of a pore on a very coarse level. It will be used in an initial step to roughly match the pores. The sampled valley structure features are fine features; they will be used in a second step to accurately match pores. The pore locations and orientations will be used to double check pore correspondences. Finally, the transformation between fingerprints will be estimated based on the locations and orientations of their corresponding pores. In the next section, we will present the proposed PVD-based alignment algorithm.
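Since ridge orientations live modulo π, Eq. (9.3) works on doubled angles; a minimal NumPy sketch:

```python
import numpy as np

def orientation_inconsistency(of_patch):
    """Compute OIC over a neighborhood, per Eq. (9.3).

    of_patch : array of ridge orientations (radians) at the pixels of N_p.
    Returns 0 for a perfectly coherent orientation field; larger values
    indicate stronger orientation variation around the pore.
    """
    c = np.cos(2 * of_patch)
    s = np.sin(2 * of_patch)
    m_cos, m_sin = c.mean(), s.mean()
    return np.mean((c - m_cos) ** 2 + (s - m_sin) ** 2)
```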
9.3 PVD-Based Partial Fingerprint Alignment
This chapter aims to align partial fingerprints by using pores. To this end, we need to first identify pore correspondences on fingerprints. However, even a small fingerprint fragment can carry many pores (hundreds in the 6.24 × 4.68 mm² fragments used in our experiments), making it very time consuming to match pores in pairs directly using their surrounding valley structures (i.e. the segment lengths recorded in the PVD). Therefore, a coarse-to-fine matching strategy is necessary. The OIC components in the PVD can serve for the coarse matching. Given two pores, we first compare their OIC features. If the absolute difference between their OIC features is
Fig. 9.4 Illustration of the relevant measures used in pore correspondence double checking
larger than a given threshold T_oic, they will not be matched; otherwise, we proceed to the fine matching step. Coarse matching eliminates a large number of false matches. In the subsequent fine matching, we compare the valley structures in the two pores' neighborhoods. According to the definition of the PVD, each pore is associated with several groups of line segments which capture the information of its surrounding valleys. We compare these segments group by group. When comparing the segments in the kth group, where there are n_k^1 and n_k^2 segments in the two pores' descriptors, we first find the common segments in the group, i.e. the first \hat{n}_k = \min(n_k^1, n_k^2) segments. The dissimilarity between the two pores is then defined as

D = \sum_{k=1}^{m} \left( \frac{\sum_{n=1}^{\hat{n}_k} \big| L_{k,n}^1 - L_{k,n}^2 \big|}{\hat{n}_k} + \frac{(n_k^1 - n_k^2)^2}{n_k^1 n_k^2} \right)    (9.6)
The first term in the formula calculates the mean absolute difference between all common segments in each group, and the second term penalizes the missing segments. The smaller the dissimilarity, the more similar the two pores. After comparing all possible pairs of pores which pass coarse matching, each pair of pores is assigned a dissimilarity calculated by (9.6). The pairs are then sorted in ascending order of dissimilarity, producing the initial correspondences between the pores. The top K initial pore correspondences (i.e. those with the smallest dissimilarity) are further double checked to get the final pairs of corresponding pores for alignment transformation estimation. The purpose of double checking is to calculate the support for each pore correspondence based on the global geometrical relationship between the pores. At the beginning of double checking, the supports of all pore correspondences are initialized to zero. Figure 9.4 illustrates the relevant measures we use.
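Equation (9.6) translates directly into a short routine. This sketch assumes each PVD's segment groups are stored as plain lists of lengths, and skips a group's contribution when it has no common segments (a guard the text does not specify):

```python
import numpy as np

def pvd_dissimilarity(segs1, segs2):
    """Dissimilarity between two PVDs, per Eq. (9.6).

    segs1, segs2 : lists of m groups; group k holds the segment
                   lengths [L_k1, L_k2, ...] sampled along line k.
    """
    d = 0.0
    for g1, g2 in zip(segs1, segs2):
        n1, n2 = len(g1), len(g2)
        n_common = min(n1, n2)
        if n_common == 0:        # no common segments along this line
            continue
        diff = np.abs(np.array(g1[:n_common]) - np.array(g2[:n_common]))
        d += diff.sum() / n_common               # mean |L1 - L2| term
        d += (n1 - n2) ** 2 / (n1 * n2)          # missing-segment penalty
    return d
```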
Assume (P_1^1, P_1^2) and (P_2^1, P_2^2) are two pairs of corresponding pores among the top ones. To check them, we compare (1) the distances, denoted by d_1 and d_2, between the pores on the two fingerprints, and (2) the angles, denoted by (α_1^1, α_2^1) and (α_1^2, α_2^2), between their orientations and the lines connecting them. If both the distance difference and the angle differences are below the given thresholds T_d and T_α, i.e.

|d_1 - d_2| \le T_d, \quad |\alpha_1^1 - \alpha_1^2| \le T_\alpha, \quad |\alpha_2^1 - \alpha_2^2| \le T_\alpha    (9.7)
the supports for these two correspondences are increased by 1; otherwise, the support for the correspondence with the higher dissimilarity is decreased by 1, whereas the support for the other one stays the same. After checking all the top K correspondences two by two, those with a non-negative support are taken as the final pore correspondences. If none of the correspondences has non-negative support, the two fingerprints cannot be aligned. If some corresponding pores are found, we can then estimate the transformation between the two fingerprints. Here, we consider rotation and translation (since all the fingerprints are captured by the same type of scanner, we assume that the scaling factor is one) as follows:

\begin{bmatrix} \tilde{X}_2 \\ \tilde{Y}_2 \end{bmatrix} = \begin{bmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{bmatrix} \begin{bmatrix} X_2 \\ Y_2 \end{bmatrix} + \begin{bmatrix} \Delta X \\ \Delta Y \end{bmatrix} = R \begin{bmatrix} X_2 \\ Y_2 \end{bmatrix} + t    (9.8)

where (X_2, Y_2) are the coordinates of a pore on the second fingerprint and (\tilde{X}_2, \tilde{Y}_2) are its transformed coordinates in the first fingerprint's coordinate system, R is the rotation matrix and t is the translation vector. Our goal is to estimate the transformation parameters (β, ΔX, ΔY), where β is the rotation angle and ΔX and ΔY are the column and row translations, respectively. If there is only one pair of corresponding pores found on the two fingerprints, we directly estimate the transformation parameters from the locations and orientations of the two pores, (X_1, Y_1, θ_1) and (X_2, Y_2, θ_2), as follows:

\beta = \begin{cases} \beta_1 & \text{if } |\beta_1| \le |\beta_2| \\ \beta_2 & \text{otherwise} \end{cases}    (9.9)

\Delta X = X_1 - X_2 \cos\beta + Y_2 \sin\beta    (9.10)

\Delta Y = Y_1 - X_2 \sin\beta - Y_2 \cos\beta    (9.11)

where β_1 = θ_1 - θ_2 and β_2 = \mathrm{sgn}(β_1) \cdot (|β_1| - π). If there are more than one pair of corresponding pores, we employ a method similar to [39] to estimate the rotation and translation parameters based on the
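Under the stated definitions, the single-pair case of Eqs. (9.9)–(9.11) is a few lines of NumPy (angles in radians; function name illustrative):

```python
import numpy as np

def transform_from_one_pair(p1, p2):
    """Estimate (beta, dX, dY) from one pore pair, per Eqs. (9.9)-(9.11).

    p1, p2 : (X, Y, theta) of the corresponding pores on the two images,
             with theta the ridge orientation at the pore (radians).
    """
    x1, y1, t1 = p1
    x2, y2, t2 = p2
    b1 = t1 - t2
    b2 = np.sign(b1) * (abs(b1) - np.pi)      # the other candidate angle
    beta = b1 if abs(b1) <= abs(b2) else b2   # Eq. (9.9)
    dx = x1 - x2 * np.cos(beta) + y2 * np.sin(beta)   # Eq. (9.10)
    dy = y1 - x2 * np.sin(beta) - y2 * np.cos(beta)   # Eq. (9.11)
    return beta, dx, dy
```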
locations of the corresponding pores. Let \{(X_1^i, Y_1^i) \mid i = 1, 2, \ldots, C\} and \{(X_2^i, Y_2^i) \mid i = 1, 2, \ldots, C\} be the C pairs of corresponding pores. We determine R and t by minimizing

\frac{1}{C} \sum_{i=1}^{C} \left\| \begin{bmatrix} X_1^i \\ Y_1^i \end{bmatrix} - R \begin{bmatrix} X_2^i \\ Y_2^i \end{bmatrix} - t \right\|^2    (9.12)

where ‖·‖ is the L2-norm. Following the proof in [39], it is easy to show that

t = \begin{bmatrix} \bar{X}_1 \\ \bar{Y}_1 \end{bmatrix} - R \begin{bmatrix} \bar{X}_2 \\ \bar{Y}_2 \end{bmatrix}    (9.13)

where \bar{X}_j = \sum_{i=1}^{C} X_j^i / C and \bar{Y}_j = \sum_{i=1}^{C} Y_j^i / C, j = 1, 2. Let

B = \frac{1}{C} \begin{bmatrix} \sum_{i=1}^{C} (X_1^i - \bar{X}_1)(X_2^i - \bar{X}_2) & \sum_{i=1}^{C} (X_1^i - \bar{X}_1)(Y_2^i - \bar{Y}_2) \\ \sum_{i=1}^{C} (Y_1^i - \bar{Y}_1)(X_2^i - \bar{X}_2) & \sum_{i=1}^{C} (Y_1^i - \bar{Y}_1)(Y_2^i - \bar{Y}_2) \end{bmatrix}    (9.14)

and let its singular value decomposition be B = UDV; then R = VU^T and β = arcsin(R_{21}), where R_{21} is the entry at the second row and first column of R.
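A sketch of the multi-pair estimation under NumPy's SVD convention (B = U·diag(S)·Vh) follows. Depending on how the SVD factors are denoted, the rotation comes out as UVᵀ or VUᵀ; the product below is arranged so that R maps points of the second fingerprint onto the first, and the determinant correction against reflection solutions is an added safety check not mentioned in the text.

```python
import numpy as np

def estimate_transform(p1, p2):
    """Least-squares rigid transform mapping points p2 onto p1.

    p1, p2 : (C, 2) arrays of corresponding pore locations.
    Returns the rotation matrix R, translation t, and angle beta.
    """
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)
    q1, q2 = p1 - c1, p2 - c2

    # 2x2 cross-covariance matrix B of Eq. (9.14).
    B = (q1.T @ q2) / len(p1)

    # Recover R from the SVD of B; diag(1, det) guards against a
    # reflection solution (an added safety check).
    U, _, Vt = np.linalg.svd(B)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt

    t = c1 - R @ c2                       # Eq. (9.13)
    beta = np.arcsin(R[1, 0])             # rotation angle from R_21
    return R, t, beta
```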
9.4 Experiments
In general, pores can only be reliably extracted from fingerprints with a resolution of at least 1000 dpi [15]. We established a set of high resolution partial fingerprint images by using a custom-built fingerprint scanner of approximately 1200 dpi (refer to Fig. 9.5 for example images). With the established high resolution partial fingerprint image dataset, we evaluate the proposed fingerprint alignment method in comparison with a minutia-based method and an orientation field-based method. In Sect. 9.4.1 we first introduce the collected dataset of high resolution partial fingerprint images; in Sect. 9.4.2 we investigate the two parameters involved in the method; Sect. 9.4.3 compares the method with the minutia based method in corresponding feature point detection; Sect. 9.4.4 compares the method with the orientation field based method in alignment transformation estimation; in Sect. 9.4.5 we compare the three methods in terms of fingerprint recognition accuracy; finally, in Sect. 9.4.6 we analyze the computational complexity of the method.
Fig. 9.5 Example of fingerprint images used in the experiments. Their quality indexes are (a) 0.8777, (b) 0.7543, (c) 0.6086, and (d) 0.5531 according to the frequency domain quality index defined in [40]
9.4.1 The High Resolution Partial Fingerprint Image Dataset
We first collected 210 partial fingerprint images from 35 fingers as the training set for parameter selection and evaluation, and then collected 1480 fingerprint fragments from 148 fingers (including the fingers in the training set) as the test set for performance evaluation. The data were collected in two sessions (about 2 weeks apart). Most of the participants are students and staff in our institute, whose ages are between 20 and 50 years. In the training set, there are three images captured from each finger in each session, whereas in the test set, each finger has five images scanned in each of the two sessions. The resolution of these fingerprint images is approximately 1200 dpi and their spatial size is 320 pixels in width and 240 pixels in height. Therefore, they cover an area of about 6.5 mm by 4.9 mm on fingertips. When capturing the fingerprint images, we simply asked the participants to naturally put their fingers against the prism of the scanner without any exaggeration of fingerprint deformation. As a result, typical transformations between different impressions of the same finger in the dataset include translations of tens of pixels and rotations of around 8°. The maximal translations and rotations are, respectively, about 200 pixels and 20°. Hence, the minimal overlap between a finger's different impressions is about one
fourth of the fingerprint image area. In subsequent experiments, we will give representative examples of these cases.
9.4.2 The Neighborhood Size and Sampling Rate of Directions
The proposed alignment method uses the valley structures in the neighborhood of pores. The valley structures are sampled along a number of different directions, determined by the angular step θ_s. Here we refer to θ_s as the sampling rate of directions. Obviously, the neighborhood size and sampling rate are two critical parameters in the proposed alignment method. We set the neighborhood size as k_n times the maximum ridge period. Intuitively, a small k_n or large θ_s costs less computation but makes the resulting PVDs less discriminative, whereas a large k_n or small θ_s leads to more noise-sensitive and costly PVDs. We evaluated the effect of k_n and θ_s on the accuracy of corresponding feature point detection using 50 pairs of fingerprints randomly chosen from the training set. Each pair is from the same finger but taken at different sessions. These fingerprints show different quality; some example images are shown in Fig. 9.5. We used two measures to evaluate the accuracy: the percentage of correct top-one pore correspondences (M1) and the average percentage of correct correspondences among the top five pore correspondences (M2). Let N be the total number of pairs of fingerprint images, and N_{T1} the number of pairs of fingerprints on which the top-one pore correspondence is correct. We also counted the number of correct pore correspondences among the top five correspondences on each pair of fingerprints; denote by N_{T5}^i this number on the ith pair. Then the two measures M1 and M2 are defined as

M_1 = N_{T1} / N    (9.15)

M_2 = \frac{1}{N} \sum_{i=1}^{N} N_{T5}^i / 5    (9.16)
We investigated several combinations of values for k_n and θ_s, i.e. k_n ∈ {3, 3.5, 4, 4.5, 5} and θ_s ∈ {15°, 18°, 20°, 22.5°, 30°, 45°}. Table 9.1 lists the results on the 50 pairs of fingerprints. From the results, we can see that the best accuracy is obtained at a sampling rate of θ_s = 20° or 22.5°, with no significant difference between these two sampling rates. With respect to the neighborhood size, k_n = 4 appears to be a good choice. Furthermore, it was observed that neither too small nor too large a neighborhood produces the best accuracy. In the following experiments, considering both the accuracy and the computational cost, we set k_n = 4 and θ_s = 22.5°.
Table 9.1 Accuracies (M1/M2, %) of corresponding pore detection on 50 pairs of fingerprints under different settings of k_n and θ_s

k_n    θ_s = 15°   18°       20°       22.5°     30°       45°
3      32/45.2     44/51.6   50/57.5   50/58.2   46/52.5   40/49.1
3.5    52/60.1     62/75.2   74/80.2   78/82.5   70/69.6   58/62.5
4      66/74.6     80/80.5   96/95.1   98/94.7   94/88.5   80/78.2
4.5    76/80.2     84/86.0   88/90.5   86/89.1   80/78.1   72/70.6
5      54/49.0     62/56.7   66/60.5   60/61.7   54/59.2   52/52.1
Note that the settings of these two parameters should be dependent on the resolution of fingerprint images and the population from which the fingerprint images are captured. If a different fingerprint image dataset is used, the above training process has to be done again by using a subset of the fingerprint images in that dataset.
9.4.3 Corresponding Feature Point Detection
Detecting feature point correspondences is an important step in the proposed alignment method as well as in many state-of-the-art minutia-based methods. The optimal alignment transformation is estimated based on the detected corresponding feature points (i.e. pores or minutiae). Considering the significance of corresponding feature point detection, we carried out experiments to compare the proposed method with a representative minutia-based method [31] in terms of corresponding feature point detection accuracy. In the experiments, we used 200 pairs of partial fingerprints randomly chosen from the training set for evaluation. In each pair, the two fingerprints are from the same finger but were captured at different sessions. Figure 9.6 shows some example pairs of fingerprint fragments with the detected corresponding minutiae (left column) or pores (right column). When there are more than five pairs of corresponding minutiae or pores, we show only the first five pairs. In Figs. 9.6a and b, both methods can correctly find the top five feature point correspondences. However, when the fingerprint quality changes between sessions, for example because of perspiration, the minutiae based method will tend to detect false minutiae and hence false minutia correspondences. In Fig. 9.6c, broken valleys occur on the second fingerprint. As a result, the detected two minutia correspondences are incorrect. Instead, the proposed PVD-based method is more robust and can correctly detect the corresponding pores as shown in Fig. 9.6d. The fingerprint fragments in Fig. 9.6e have large deformation and small overlap. Consequently, few (fewer than 10) minutiae can be found in their overlapping region. In this case, the minutia-based method fails again because there lack sufficient minutiae. Actually, even when two partial fingerprints overlap much, there could still be very few minutiae available on them because of the small fingerprint
Fig. 9.6 Example of corresponding feature point detection results using minutia based (left column) and PVD based (right column) methods
Table 9.2 Accuracies of corresponding feature point detection by the two methods

Method                      M1 (%)   M2 (%)
Minutia based method [31]   40       35.1
PVD based method            98       95.5
areas. As can be seen in Fig. 9.6g, some false correspondences are detected on the two fragments due to insufficient minutiae. In contrast, as shown in Figs. 9.6f and h, the results by the proposed PVD-based method on these partial fingerprints are much better. We calculated the two measures, M1 and M2, for the two methods on all the 200 pairs of partial fingerprints. The results are listed in Table 9.2. It can be seen that the minutia-based method works poorly whereas the proposed PVD-based method can detect the corresponding feature points with a very high accuracy, achieving significant improvements over the minutia-based method. This demonstrates that the PVD-based alignment method can cope with various fingerprint fragments more accurately than the minutia-based method, largely thanks to the abundance and distinctiveness of pores on fingerprints. Since the alignment transformation estimation is based on the detected corresponding feature points, it is obvious that the PVD-based method will also estimate the alignment transformation more accurately than the minutia-based method. Next, we compare the PVD-based method with an
orientation field-based method in terms of alignment transformation estimation accuracy.
9.4.4 Alignment Transformation Estimation
After obtaining the pore correspondences on two fingerprints, we can then estimate the alignment transformation between them based on the corresponding pores. To quantitatively evaluate the performance of the proposed method in alignment transformation estimation, we need some ground truth fingerprint fragment pairs. To this end, we randomly chose 10 pairs of fingerprints from the test set (each pair was captured from the same finger but in two different sessions), and manually computed their transformations as the ground truth. Because we consider only translation and rotation here, we need at least two pairs of corresponding feature points on a pair of fingerprints to calculate the transformation between them. Therefore, we first manually marked two pairs of corresponding feature points on each of the 10 pairs of fingerprints. Based on the coordinates of the two pairs of corresponding feature points, we then directly computed the transformation between the pair of fingerprints by solving a set of equations. The obtained ground truth on the 10 pairs of fingerprints is given in Table 9.3. The first three pairs of fingerprints are shown in Fig. 9.7. From Table 9.3, we can see that the chosen fingerprint pairs display translations from less than 10 pixels to about 180 pixels and rotations from less than 5° to more than 10°. In our experiments, we observed that typical transformations in the dataset are translations by tens of pixels and rotations by around 8°. In this part, we compared the proposed method with the steepest descent orientation field (OF) based alignment method [34] in terms of alignment transformation estimation accuracy using the chosen fingerprint pairs.
Table 9.3 Alignment transformation ground truth and estimation results by the two methods

Pair   Ground truth           OF based method [34]   PVD based method
       ΔY    ΔX    β          ΔY    ΔX    β          ΔY    ΔX    β
01     56    23    11.01      11    3     2.00       61    33    13.27
02     90    33    15.95      43    51    1.00       93    26    16.89
03     100   181   2.87       9     72    31.02      91    176   2.93
04     11    8     3.16       1     1     0.00       11    6     1.98
05     90    8     4.19       100   0     1.00       89    6     3.88
06     69    74    5.44       23    21    0.00       72    78    9.33
07     78    137   3.45       45    24    0.00       76    142   6.40
08     87    2     0.96       74    1     1.00       93    7     2.23
09     73    39    4.40       69    47    3.00       79    50    0.60
10     19    4     11.25      4     2     1.00       12    1     8.61
Fig. 9.7 Three of the 10 chosen pairs of fingerprint fragments. (a, b), (c, d), and (e, f) are the first three pairs of fingerprints listed in Table 9.3
Fig. 9.8 Alignment results of PVD based method (a, c, e) and OF based method (b, d, f) on the fingerprint pairs shown in Figs. 9.7(a, b), (c, d), and (e, f)
Table 9.3 lists the estimation results by the OF based method (with the step sizes of translation and rotation set to one pixel and 1°, respectively) and the proposed method on the chosen fingerprint pairs. Figure 9.8 illustrates the aligned fingerprint images by overlaying the first image with the transformed second image of each pair shown in Fig. 9.7. Obviously, the PVD-based method estimates the transformation parameters much more accurately, and it does not suffer from the initialization and quantization problems which greatly affect the performance of the OF based method. Moreover, there is no guarantee that the OF based method will always converge to the global optimal solution. In fact, it can easily be trapped at local minima, for
Table 9.4 Average absolute errors (AAE) by the two methods

Method                 AAE (ΔY)   AAE (ΔX)   AAE (β)
OF based method [34]   33.9       48.5       8.5
PVD based method       4.2        5.4        2.5
Fig. 9.9 The simple partial fingerprint recognition system used in the experiments. FI and FT denote the input and template fingerprints, respectively
example the third pair of fingerprints which has small overlap (refer to the last column in Figs. 9.7 and 9.8). In Table 9.4, we list the average absolute errors of the two methods over the chosen 10 fingerprint pairs. These results clearly demonstrate that the PVD-based method can recover the transformation between partial fingerprints more accurately.
9.4.5 Partial Fingerprint Recognition
We have also evaluated the proposed alignment method in partial fingerprint recognition by setting up a simple partial fingerprint recognition system as shown in Fig. 9.9. In this system, the alignment transformation is first estimated between an input fingerprint image and a template fingerprint image by using one of the three methods: minutia-based method, orientation field based method, and the proposed PVD based method. As for the matcher, we employed two different approaches. The first one is a minutia and pore based matcher (called MINU-PORE matcher). It matches the minutiae and pores on the fingerprints, and then fuses the match scores of minutiae and pores to give a similarity score between the fingerprints. The second approach is an image-based matcher called GLBP matcher based on Gabor and local binary patterns (LBP), recently proposed by Nanni and Lumini [3]. Note that the purpose of the experiments here is to compare the contributions of the three different alignment methods to a fingerprint matcher. Therefore, we did not do any optimization on the matchers but considered only the relative improvement between the three alignment methods. The MINU-PORE matcher we implemented works as follows. The minutiae and pores on the input fingerprint image are transformed into the coordinate system of the template fingerprint according to the estimated transformation. Minutiae and pores on the two fingerprint images are then matched separately. Two minutiae are thought to be matched if the difference between their locations and the difference
Fig. 9.10 The ROC curves of the MINU-PORE matcher by using the three alignment methods
between their directions are both below given thresholds (15 pixels for location differences and 30° for direction differences in our experiments). As for two pores, they are matched if the difference between their locations is below a given threshold (15 pixels in our experiments). The minutia matching score is defined as the ratio of the number of matched minutiae to the total number of minutiae, and the pore matching score is defined similarly. The final matching score is obtained by fusing the minutia and pore matching scores using the summation rule. Using the MINU-PORE matcher on the test set, we conducted the following matches. (1) Genuine matches: each of the fingerprint images in the second session was matched with all the fingerprint images of the same finger in the first session, resulting in 3700 genuine match scores. (2) Imposter matches: the first fingerprint image of each finger in the second session was matched with the first fingerprint images of all the other fingers in the first session, resulting in 21,756 imposter match scores. Based on the obtained match scores, we calculated the equal error rate (EER) of each of the three alignment methods: 29.5% with the PVD based alignment method, 38.66% with the minutia based alignment method, and 41.03% with the OF based alignment method. The receiver operating characteristic (ROC) curves of the three methods are plotted in Fig. 9.10. It can be clearly seen that the proposed PVD based alignment method yields a marked improvement in EER, specifically 23.69% over the minutia based method and 28.1% over the OF based method.
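The MINU-PORE score is a thresholded nearest-neighbor count with sum-rule fusion. In the sketch below the thresholds come from the text, while the normalization by the larger feature count and the one-sided matching are our assumptions:

```python
import numpy as np
from scipy.spatial.distance import cdist

def match_rate(feat_in, feat_tmpl, loc_thresh, dir_thresh=None):
    """Fraction of matched features after aligning the input to the template.

    feat_* : (N, 2) locations, or (N, 3) locations + direction (radians).
    """
    if len(feat_in) == 0 or len(feat_tmpl) == 0:
        return 0.0
    close = cdist(feat_in[:, :2], feat_tmpl[:, :2]) < loc_thresh
    if dir_thresh is not None:                 # direction check for minutiae
        ddir = np.abs(feat_in[:, 2:3] - feat_tmpl[:, 2][None, :])
        close &= np.minimum(ddir, 2 * np.pi - ddir) < dir_thresh
    matched = close.any(axis=1).sum()
    return matched / max(len(feat_in), len(feat_tmpl))

def minu_pore_score(minu_in, minu_tmpl, pore_in, pore_tmpl):
    """MINU-PORE similarity: sum-rule fusion of the two match rates."""
    s_minu = match_rate(minu_in, minu_tmpl, loc_thresh=15,
                        dir_thresh=np.deg2rad(30))
    s_pore = match_rate(pore_in, pore_tmpl, loc_thresh=15)
    return s_minu + s_pore
```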
Fig. 9.11 The ROC curves of the GLBP matcher by using the three alignment methods
As for the GLBP matcher, we first transform the input fingerprint image into the coordinate system of the template fingerprint image according to the estimated alignment transformation, then extract the Gabor-LBP feature vectors from the transformed input fingerprint image and the template fingerprint image (we directly took the configuration parameters from [3]), and finally calculate the Euclidean distance between the Gabor-LBP feature vectors of the two images. By using the GLBP matcher, we carried out the same matching scheme as for the MINU-PORE matcher. As a result, the PVD-based alignment method leads to an EER of 34.85%, the minutia based alignment method 39.98%, and the OF based alignment method 45.11%. Figure 9.11 shows the corresponding ROC curves. Compared with the other two methods, the proposed PVD based alignment method achieves relative improvements in EER of 12.83% and 22.74%, respectively. In all the experiments, it is observed that matching errors are largely caused by inaccurate alignments. This validates that the proposed alignment method is more suitable for partial fingerprints and can significantly improve the accuracy of partial fingerprint recognition. The EERs obtained here are relatively high because the recognition of partial fingerprint images is itself very challenging due to the limited features available.
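As a rough sketch of an image-based matcher in the spirit of the GLBP approach (the actual configuration parameters come from [3] and are not reproduced here), the code below combines Gabor filtering with uniform LBP histograms; the filter frequencies, orientations, and LBP settings are placeholder assumptions.

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import local_binary_pattern

def glbp_feature(img, frequencies=(0.1, 0.2), n_angles=4, P=8, R=1):
    """Gabor responses -> LBP histograms -> concatenated feature vector.

    img: 2-D grayscale array, already aligned to the template's frame.
    The frequencies/angles here are illustrative, not the settings of [3].
    """
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
            real, _ = gabor(img, frequency=f, theta=theta)
            codes = local_binary_pattern(real, P, R, method="uniform")
            # "uniform" LBP yields P + 2 distinct code values.
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2),
                                   density=True)
            feats.append(hist)
    return np.concatenate(feats)

def glbp_distance(img_a, img_b):
    """Euclidean distance between the GLBP features of two images."""
    return np.linalg.norm(glbp_feature(img_a) - glbp_feature(img_b))
```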
9.4.6 Computational Complexity Analysis
The proposed PVD based alignment method has the following main steps for each pair of fingerprint images to be aligned: (A) ridge orientation and frequency estimation; (B) ridge and valley extraction; (C) pore extraction; (D) PVD generation; (E) PVD comparison; (F) pore correspondence refinement; and (G) transformation estimation. The first two steps (A) and (B) are common to most automatic fingerprint recognition systems. The last step (G) involves a singular value decomposition of a 2 × 2 matrix, which can be implemented very efficiently. We have implemented the method in Matlab and executed it on a PC with a 2.13 GHz Intel(R) Core(TM)2 6400 CPU and 2 GB of RAM. It takes about 0.02 ms to estimate the transformation from a set of corresponding feature points. Step (F), pore correspondence refinement, needs to calculate some Euclidean distances and angles, which can also be done in about 0.02 ms. Step (C), pore extraction, is a little more time-consuming. The pore extraction method we used in this chapter is a filtering based approach, which extracts pores by some linear filter operations. In our experiments, it takes about 2 s to extract the pores from a fingerprint image. The most time-consuming steps are PVD generation (step (D)) and comparison (step (E)). Although it does not take much time to generate the PVD for one pore (about 0.02 s) or to compare the PVDs of two pores (about 0.02 ms), processing the whole set of pores on fingerprints takes more time because of the large quantity of pores. With regard to the fingerprint images used in our experiments, there are on average around 500 pores on a fingerprint fragment. Therefore, it takes on average about 10 s to generate the PVDs for the pores on a fingerprint fragment and about 5 s to compare the PVDs of two fingerprint fragments. Considering that we did not optimize the code and that Matlab code itself has low efficiency, we expect that the computational cost can be much reduced after optimization and that the speed can be significantly improved by using languages like C/C++. Compared with the proposed method, the minutia based method is more efficient, usually taking less than 1 s for either extracting or matching the minutiae (with a C/C++ implementation). As for the OF based method, the time needed to align two fingerprints depends on a number of factors, such as the amount of transformation between the fingerprints, the initial estimation of the transformation, and the step sizes used in the search process. Therefore, it is difficult to draw a conclusion on its efficiency. In our experiments, the OF based method sometimes converges in less than 1 s, but sometimes only after more than 1 min. Generally speaking, the proposed method achieves much higher alignment accuracy than the other two approaches at an acceptable computational cost.
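Step (G) can be made concrete with the standard least-squares rigid alignment from corresponding points (a Kabsch/Procrustes-style solution built on the SVD of a 2 × 2 cross-covariance matrix). The sketch below is our own illustration in that spirit, not necessarily the exact estimator used in the chapter.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ src @ R.T + t.

    src, dst: (N, 2) arrays of corresponding points (e.g. matched pores).
    Uses the SVD of the 2x2 cross-covariance matrix, so it is very cheap.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Usage: align the input fragment's pores to the template's frame.
# R, t = estimate_rigid_transform(matched_src_pores, matched_dst_pores)
# aligned = matched_src_pores @ R.T + t
```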
9.5 Summary
An approach was presented in this chapter for aligning partial high resolution fingerprints using pores. After pore detection, a novel descriptor, namely the pore–valley descriptor (PVD), was defined to describe pores based on their local characteristics. Then a coarse-to-fine pore matching method was used to find the pore correspondences based on PVD. With the detected corresponding pores, we estimated the alignment transformation between the fingerprint fragments. To evaluate the performance of the proposed PVD based high resolution partial fingerprint alignment method, we established a set of partial fingerprint images and used them to compare the proposed method with state-of-the-art minutia-based and orientation field-based fingerprint alignment methods. The experimental results demonstrated that the PVD-based method can more accurately detect the corresponding feature points and hence better estimate the alignment transformation. It was also shown in our experiments that the accuracy of partial fingerprint recognition can be significantly improved by using the PVD based alignment method. One important issue in high resolution fingerprint recognition is the stability of pores. Although not all pores appear in fingerprint images of the same finger captured at different times, we experimentally found that enough corresponding pores can usually be detected on fingerprint images from the same finger. It is interesting and very important to further investigate the statistical characteristics of pores on fingerprint images. Although the PVD based alignment method proposed in this chapter is designed for high resolution partial fingerprint recognition, it is not limited to partial fingerprints. It can also be applied to full fingerprint images. One problem may be the expensive computational cost caused by the large number of pore features. One solution could be to perform coarse registration first by using OF based schemes and then apply the PVD based method for a fine estimation of the alignment transformation. The discriminative power of pores also deserves further investigation.
References 1. Ratha, N., Bolle, R.: Automatic fingerprint recognition systems. Springer, New York (2004) 2. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition. Springer, New York (2003) 3. Nanni, L., Lumini, A.: Local binary patterns for a hybrid fingerprint matcher. Pattern Recogn. 41(11), 3461–3466 (2008) 4. Jain, A.K., Hong, L., Bolle, R.: On-line fingerprint verification. IEEE Trans. Pattern Anal. Mach. Intell. 19, 302–314 (1997) 5. Tico, M., Kuosmanen, P.: Fingerprint matching using an orientation-based minutia descriptor. IEEE Trans. Pattern Anal. Mach. Intell. 25, 1009–1014 (2003) 6. Jiang, X., Yau, W.Y.: Fingerprint minutiae matching based on the local and global structures. Proc. Int. Conf. Pattern Recogn. 2, 1042–1045 (2000)
7. Kovacs-Vajna, Z.M.: A fingerprint verification system based on triangular matching and dynamic time warping. IEEE Trans. Pattern Anal. Mach. Intell. 22, 1266–1276 (2000) 8. Feng, J.: Combining minutiae descriptors for fingerprint matching. Pattern Recogn. 41, 342–352 (2008) 9. Bazen, A.M., Verwaaijen, G.T.B., Gerez, S.H., Veelenturf, L.P.J., van der Zwaag, B.J.: A correlation-based fingerprint verification system. In: Proceedings of the Workshop on Circuits Systems and Signal Processing, pp. 205–213 (2000) 10. Jain, A.K., Prabhakar, S., Hong, L., Pankanti, S.: Filterbank-based fingerprint matching. IEEE Trans. Image Process. 9, 846–859 (2000) 11. Ross, A., Jain, A., Reisman, J.: A hybrid fingerprint matcher. Pattern Recogn. 36, 1661–1673 (2003) 12. Teoh, A., Ngo, D., Song, O.T.: An efficient fingerprint verification system using integrated wavelet and Fourier–Mellin invariant transform. Image Vis. Comput. 22(6), 503–513 (2004) 13. Nanni, L., Lumini, A.: A hybrid wavelet-based fingerprint matcher. Pattern Recogn. 40(11), 3146–3151 (2007) 14. Bindra, B., Jasuja, O.P., Singla, A.K.: Poroscopy: a method of personal identification revisited. Internet J. Forensic Med. Toxicol. 1 (2000) 15. CDEFFS: Data format for the interchange of extended fingerprint and palmprint features. Working Draft Version 0.2, http://fingerprint.nist.gov/standard/cdeffs/index.html, January 2008 16. Kryszczuk, K., Drygajlo, A., Morier, P.: Extraction of level 2 and level 3 features for fragmentary fingerprints. In: Proceedings of the Second COST Action 275 Workshop, Vigo, Spain, pp. 83–88 (2004) 17. Kryszczuk, K., Morier, P., Drygajlo, A.: Study of the distinctiveness of level 2 and level 3 features in fragmentary fingerprint comparison. In: BioAW 2004, Lecture Notes in Computer Science, vol. 3087, pp. 124–133. Springer, Berlin (2004) 18. Roddy, A., Stosz, J.: Fingerprint features—statistical analysis and system performance estimates. Proc. IEEE. 85, 1390–1421 (1997) 19. Stosz, J.D., Alyea, L.A.: Automated system for fingerprint authentication using pores and ridge structure. In: Proceedings of SPIE Conference on Automatic Systems for the Identification and Inspection of Humans, San Diego, vol. 2277, pp. 210–223 (1994) 20. Jain, A.K., Chen, Y., Demirkus, M.: Pores and ridges: fingerprint matching using level 3 features. IEEE Trans. Pattern Anal. Mach. Intell. 29, 15–27 (2007) 21. Parthasaradhi, S.T.V., Derakhshani, R., Hornak, L.A., Schuckers, S.A.C.: Time-series detection of perspiration as a liveness test in fingerprint devices. IEEE Trans. Syst. Man Cybern. Part C. 35, 335–343 (2005) 22. Jea, T.Y., Govindaraju, V.: A minutia-based partial fingerprint recognition system. Pattern Recogn. 38, 1672–1684 (2005) 23. Choi, K., Choi, H., Lee, S., Kim, J.: Fingerprint image mosaicking by recursive ridge mapping. IEEE Trans. Syst. Man Cybern. Part B. 37, 1191–1203 (2007) 24. Chen, Y., Jain, A.K.: Dots and incipients: extended features for partial fingerprint matching. In: Presented at Biometric Symposium, BCC, Baltimore (2007) 25. Cappelli, R., Maio, D., Maltoni, D.: Modeling plastic distortion in fingerprint images. In: Proceedings of ICAPR, Rio de Janeiro (2001) 26. Ross, A., Dass, S., Jain, A.: A deformable model for fingerprint matching. Pattern Recogn. 38, 95–103 (2005) 27. Huvanandana, S., Kim, C., Hwang, J.N.: Reliable and fast fingerprint identification for security applications. Proc. Int. Conf. Image Process. 2, 503–506 (2000) 28. Ratha, N.K., Karu, K., Chen, S., Jain, A.K.: A real-time matching system for large fingerprint databases. IEEE Trans. Pattern Anal. Mach. Intell. 18, 799–813 (1996) 29. Chang, S.H., Hsu, W.H., Wu, G.Z.: Fast algorithm for point pattern matching: invariant to translations, rotations and scale changes. Pattern Recogn. 30, 311–320 (1997) 30. Chen, X., Tian, J., Yang, X., Zhang, Y.: An algorithm for distorted fingerprint matching based on local triangle feature set. IEEE Trans. Inf. Forensics Secur. 1, 169–177 (2006)
31. Chen, X., Tian, J., Yang, X.: A new algorithm for distorted fingerprints matching based on normalized fuzzy similarity measure. IEEE Trans. Image Process. 15, 767–776 (2006) 32. Zhang, W., Wang, Y.: Core-based structure matching algorithm of fingerprint verification. Proc. Int. Conf. Pattern Recogn. 1, 70–74 (2002) 33. Yager, N., Amin, A.: Coarse fingerprint registration using orientation fields. EURASIP J. Appl. Signal Process. 13, 2043–2053 (2005) 34. Yager, N., Amin, A.: Fingerprint alignment using a two stage optimization. Pattern Recogn. 27, 317–324 (2006) 35. Liu, L., Jiang, T., Yang, J., Zhu, C.: Fingerprint registration by maximization of mutual information. IEEE Trans. Image Process. 15, 1100–1110 (2006) 36. Ross, A., Reisman, J., Jain, A.K.: Fingerprint matching using feature space correlation. In: Proceedings of Post-European Conference on Computer Vision Workshop on Biometric Authentication, Lecture Notes in Computer Science, vol. 2359, pp. 48–57. Springer, Berlin (2002) 37. Hong, L., Wan, Y., Jain, A.K.: Fingerprint image enhancement: algorithms and performance evaluation. IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 777–789 (1998) 38. Ray, M., Meenen, P., Adhami, R.: A novel approach to fingerprint pore extraction. In: Proceedings of the 37th South-eastern Symposium on System Theory, pp. 282–286 (2005) 39. Haralick, R.M., Joo, H., Lee, C., Zhuang, X., Vaidya, V.G., Kim, M.B.: Pose estimation from corresponding point data. IEEE Trans. Syst. Man Cybern. 19, 1426–1446 (1989) 40. Chen, Y., Dass, S., Jain, A.: Fingerprint quality indices for predicting authentication performance. Proc. AVBPA. 160–170 (2005)
Chapter 10
Fingerprint Pore Matching
Abstract Fingerprint matching is an important and essential step in automated fingerprint recognition systems (AFRSs). The noise and distortion of captured fingerprints and the inaccuracy of extracted features make fingerprint matching a very difficult problem. With the advent of high resolution fingerprint imaging techniques and the increasing demand for high security, sweat pores have recently been attracting increasing attention in automatic fingerprint recognition. Therefore, this chapter takes fingerprint pore matching as an example to show the robustness of our proposed matching method to the errors caused by the fingerprint representation. This method directly matches pores in fingerprints by adopting a coarse-to-fine strategy. In the coarse matching step, a tangent distance and sparse representation based matching method (denoted as TD-Sparse) is proposed to compare pores in the template and test fingerprint images and establish one-to-many pore correspondences between them. The proposed TD-Sparse method is robust to noise and distortions in fingerprint images. In the fine matching step, false pore correspondences are further excluded by a weighted Random Sample Consensus (WRANSAC) algorithm in which the weights of pore correspondences are determined based on the dissimilarity between the pores in the correspondences. The experimental results on two databases of high resolution fingerprints demonstrate that the proposed method can achieve much higher recognition accuracy compared with other state-of-the-art pore matching methods. Keywords Fingerprint matching · Tangent distance · Sparse representation · TD-Sparse · Weighted random sample consensus (WRANSAC)
10.1 Introduction
Fingerprint matching in automated fingerprint recognition systems (AFRSs) aims to offer a degree of similarity (a value between 0 and 1) or a binary decision (matched or non-matched) between two given fingerprint images (template and test fingerprints). Generally, such fingerprints are not compared directly but through representations of them, such as minutiae, sweat pores, ridge contours, and so on [1], as shown in
Fig. 10.1 Features on a high resolution fingerprint image: open and closed pores, minutiae (ridge endings and bifurcations), ridge contours, and ridge edge features (indentations and protrusions)
Fig. 10.1. Because of noise and distortion introduced during fingerprint capture and the inexact nature of feature extraction, there are errors in the fingerprint representation (e.g. missing, spurious, or noisy features). Therefore, the matching algorithm should be robust to these errors. With the advent of high resolution fingerprint imaging techniques, new distinctive features, such as sweat pores, ridge contours, and ridge edge features, are attracting increasing attention from researchers and practitioners who are working on AFRSs. They have also been proven to be very useful for improving the accuracy of existing minutiae-based AFRSs [2–7]. Sweat pores, among the various new features, have attracted the most attention [2–11]. Some effective pore extraction methods have been proposed in [8–12]. However, there are few algorithms for pore matching [4–7]. The errors mentioned above make fingerprint pore matching very challenging. Thus, this chapter takes fingerprint pore matching as an example to introduce our proposed robust fingerprint matching method. Existing pore matching methods can be roughly divided into two categories. Methods in the first category align the fingerprint images before matching the pores in them [4–6]. Various methods have been proposed for the alignment. Kryszczuk et al. [4] first aligned the test fragmentary fingerprint with the full template fingerprint using an image-correlation based method, and then matched the pores in the aligned fingerprint images based on their geometric distances. This method has the following two drawbacks: (1) it is time consuming to obtain the best alignment in a quantized transformation parameter space by trying all possible rotations and translations, and (2) the recognition accuracy heavily relies on the alignment accuracy and is sensitive to the instability of extracted pores and nonlinear distortions in fingerprint images. Jain et al. [5, 6] proposed a minutiae-based method. The fingerprint images are first aligned by using minutiae. Then, pores lying in a rectangular neighborhood of each aligned minutia pair are matched using a modified iterative closest point (ICP) algorithm. This method is more efficient than that in [4]. However, it requires a sufficient number of minutiae for effective alignment and considers only the pores in a small neighborhood of the aligned minutiae. Methods in the second category directly match pores in fingerprints without explicit alignment of the fingerprint images. In [7], Zhao et al. proposed a
hierarchical coarse-to-fine pore matching scheme. In the coarse step, one-to-one pore correspondences are roughly determined based on the correlation between the local patches around the pores. In the fine step, the obtained pore correspondences are further refined using a global transformation model. This method has the advantage of robustness to the instability of extracted pores by considering all the available pores in fingerprint images. However, it still has some limitations. (1) The correlation between local patches may not be discriminative enough to ensure that the similarity between a pore and its true corresponding pore is always higher than that between it and the other pores. For example, when the local patches mainly consist of parallel ridges or when they are very noisy or heavily distorted, true pore correspondences may not have their similarity ranked at the top. As a consequence, considering only the top-1 pore correspondence is very likely to miss many true correspondences. (2) Not all the pore correspondences established in the coarse step have the same reliability. Instead, the similarity between the pores in different correspondences can be quite different, and those correspondences with higher similarity are generally believed to be more reliable. Therefore, the similarity of the correspondences provides a natural indicator of their reliability. Yet, previous pore matching methods [7, 8] did not explore this information. In this chapter, we propose a novel hierarchical matching method, namely TDSWR, which is less sensitive to the instability of pores and avoids the above-mentioned limitations of existing pore matching methods. Compared with existing pore matching methods, the proposed method has the following characteristics: (1) a tangent distance and sparse representation based matching method (TD-Sparse), which is robust to noise and distortion, is proposed to determine the pore correspondences in the coarse step; (2) one-to-many pore correspondences are established in the coarse step, and thereby most of the true pore correspondences are retained in the results of coarse matching; and (3) a weighted random sample consensus (WRANSAC) algorithm [13], which exploits the reliability information of pore correspondences, is employed in the fine matching step to exclude false pore correspondences. Figure 10.2 gives the framework of the proposed method. The rest of this chapter is structured as follows. Section 10.2 introduces the establishment of one-to-many coarse pore correspondences by the TD-Sparse based matching method. Section 10.3 presents the WRANSAC algorithm that we have adopted in the fine matching step, and describes in detail the calculation of the weights used in WRANSAC. Section 10.4 then reports the experiments and analyzes the results. Finally, Sect. 10.5 concludes the chapter.
10.2 Coarse Pore Matching
A key issue in establishing coarse pore correspondences is to calculate the similarities or differences between individual pores. Unlike existing methods [7, 8], this chapter proposes a TD-Sparse based method, which is more robust to noise and
Fig. 10.2 Framework of the proposed TDSWR method
distortion [18–22], to calculate the differences between pores and establish one-to-many pore correspondences in the coarse pore matching step. A local descriptor is first constructed for each pore. Here, we use the same local descriptor as in [7] so that we can fairly compare the proposed TD-Sparse based approach and the correlation-based approach in [7]. The local descriptor of a pore essentially captures the intensity variation in a circular neighborhood around the pore. To construct the local descriptors, the original fingerprint image is first smoothed by a Gaussian filter. Then, a circular neighborhood around each pore is cropped and rotated to make the local ridge orientation at the pore horizontal. Finally, the intensity values of
the pixels in the neighborhood are concatenated and normalized to form the local descriptor of the pore.
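A minimal sketch of this descriptor construction is given below; the smoothing sigma, neighborhood radius, and the use of SciPy's image rotation are our own illustrative choices, not the exact settings of [7].

```python
import numpy as np
from scipy.ndimage import gaussian_filter, rotate

def pore_descriptor(img, pore_xy, ridge_orient_deg, radius=15, sigma=1.0):
    """Build a local descriptor for one pore.

    img: 2-D grayscale fingerprint image; pore_xy: (col, row) of the pore;
    ridge_orient_deg: local ridge orientation at the pore, in degrees.
    The patch is rotated so that the ridge orientation becomes horizontal,
    then the circular neighborhood is flattened and normalized. Assumes the
    pore lies far enough from the image border for the crop to fit.
    """
    smoothed = gaussian_filter(img.astype(float), sigma)
    c, r = int(round(pore_xy[0])), int(round(pore_xy[1]))
    # Crop a square patch large enough to survive the rotation.
    half = int(np.ceil(radius * np.sqrt(2))) + 1
    patch = smoothed[r - half:r + half + 1, c - half:c + half + 1]
    patch = rotate(patch, -ridge_orient_deg, reshape=False, order=1)
    # Keep only the circular neighborhood around the center.
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    vec = patch[yy ** 2 + xx ** 2 <= radius ** 2]
    vec = vec - vec.mean()
    return vec / (np.linalg.norm(vec) + 1e-8)
```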
10.2.1 Difference Calculation by TD-Sparse Method

To calculate the differences between pores, this chapter uses the sparse representation technique rather than the correlation-based technique. Sparse representation was originally developed in signal/image modeling to solve inverse problems [14, 15] and began to be practically used with the development of the theory and algorithms of these techniques [16, 17]. Wright et al. [18] have recently proposed the sparse representation classifier (SRC) for robust face recognition and obtained promising results. The basic idea of SRC is to represent an input sample by a linear combination of a set of training samples, in which the combination coefficients are restricted to be sparse. It conducts classification based either on the assumption that the coefficients corresponding to the samples of the same class have larger absolute values or on the assumption that the residual of representing the input sample with the samples from the same class is smaller. The procedure of the SRC algorithm is given in Algorithm 10.1. From the similarity measurement viewpoint, the coefficient associated with a training sample indicates the similarity between this training sample and the input sample, whereas the residual by each class implies the difference between the input sample and the samples in that class. According to the results in [18], the residuals are more robust to noise than the coefficients. Therefore, in this chapter, the differences between pores are measured by the residuals in sparse representation. The Euclidean distance (ED) is used in [18] to calculate the residuals of sparse representation (see Algorithm 10.1). As a result, the SRC in [18] is sensitive to local distortion, which is however very common in fingerprint images [23]. Therefore, for fingerprint pore matching, we propose to incorporate the tangent distance (TD) into the SRC to make it more robust to distortion.

Algorithm 10.1: The SRC Algorithm [18]
1. Input: A set of training samples $A = [A_1, A_2, \ldots, A_k] \in R^{m \times n}$ of k classes and their class labels, a test sample $y \in R^m$, as well as an error tolerance ε > 0, or a free parameter λ > 0 to balance the least squares error of representation and the sparsity of the coefficients.
2. Normalize the columns of A to obtain unit $l_2$-norm.
3. Solve the $l_1$-regularized least squares problem (LSP):
170
10
Fingerprint Pore Matching
$$\hat{x} = \arg\min_x \left\{ \| Ax - y \|_2^2 + \lambda \| x \|_1 \right\} \qquad (10.1)$$

4. Calculate $\delta_i(\hat{x}) \in \Re^n$, a vector whose only nonzero entries are the entries in $\hat{x}$ that are associated with class i.
5. Compute the residuals: $r_i(y) = \| y - A \delta_i(\hat{x}) \|_2$.
6. Output: The category of the test sample: $\mathrm{identity}(y) = \arg\min_i r_i(y)$
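A minimal SRC sketch is shown below. It solves the l1-regularized problem (10.1) with scikit-learn's Lasso solver as a stand-in (whose objective scales the data term by 1/(2m), so its alpha only corresponds to λ up to a constant); this is an assumption of convenience, not the solver used in [18].

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, y, lam=0.01):
    """Sparse representation classifier (Algorithm 10.1, sketch).

    A: (m, n) dictionary whose columns are training samples; labels: (n,)
    class label of each column; y: (m,) test sample.
    Returns the predicted class and the per-class residuals r_i(y).
    """
    A = A / (np.linalg.norm(A, axis=0, keepdims=True) + 1e-12)  # unit l2 cols
    # Step 3: l1-regularized least squares (Lasso stand-in).
    x = Lasso(alpha=lam, max_iter=10000).fit(A, y).coef_
    residuals = {}
    for c in np.unique(labels):
        delta = np.where(labels == c, x, 0.0)   # delta_i(x): keep class-c coefs
        residuals[c] = np.linalg.norm(y - A @ delta)  # r_i(y)
    pred = min(residuals, key=residuals.get)    # identity(y) = argmin_i r_i(y)
    return pred, residuals
```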
TD is a distance measure first proposed by Simard et al. [19] for optical character recognition (OCR). It is very effective in handling distortion problems in distance-based classification algorithms. As illustrated in Fig. 10.3, if ED is used for classification (the Pearson correlation between Fig. 10.3a and b (or c) is 0.92 (or 0.97) when ED is used), the fingerprint pattern in Fig. 10.3a will be misclassified into prototype B in Fig. 10.3c, rather than the true prototype A with slight distortion in Fig. 10.3b. On the contrary, TD can easily solve this problem (the Pearson correlation between Fig. 10.3a and b (or c) is 0.99 (or 0.92) when TD is used) thanks to its ability to make the input pattern locally invariant to any deformation [19]. In [19], it has also been demonstrated that TD, compared with ED, is closer to the real distance between two patterns in 3-dimensional space. However, it is difficult to calculate the TD between two patterns exactly. As there is no analytic expression for the manifolds of the patterns, an approximation method has to be adopted. In the following, we provide a procedure [20, 21] to calculate the TD between two images, x and y. For an image $x \in \Re^{I \times J}$ (I and J represent the
Fig. 10.3 Examples of fingerprint segments to illustrate the effectiveness of TD compared with ED. (a) A fingerprint pattern that needs to be classified. (b) The prototype A, which is formed by rotating (a) by 10 degrees and then translating it to the left side by 5 pixels. (c) The prototype B, which represents a fingerprint pattern different from (a)
numbers of rows and columns, respectively), its corresponding manifold is obtained by applying transforms, t(x, β), to it:

$$M_x = \left\{ t(x, \beta) : \beta \in \Re^C \right\} \subset \Re^{I \cdot J} \qquad (10.2)$$

where $\beta \in \Re^C$ are the parameters of the transformation, and C is the number of transformation parameters. The approximated manifold is then calculated by Taylor expansion at β = 0:

$$\hat{M}_x = x + \sum_{k=1}^{C} \frac{\partial x}{\partial \beta_k}\bigg|_{\beta_k = 0} \beta_k + O\!\left(\beta_k^2\right) \qquad (10.3)$$

where the vector $v_k = \frac{\partial x}{\partial \beta_k}\big|_{\beta_k = 0}$ is called the tangent vector. The TD between images
x and y is calculated as follows:

$$TD(x, y) = \min_{\alpha} \left\{ \left\| x + \sum_{k=1}^{C} \alpha_k v_k - y \right\|_2^2 \right\} \qquad (10.4)$$

Algorithm 10.2
1. Input: ... a free parameter λ > 0 to balance the least squares error of representation and the sparsity of the coefficients.
2. Solve the modified $l_1$-regularized least squares problem (MLSP):
$$\hat{x}_j = \arg\min_{x_j} \left\{ \left\| A' x_j - p_j^S \right\|_2^2 + \lambda \sum_{i=1}^{m} \left| x_{ji} \right| \right\}$$

3. Calculate the difference between the jth (j = 1, 2, ..., n) pore in S and the ith (i = 1, 2, ..., m) pore in T:
$$d_{ji} = \left\| p_i^T - \left( x_{ji} p_j^S + \sum_{k=1}^{L} \alpha_k v_k \right) \right\|_2$$

4. Establish coarse pore correspondences:

$$\left\{ \left( P_l^S, P_q^T \right) \mid d_{lq} < \bar{d} \right\}, \quad \bar{d} = \frac{1}{n} \sum_{j=1}^{n} d_j^{\min}, \quad d_j^{\min} = \min_i \left\{ d_{ji} \mid i = 1, 2, \ldots, m \right\}, \; j = 1, 2, \ldots, n.$$

5. Refine the coarse pore correspondences by using the WRANSAC algorithm:
(a) Weight calculation for each coarse pore correspondence:

$$w = 1 - \frac{d_{lq}}{d_{\max}}, \quad d_{lq} < \bar{d}, \qquad d_{\max} = \max \left\{ d_{ji} \mid j = 1, 2, \ldots, n; \; i = 1, 2, \ldots, m \right\}$$

(b) Selection of MSSs according to weight.
(c) Model parameter calculation and affine transformation of the coarse pore pairs on the template fingerprint.
(d) CS establishment.
(e) Output the final refined pore correspondences once the termination conditions are reached; otherwise, go to step (b).
6. Output: Final refined pore correspondences:

$$\left\{ \left( P_x^S, P_y^T \right) \mid x \in l, \; y \in q \right\}$$
10.4 Experimental Results and Analysis
10.4.1 Databases

Two databases of high resolution fingerprint images (~1200 dpi) were used in the experiments. The first database, denoted as DBI, is the same database as the one used in [7], which contains 1480 fingerprint images from 148 fingers (five images collected for each finger in each of two sessions separated by a time period of about 2 weeks). The images in DBI have a spatial size of 320 pixels by 240 pixels, which covers a small fingerprint area (about 6.5 mm by 4.9 mm on fingertips). The fingerprint images in the second database (denoted as DBII) were collected in the same way, but with a larger image size, i.e. 640 pixels by 480 pixels. Pores in these fingerprint images were extracted by using an improved version of the algorithm in [24]. To compare the fingerprint recognition accuracy of the proposed TDSWR method with state-of-the-art methods, including the minutiae and ICP based method [5, 6] (denoted by MICPP), the direct pore matching method [7] (denoted by DP), and the classical SRC based pore matching method [25] (denoted by SRDP), we conducted the following matches for each method on both DBI and DBII. (1) Genuine matches: each of the fingerprint images in the second session was matched with all the fingerprint images of the same finger in the first session, resulting in 3700 genuine match scores. (2) Imposter matches: the first fingerprint image of each finger in the second session was matched with the first fingerprint images of all the other fingers in the first session, resulting in 21,756 imposter match scores. Note that the pore match scores in our experiments were defined as the number of pairs of finally matched pores in the fingerprints, which is different from the definition used in [7]. Based on the obtained match scores, the equal error rate (EER) was calculated for each method.
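The EER computation used throughout these experiments can be sketched as follows (a generic threshold sweep; the function name and the simple crossing rule are our own simplifications).

```python
import numpy as np

def equal_error_rate(genuine, imposter):
    """EER from genuine/imposter match scores (higher score = more similar).

    Sweeps the decision threshold over all observed scores and returns the
    point where the false accept rate (FAR) and false reject rate (FRR) are
    closest, averaging the two rates there.
    """
    genuine = np.asarray(genuine, float)
    imposter = np.asarray(imposter, float)
    thr = np.unique(np.concatenate([genuine, imposter]))
    far = np.array([(imposter >= t).mean() for t in thr])
    frr = np.array([(genuine < t).mean() for t in thr])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

# Protocol: 5 images/finger/session and 148 fingers give 3700 genuine
# (148 * 5 * 5) and 21,756 imposter (148 * 147) scores, as in the text.
```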
10.4.2 Robustness to the Instability of Extracted Pores

In fingerprint pore matching, the instability of pores caused by fingerprint quality (dry or wet) is a crucial issue because it seriously affects the matching results. Figure 10.7a shows the extracted pores (marked by red dots) in two fingerprint images captured from the same finger at different times. We can see that some pores do not show up, which makes fingerprint pore matching a challenging problem. MICPP only matches the pores that are included in a neighborhood (circled in Fig. 10.7a) of each aligned and matched minutia pair (connected by lines in Fig. 10.7a). It is thus sensitive to the instability of pores because the number of reproduced pores in a small region is obviously smaller than that in a large region. On the contrary, TDSWR directly matches pores in a hierarchical way, and all of the available pores in the fingerprint images are considered. By applying the MICPP and TDSWR methods to the fingerprint images in Fig. 10.7a, 15 and 83 pores are matched, respectively, as shown in Fig. 10.7b, c. It can be seen that TDSWR is more robust to the instability of pores than MICPP.
Fig. 10.7 Example pore matching results of MICPP and TDSWR. (a) Two example fingerprint images with extracted pores and corresponding minutiae. (b) Final pore correspondences obtained by MICPP. (c) Final pore correspondences obtained by TDSWR
Here, it should be noted that the circled neighborhood used for MICPP in this chapter has a radius of 45 pixels. We selected this radius by testing different neighborhood radii, namely 15, 30, 45, 60, 75, and 90. We found that when using a small
neighborhood (radius 15 or 30), the number of matched pores is small because a small neighborhood contains only a few pores, whereas when increasing the neighborhood radius (45, 60, 75, and 90), more and more false pore correspondences are obtained by MICPP because the local alignment estimated from the mated minutiae cannot be applied to large regions. The intermediate neighborhood radius of 45 was finally chosen based on the number of detected matched pore pairs.
10.4.3 Effectiveness in Pore Correspondence Establishment

Figure 10.8 gives the pore correspondences found by different methods in an example genuine pair of fingerprint images in DBI. Figure 10.8a–c show the first 20 coarse pore correspondences (red dashed lines denote the false ones) obtained by the correlation based method, the classical SR based method, and the TD-Sparse method, respectively. It can be seen that there are 15, 7, and 3 false pore correspondences in the results of the three methods, respectively. Table 10.1 reports the average number of true pore correspondences among the first 20 coarse pore correspondences (denoted as $N_{Top20}$) in 100 pairs of genuine fingerprint images randomly chosen from DBI. These results demonstrate that the proposed TD-Sparse based method can more accurately determine the coarse pore correspondences than both the correlation based and SR based methods, because it can better distinguish different pores and is more robust to the noise and non-linear distortion that are very common in fingerprint images. Figure 10.8d, e show the final pore correspondences obtained by applying the classical RANSAC and the WRANSAC to the coarse pore correspondences established by the TD-Sparse method. WRANSAC found 41 pore correspondences, whereas RANSAC found only 27. Obviously, WRANSAC is more effective in refining pore correspondences. Moreover, according to our experimental results on DBI, WRANSAC converges on average in 174 iterations, whereas RANSAC converges in 312 iterations. Hence, WRANSAC is also more efficient than RANSAC.
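The WRANSAC refinement can be sketched as below; the number of iterations, the inlier threshold, and the 3-pair affine minimal sample set are our own illustrative parameters, grounded only in the algorithm outline above.

```python
import numpy as np

def wransac_affine(src, dst, weights, n_iter=500, inlier_th=5.0, rng=None):
    """Weighted RANSAC refinement of coarse pore correspondences (sketch).

    src, dst: (N, 2) corresponding pore locations on the test and template
    prints; weights: (N,) reliability of each correspondence (higher is
    more reliable). Minimal sample sets (MSSs) of 3 pairs are drawn with
    probability proportional to weight, an affine model is fitted, and the
    largest consensus set (CS) of inliers is returned.
    """
    rng = rng or np.random.default_rng()
    p = weights / weights.sum()
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False, p=p)  # weighted MSS
        # Fit affine model: dst ~ [src, 1] @ M, where M is 3x2.
        S = np.column_stack([src[idx], np.ones(3)])
        try:
            M = np.linalg.solve(S, dst[idx])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample, draw again
        pred = np.column_stack([src, np.ones(len(src))]) @ M
        inliers = np.linalg.norm(pred - dst, axis=1) < inlier_th
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```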
10.4.4 Fingerprint Recognition Performance

In order to illustrate the importance of one-to-many coarse pore correspondences for accurate fingerprint recognition, we compare the EERs on DBI obtained by using the one-to-many TD-Sparse (denoted as 1toM_TD-Sparse), one-to-one TD-Sparse (denoted as 1to1_TD-Sparse), and one-to-one correlation (denoted as 1to1_Correlation) based methods to establish coarse pore correspondences, with RANSAC used to refine the pore correspondences. Here, we chose the RANSAC algorithm because no weights are available for the 1to1_Correlation method in [7]. The results are presented in Table 10.2. As can be seen, the lowest EER is obtained by 1toM_TD-Sparse, which shows the effectiveness of one-to-many coarse pore correspondences in improving fingerprint recognition accuracy.
Fig. 10.8 Example pore correspondence establishment results. The first 20 coarse pore correspondences obtained by (a) correlation, (b) SR, and (c) TD-Sparse based methods. Final pore correspondences obtained by applying (d) RANSAC and (e) WRANSAC to the coarse pore correspondences established by the TD-Sparse method
Table 10.1 Average number of true pore correspondences among the first 20 coarse pore correspondences ($N_{Top20}$) in 100 genuine fingerprint pairs randomly selected from DBI

Method                      $N_{Top20}$
Correlation based method    8
SR based method             11
TD-Sparse based method      14

Table 10.2 EER (%) of pore matching on DBI with different coarse pore correspondence establishment methods

Method              EER (%)
1to1_Correlation    15.42
1to1_TD-Sparse      5.82
1toM_TD-Sparse      4.45
Fig. 10.9 ROCs of different pore matching methods on (a) DBI and (b) DBII
Table 10.3 EER (%) of different pore matching methods

Method    DBI      DBII
MICPP     30.45    7.83
DP        15.42    7.05
SRDP      6.59     0.97
TDSWR     3.25     0.53
We finally compare the fingerprint recognition performance of the proposed TDSWR method with the MICPP, DP, and SRDP methods on DBI and DBII. Figure 10.9 shows the ROC curves of these methods, and the corresponding EERs are listed in Table 10.3. It can be seen that TDSWR outperforms both MICPP and DP, decreasing the EER by one order of magnitude on both DBI and DBII. Compared with SRDP, TDSWR has also improved the EER by more than 50% and 45% on DBI and DBII, respectively. This fully demonstrates the effectiveness of TD over ED for fingerprint pore matching. We believe that the improvement achieved by the proposed TDSWR method is due to the following three factors. First, the hierarchical strategy makes the matching method more robust to the instability of pores. Second, the TD-Sparse method used to find coarse pore correspondences is not only robust to noise, which has been demonstrated in [18], but also robust to fingerprint distortion by using TD instead of ED in sparse representation. Third, the one-to-many coarse pore correspondence establishment scheme together with the WRANSAC based refinement makes it more effective and efficient to find the correct pore correspondences in fingerprints.
10.4.5 TDSWR Applied in Fingerprint Minutiae Matching

The proposed TDSWR is also suitable for minutiae matching in fingerprints. Figure 10.10 shows an example of fingerprint matching results based on minutiae. From
Fig. 10.10 Example matching results of TDSWR based on minutiae. (a) Two example fingerprint images with dotted extracted minutiae (41 in the left print and 47 in the right print). (b) Coarse minutiae correspondences (24 initial obtained minutiae pairs). (c) Final minutiae correspondences (15 true minutiae pairs)
Fig. 10.11 ROC for minutiae-based matching using TDSWR on DBII (EER = 10.98%)
the extraction result in Fig. 10.10a, we can see that there are missing (solid circles), spurious (dashed circles), and inaccurately extracted (solid rectangles) minutiae in both compared fingerprints. Our proposed method can effectively establish the coarse correspondences: as shown in Fig. 10.10b, there are 5 wrong correspondences among the 24 coarse ones, and 15 true correspondences are finally selected after refinement, as shown in Fig. 10.10c. Figure 10.11 also shows the fingerprint recognition performance (ROC) of minutiae-based matching on DBII using the proposed TDSWR method. The EER is about 11%, which further demonstrates the effectiveness of our proposed method for fingerprint matching. This example also demonstrates that the proposed TDSWR method can be used for other image matching problems, because the approach first constructs a local descriptor at the location of each feature point, then establishes coarse correspondences, and finally refines the coarse pairs to get the final result. The proposed TD-Sparse and WRANSAC methods are applicable to coarse and fine matching in general. Therefore, this method can be adapted to different image matching problems by constructing different local descriptors or by using different coarse (or fine) matching methods.
10.5 Summary
This chapter has proposed a novel hierarchical fingerprint matching method, namely TDSWR, which is mainly applied to sweat pores, by introducing the TD-Sparse based method for coarse pore correspondence establishment and WRANSAC for refinement. The proposed method measures the differences between pores based on the residuals obtained by the tangent distance and sparse representation technique, which makes our method more robust to noise and local distortions in fingerprints compared with the existing DP and SRDP methods. It then establishes one-to-many
coarse pore correspondences, and assigns to each correspondence a weight based on the difference between the pores in the correspondence. The final pore correspondences are obtained by adopting WRANSAC to refine the coarse pore correspondences. The experimental results demonstrate that the proposed method can more effectively establish pore correspondences and reduces the EER by one order of magnitude on both of the fingerprint databases used in the experiments (the best improvement in recognition accuracy is up to 92%). However, the high computational complexity is one of the limitations of the proposed method. How to further improve the efficiency of the proposed pore matching method is among our future work. One possible solution is to first align two fingerprints to estimate the overlapping area between them and then match only the pores lying in the overlapping area. More importantly, the method proposed in this chapter can be applied to other image matching problems with little or no modification. For example, it can be directly used for singular point and minutiae matching in fingerprints. By constructing different local descriptors or using different coarse (or fine) matching methods, this method can also be adapted to solve other image matching problems.
References 1. CDEFFS: Data format for the interchange of extended fingerprint and palmprint features. Working Draft Version 0.4, http://fingerprint.nist.gov/standard/cdeffs/index.html (June 2009) 2. Stosz, J.D., Alyea, L.A.: Automated system for fingerprint authentication using pores and ridge structure. Automatic Systems for the Identification and Inspection of Humans, vol. 2277. International Society for Optics and Photonics (1994) 3. Roddy, A.R., Stosz, J.D.: Fingerprint features—statistical analysis and system performance estimates. Proc. IEEE. 85(9), 1390–1421 (1997) 4. Kryszczuk, K.M., Morier, P., Drygajlo, A.: Study of the distinctiveness of level 2 and level 3 features in fragmentary fingerprint comparison. International Workshop on Biometric Authentication. Springer, Berlin/Heidelberg (2004) 5. Jain, A., Chen, Y., Demirkus, M.: Pores and ridges: fingerprint matching using level 3 features. 18th International Conference on Pattern Recognition (ICPR'06), vol. 4. IEEE (2006) 6. Jain, A.K., Chen, Y., Demirkus, M.: Pores and ridges: high-resolution fingerprint matching using level 3 features. IEEE Trans. Pattern Anal. Mach. Intell. 29(1), 15–27 (2007) 7. Zhao, Q., Zhang, L., Zhang, D., Luo, N.: Direct pore matching for fingerprint recognition. International Conference on Biometrics. Springer, Berlin/Heidelberg (2009) 8. Zhao, Q.: High Resolution Fingerprint Additional Features Analysis. Dissertation, The Hong Kong Polytechnic University (2010) 9. Kryszczuk, K., Drygajlo, A., Morier, P.: Extraction of level 2 and level 3 features for fragmentary fingerprint comparison. EPFL. 3, 45–47 (2008) 10. Ray, M., Meenen, P., Adhami, R.: A novel approach to fingerprint pore extraction. In: Proceedings of the Thirty-Seventh Southeastern Symposium on System Theory, 2005, SSST'05. IEEE (2005) 11. Parsons, N.R., et al.: Rotationally invariant statistics for examining the evidence from the pores in fingerprints. Law Prob. Risk. 7(1), 1–14 (2008) 12. Zhao, Q., et al.: Adaptive fingerprint pore modeling and extraction. Pattern Recogn. 43(8), 2833–2844 (2010)
13. Dong, Z., et al.: Matching images more efficiently with local descriptors. 2008 19th International Conference on Pattern Recognition. IEEE (2008) 14. Donoho, D.L.: For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Commun. Pure Appl. Math. 59(6), 797–829 (2006) 15. Daubechies, I., Defrise, M., De Mol, C.: An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 57(11), 1413–1457 (2004) 16. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press (2004) 17. Kim, S.-J., et al.: A method for large-scale l1-regularized least squares. IEEE J. Sel. Top. Sign. Proces. 1(4), 606–617 (2007) 18. Wright, J., et al.: Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 31(2), 210–227 (2009) 19. Simard, P., LeCun, Y., Denker, J.S.: Efficient pattern recognition using a new transformation distance. Advances in Neural Information Processing Systems (1993) 20. Keysers, D., et al.: Experiments with an extended tangent distance. In: Proceedings 15th International Conference on Pattern Recognition, ICPR-2000, vol. 2. IEEE (2000) 21. Keysers, D., et al.: Adaptation in statistical pattern recognition using tangent vectors. IEEE Trans. Pattern Anal. Mach. Intell. 26(2), 269–274 (2004) 22. Yan, S., Wang, H.: Semi-supervised learning by sparse representation. In: Proceedings of the 2009 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics (2009) 23. Ross, A., Dass, S., Jain, A.: A deformable model for fingerprint matching. Pattern Recogn. 38(1), 95–103 (2005) 24. Zhao, Q., et al.: Adaptive pore model for fingerprint pore extraction. 2008 19th International Conference on Pattern Recognition. IEEE (2008) 25. Liu, F., et al.: Fingerprint pore matching based on sparse representation. 2010 20th International Conference on Pattern Recognition. IEEE (2010) 26. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM. 24(6), 381–395 (1981) 27. Zuliani, M.: RANSAC for Dummies. Vision Research Lab, University of California, Santa Barbara (2009)
Chapter 11
Quality Assessment of High Resolution Fingerprints
Abstract High resolution fingerprint images have been increasingly used in fingerprint recognition. They can provide finer features (e.g. pores) than standard fingerprint images, which are expected to be helpful for improving recognition accuracy. It is therefore necessary to investigate whether or not existing quality assessment methods are suitable for high resolution fingerprint images. This chapter compares some typical quality indexes by analyzing the correlation between them and their ability to predict minutia-based and pore-based high resolution fingerprint recognition accuracy. Experimental results show that the indexes based on ridge orientation are more effective for high resolution fingerprint recognition systems. Keywords Fingerprint recognition · Quality assessment · High resolution fingerprint images · Quality index
11.1 Introduction
Fingerprint recognition is one of the most widely used biometric techniques for personal authentication. Usually, two fingerprint images are not compared directly; rather, the features on them are extracted and matched. Although level-1 and level-2 features such as singular points and minutiae (i.e., ridge endings and bifurcations) are the basis of most existing automatic fingerprint recognition systems (AFRS) [1], level-3 features such as pores are also very distinctive and are now increasingly used to improve fingerprint recognition accuracy, thanks to the advent of high resolution fingerprint imaging devices, which enable their reliable extraction from fingerprint images [2, 3]. When capturing fingerprint images, a number of factors can affect the image quality, e.g., the sensor noise, the acquisition conditions, and the finger skin conditions [1]. Figure 11.1 shows two example fingerprint images, one of good quality and the other of poor quality. Features on poor quality fingerprint images are difficult to detect precisely, and automatic algorithms often miss true features or detect spurious features. The recognition accuracy of AFRS will consequently be
Fig. 11.1 (a) A good quality high resolution fingerprint image. (b) A bad quality high resolution fingerprint image
degraded. Therefore, in order to ensure good performance of an AFRS, it is very important to evaluate the quality of captured fingerprint images. According to the quality of the fingerprint images, appropriate follow-up operations can then be taken, e.g. re-capturing the fingerprint if the image quality is too poor, or incorporating the quality into feature extraction and matching to augment the recognition accuracy [4]. Many different methods have been proposed in the literature for assessing the quality of fingerprint images [5–10]. High resolution (1000 ppi) fingerprint images have recently been increasingly used in fingerprint recognition. Compared with conventional low resolution (~500 ppi) fingerprint images, they provide more fine features on fingerprint ridges (e.g., pores) and can thus help to further improve the fingerprint recognition accuracy [2]. Because of the introduction of new features, it is unclear whether existing quality assessment methods are suitable for high resolution fingerprint images or not. This chapter will present a study on the quality assessment of high resolution fingerprint images.
11.2 Quality Assessment Methods
Existing fingerprint image quality assessment methods can be divided into three categories [10]. The first kind of methods is based on local features of fingerprint images, e.g. local ridge orientation and local ridge clarity. These methods are the most widely used. They usually divide a fingerprint image into a number of blocks and extract features from each block. The quality of each block is then assessed based on the extracted features, and a quality index is finally calculated for the whole fingerprint image according to the quality of all the local blocks on it. A variety of local features have been exploited, including ridge orientation [6, 8], pixel intensity [7], Gabor features [5], and power spectrum [7].
Two typical local feature based fingerprint image quality indexes are the orientation certainty level (OCL) [6] and the uniformity of local pixel intensity [7]. The OCL index measures the energy concentration along the dominant ridge orientation in a local block. It can be calculated based on the eigenvalues of the local structure tensor defined as

$$J = \frac{1}{|B|} \sum_{i \in B} \left( g_i^x, g_i^y \right)^T \left( g_i^x, g_i^y \right) \qquad (11.1)$$
where B is a local block and |B| denotes the number of pixels in B, $g_i^x$ and $g_i^y$ are respectively the x- and y-gradients at pixel i, and 'T' is the transpose operator. Let λ1 and λ2 (λ1 ≥ λ2) be the two eigenvalues. The OCL index in [6] is then defined as λ1/λ2. In contrast, the uniformity index evaluates the fingerprint image quality based on the degree to which similar pixels (i.e. ridges and valleys) cluster in the nearby region. To get the index value for a fingerprint image, the image is first binarized and the clustering factor is then calculated as the uniformity index (denoted as CFU). Methods in the second category assess the fingerprint image quality based on global features. They analyze fingerprint images in a holistic manner. For example, in [6], the authors consider the continuity of the fingerprint ridge orientation field and the uniformity of the fingerprint ridge frequency field over the fingerprint image. The method in [8] is based on the observation that the ridge frequency values on a fingerprint image lie within a certain range. It thus assumes that the energy in the spectrum becomes more concentrated in a few bands as the quality of the fingerprint image increases. Based on this assumption, the method defines a global quality index which measures the energy concentration by using entropy (we denote this method as EC). The third kind of methods predict fingerprint image quality by using classifiers such as neural networks [9]. In these methods, the quality measure is essentially defined as a degree of separation between the match and non-match score distributions of a given fingerprint based on some fingerprint features (e.g., minutiae). They classify fingerprint images into five classes of image quality, instead of generating continuous quality index values for them. From the above discussion, we can see that different quality indexes evaluate the fingerprint image quality from different aspects. As a result, they may be more effective in some cases, but less in others. Moreover, because the recognition performance of an AFRS highly depends on the extraction and matching accuracy of the features used for fingerprint recognition, a good quality index for an AFRS should correlate with the features used by the AFRS and thus be able to predict its performance. Motivated by this, in this study we investigate the effectiveness of existing fingerprint image quality assessment methods on high resolution fingerprint images when level-3 features (i.e., pores) are used for recognition, and aim to provide helpful guidance on choosing and designing quality indexes for high resolution fingerprint images.
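A blockwise OCL sketch following Eq. (11.1) is given below; the block size and the way the per-block ratios are aggregated into a single index are our own implementation choices rather than the exact procedure of [6].

```python
import numpy as np

def ocl_blocks(img, block=16):
    """Blockwise orientation certainty level following Eq. (11.1).

    For each block the 2x2 structure tensor J is formed from the x/y image
    gradients; the block's certainty is the eigenvalue ratio lambda1/lambda2
    (lambda1 >= lambda2), as defined in the text. Returns one value per
    block; aggregation into a single [0, 1] index is left to the
    normalization step described in Sect. 11.3.1.
    """
    gy, gx = np.gradient(img.astype(float))
    h, w = img.shape
    ratios = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            bx = gx[r:r + block, c:c + block].ravel()
            by = gy[r:r + block, c:c + block].ravel()
            J = np.array([[bx @ bx, bx @ by],
                          [bx @ by, by @ by]]) / bx.size   # Eq. (11.1)
            l2, l1 = np.linalg.eigvalsh(J)                 # ascending order
            ratios.append(l1 / l2 if l2 > 1e-12 else np.inf)
    return np.array(ratios)
```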
11.3 Performance Analysis and Comparison
11.3.1 Selected Quality Indexes

We considered two local feature based quality indexes, OCL [6] and CFU [7], and one global feature based quality index, EC [8]. These three quality indexes assess the fingerprint image quality based on local ridge orientation consistency, local ridge and valley clarity, and the concentration of the ridge frequency field, respectively. In the experiments, all quality index values were normalized into the range between 0 and 1, with 0 denoting the worst quality and 1 the best quality.
11.3.2 Database and Protocol

We collected a set of high resolution fingerprint images by using our custom-built ~1200 ppi fingerprint image acquisition device. There are in total 1480 images from 148 fingers, each finger having five images captured in each of two sessions (about 2 weeks apart). The spatial size of the images is 640 pixels in width and 480 pixels in height. The three quality indexes were calculated for each of the fingerprint images. The correlation between the quality indexes was then studied. In order to investigate the recognition accuracy prediction ability of the three quality indexes, we implemented two fingerprint matchers, one based on minutiae [11] and the other based on pores [12, 13]. Using each matcher, the following genuine and imposter matches were carried out. (1) Genuine matches: each of the fingerprint images in the second session was matched with all the fingerprint images of the same finger in the first session, resulting in 3700 genuine match scores. (2) Imposter matches: the first fingerprint image of each finger in the second session was matched with the first fingerprint images of all the other fingers in the first session, resulting in 21,756 imposter match scores. Following the strategy in [8], we sorted all the fingerprint images according to their quality, and pruned the p percent poorest quality fingerprint images. The match scores corresponding to these poor quality fingerprint images were removed, and the equal error rate (EER) was then calculated based on the remaining match scores. By investigating the EERs under different values of p, we can understand the recognition accuracy prediction ability of a quality index.
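This pruning protocol can be sketched as below; the dictionary/tuple bookkeeping is an assumed format for illustration only.

```python
import numpy as np

def equal_error_rate(genuine, imposter):
    """Threshold-sweep EER (same rule as the sketch in Chap. 10)."""
    genuine, imposter = np.asarray(genuine), np.asarray(imposter)
    thr = np.unique(np.concatenate([genuine, imposter]))
    far = np.array([(imposter >= t).mean() for t in thr])
    frr = np.array([(genuine < t).mean() for t in thr])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

def eer_after_pruning(quality, genuine, imposter, p):
    """Prune the p percent lowest-quality images, then recompute the EER.

    quality: dict image_id -> quality index value in [0, 1]; genuine and
    imposter: lists of (score, image_id_a, image_id_b) tuples.
    """
    ids = sorted(quality, key=quality.get)          # ascending quality
    pruned = set(ids[:int(len(ids) * p / 100.0)])   # worst p percent
    g = [s for s, a, b in genuine if a not in pruned and b not in pruned]
    i = [s for s, a, b in imposter if a not in pruned and b not in pruned]
    return equal_error_rate(g, i)
```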
11.3.3 Experimental Results and Analysis

Figure 11.2 plots the values of the three quality indexes on the high resolution fingerprint images used in the experiments. The Pearson correlation between each pair of the quality indexes is also given. From the results, we can see that the Pearson correlations between the three quality indexes are all below 0.5. This is because they
Fig. 11.2 Correlation between the three quality indexes on the high resolution fingerprint images used in the experiments
are based on different fingerprint features, and moreover, good quality of one kind of features does not necessarily guarantee good quality of other kinds of features. The fingerprint recognition accuracy with respect to the quality of fingerprint images is shown in Fig. 11.3. We considered pruning 5%, 10%, 15%, 20%, 25% and 30% of the poor quality fingerprint images. If a quality index is good at predicting the recognition accuracy, the EER is expected to decrease as more poor quality fingerprint images are pruned according to the quality evaluated by that index. However, according to the results in Fig. 11.3, the CFU quality index does not work at all for either the minutia-based or the pore-based fingerprint matcher on high resolution fingerprint images. This is better illustrated in Fig. 11.4. In the left fingerprint fragment, there are many pores (i.e. bright blobs) on the ridges, whereas in the right fingerprint fragment, very few pores appear on the ridges. By using the CFU quality index, the right fragment is evaluated to have better quality than the left one because the pores on the ridges decrease the clarity of the ridges. However, these pores are in fact useful features on high resolution fingerprint images. Consequently, the CFU quality index seems unsuitable for quality assessment of high resolution fingerprint images, as demonstrated in Fig. 11.3. The EC quality index is largely effective when the minutia-based fingerprint matcher is used, but does not work for the pore-based fingerprint matcher. As discussed in Sect. 11.2, the EC quality index essentially evaluates the fingerprint ridge frequency field. Such a frequency feature is important for the minutia extractor used here, which is based on fingerprint images enhanced by Gabor filters: the ridge frequency has to be estimated for the filters, and thus a good quality ridge frequency field is very important for the method. On the other hand, the pore extractor used in the experiments is less dependent on the ridge frequency. Moreover, the appearance of pores on ridges can bias the estimation of the ridge frequency and decrease the value of the EC quality index. Consequently, the EC quality index cannot well predict the recognition accuracy of the pore-based fingerprint matcher, as observed in Fig. 11.3. Different from the other two quality indexes, the OCL quality index can well predict the recognition accuracy of both the minutia-based and the pore-based fingerprint matcher on high resolution fingerprint images. The ridge orientation field plays an important role in the extraction of both minutiae and pores. Fingerprint images with a good quality ridge orientation field can thus benefit both minutia-based and pore-based fingerprint matchers. This is demonstrated by Fig. 11.3. Moreover, as small dots on ridges, the pores have little interference with the local dominant ridge orientation. Therefore, the OCL quality index still works well on high resolution fingerprint images and can well predict the recognition accuracy of fingerprint matchers when high resolution fingerprint images are used.
Fig. 11.3 The fingerprint recognition accuracy with respect to the quality of fingerprint images
Fig. 11.4 Two example high resolution fingerprint fragments, with CFU index values of (a) 0.09 and (b) 0.12, respectively
Fig. 11.5 Two impressions of the same finger at ~500 ppi (a) and ~1200 ppi (b)
11.3.4 Discussion

The above experiments clearly demonstrate that not all existing fingerprint image quality assessment methods are suitable for high resolution fingerprint images. This is due to the distinct characteristics of high resolution fingerprint images compared with low resolution ones. As shown in Fig. 11.5, on high resolution fingerprint images, many fine features on ridges become visible, e.g., pores (the bright blobs on ridges). Conventional fingerprint matchers for low resolution fingerprint images, however, treat such fine ridge features as noise rather than useful features, and hence suppress them with enhancement filters. As a result, many existing fingerprint image quality indexes cannot properly handle high resolution fingerprint images, because they were originally proposed for low resolution ones. One exception is the OCL quality index, which assesses fingerprint image quality based on the ridge orientation field. This is because the orientation field forms the basis of fingerprint matchers for both low resolution and high resolution fingerprint images.
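As a concrete illustration of why the OCL index is robust here, a common formulation computes, per image block, the eigenvalue ratio of the gradient covariance matrix. The sketch below follows that generic formulation and is not necessarily the exact variant evaluated in this chapter.

```python
# One common formulation of the OCL (orientation certainty level) quality
# measure: a strongly oriented ridge block has one dominant eigenvalue of
# the gradient covariance matrix, so lambda_min / lambda_max is small and
# 1 - ratio serves as a quality score in [0, 1]. Generic sketch only.
import numpy as np

def ocl_block(block):
    gy, gx = np.gradient(block.astype(float))
    a = np.mean(gx * gx)
    b = np.mean(gy * gy)
    c = np.mean(gx * gy)
    # Eigenvalues of the 2x2 covariance matrix [[a, c], [c, b]].
    d = np.sqrt((a - b) ** 2 + 4 * c ** 2)
    lmax, lmin = (a + b + d) / 2, (a + b - d) / 2
    if lmax <= 0:        # flat background block: no orientation certainty
        return 0.0
    return 1.0 - lmin / lmax
```

Small bright pores perturb the gradients only locally, so the dominant eigenvector, and hence the block's OCL value, is largely unaffected, consistent with the behavior observed in Fig. 11.3.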
11.4 Summary
A comparative study has been presented in this chapter for quality assessment of high resolution fingerprint images. The results show that not all existing image quality indexes are equally effective for high resolution fingerprint images when the fine ridge features (e.g., pores) on them are utilized in fingerprint recognition. Therefore, fingerprint image resolution and the specific features used for recognition have to be considered when designing fingerprint image quality indexes. In addition to the OCL quality index, which is suitable for high resolution fingerprint images, it is worthwhile to develop new indexes specifically designed for the fine ridge features on high resolution fingerprint images.
Chapter 12
Fusion of Extended Fingerprint Features
Abstract Extended fingerprint features such as pores, dots and incipient ridges have attracted increasing attention from researchers and engineers working on automatic fingerprint recognition systems. A variety of methods have been proposed to combine these features with the traditional minutiae features. This chapter comparatively analyses the parallel and hierarchical fusion approaches on a high resolution fingerprint image dataset. Based on the results, a novel and more effective hierarchical approach is presented for combining minutiae, pores, dots and incipient ridges.

Keywords Fingerprint recognition · High resolution fingerprint images · Extended fingerprint feature set · Fusion
12.1 Introduction
Traditional automated fingerprint recognition systems (AFRS) are mainly based on the minutiae features, i.e., ridge endings and bifurcations [1, 2]. In recent years, increasing attention has been paid to extended fingerprint features such as pores, dots and incipient ridges (see Fig. 12.1) for the purpose of further enhancing the fingerprint recognition accuracy. While extended features are routinely used by experts in manual latent fingerprint matching [3, 4], they have been exploited in AFRS only recently thanks to the advent of high quality fingerprint sensors [5]. They have been proven able to improve the accuracy of AFRS when combined with minutiae [6–11]. A number of methods, including parallel and hierarchical fusion approaches, have been proposed for fusing minutiae and extended features. In [7, 9], for example, the authors combined minutiae with pores and with dots and incipient ridges by summing up their match scores, which is parallel fusion. In [5], on the other hand, pores were compared first, and if they could not be well matched, the input fingerprint was rejected directly; otherwise, minutiae were further compared. This is hierarchical fusion, which was also used by [8, 10]. The method of [8] proceeds from level-1 features to level-3 features, and directly rejects the input fingerprint if any of the features are found to be unmatched. In [10], level-2 (i.e., minutiae) features were also
Fig. 12.1 Example extended fingerprint features: pores, dots and incipient ridges
compared before level-3 (i.e., pores, ridge contours, and edgeoscopic) features, but the input fingerprint was directly accepted (rather than rejected) if the level-2 features matched well, in which case the level-3 features were not further compared. The above discussion shows that, given the various approaches in the literature for fusing extended features and minutiae, an extensive study is needed to investigate how extended fingerprint features can be best utilized by AFRS. Toward this end, this chapter comparatively analyzes the parallel and hierarchical fusion approaches on a high resolution fingerprint image dataset. Some basic problems in designing hierarchical fusion approaches for fingerprint features are also discussed. A novel hierarchical fusion method is then presented to combine minutiae, pores, dots and incipient ridges, and it is shown to be more effective than existing approaches.
12.2 Fusion Approaches
12.2.1 Parallel Fusion

Generally, the fusion in a biometric system can be fulfilled at four different levels, i.e., sensor level, feature level, score level, and decision level [12]. This study focuses on the score level fusion of minutiae and extended fingerprint features, which is the most widely used in fingerprint recognition. In parallel fusion approaches, different fingerprint features are matched separately and simultaneously, each defining a matcher. Score normalization is then applied to change the location and scale parameters of the match score distributions at the outputs of the individual matchers, so that the match scores of different matchers are transformed into a common domain [12]. The min-max (MMN) and z-score normalization techniques [12] are considered in our experiments. When applying z-score normalization, the mean and standard deviation of the match score distribution are estimated in two different ways: one uses the mean and standard deviation of genuine match scores (ZNG), and the other uses those of imposter match scores (ZNI). After normalization, the match scores of the individual matchers are combined into one single final score for the input fingerprint by using the min (MIN), max (MAX), or simple sum (SSUM) rule. The MIN and MAX rules respectively pick as the final score the minimum and
maximum of the match scores of all individual matchers, whereas the SSUM rule takes the summation of the match scores as the final score [12].
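To make the above rules concrete, here is a minimal sketch of score-level parallel fusion; it is our illustration with assumed array-based inputs, not code from the book.

```python
# Minimal sketch of score-level parallel fusion. `scores` is an
# (n_samples, n_matchers) array of raw match scores; for z-score
# normalization, `mean` and `std` would be estimated from genuine (ZNG)
# or imposter (ZNI) training scores. Illustrative only.
import numpy as np

def min_max_norm(scores):
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    return (scores - lo) / (hi - lo)        # MMN: map each matcher to [0, 1]

def z_norm(scores, mean, std):
    return (scores - mean) / std            # ZNG or ZNI, per choice of statistics

def fuse(normed, rule="SSUM"):
    if rule == "MIN":
        return normed.min(axis=1)           # most pessimistic matcher decides
    if rule == "MAX":
        return normed.max(axis=1)           # most optimistic matcher decides
    return normed.sum(axis=1)               # SSUM: simple sum rule
```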
12.2.2 Hierarchical Fusion

Hierarchical fusion, in contrast to parallel fusion, runs the matchers serially. The recognition process can stop at any matcher if that matcher can already make a decision with high confidence; otherwise, it proceeds to the matcher at the next layer until a decision can be made. Two basic problems are involved in designing hierarchical fusion approaches: (1) in which order should the matchers/features be applied, and (2) in what manner should the features be used? For example, a feature can be used in a positive manner, i.e., if its match score is above a given threshold, the input fingerprint is directly accepted as genuine; or in a negative manner, i.e., if its match score is not above the threshold, the input fingerprint is directly rejected as an imposter. Different orders and both manners have been used in previous work [5–10]. It is thus worth investigating which approach is better for combining minutiae and extended fingerprint features. In this chapter, we consider two different orders: from minutiae to pores to dots and incipient ridges (denoted as MPD), and the inverse order (denoted as DPM). In each order, we consider using the features in both the positive (denoted as P) and negative (denoted as N) manners. Thus, four different hierarchical fusion approaches are studied here: MPD_P, MPD_N, DPM_P, and DPM_N, among which DPM_P is proposed here for the first time. Figure 12.2 shows the flowcharts of MPD_N and DPM_P as examples. In the figure, sm, sp, and sd denote the match scores of minutiae, pores, and dots and incipient ridges, respectively.
Fig. 12.2 The flowcharts of two example hierarchical fusion approaches: (a) MPD_N and (b) DPM_P
With respect to the associated thresholds tm, tp, and td, we chose them in the experiments according to the manner in which the features were used. Specifically, if a feature is used in a positive manner on the earlier layers of the fusion hierarchy, we selected its threshold as the minimum of the thresholds that produce the minimum false acceptance rate (FAR), because such a threshold gives the lowest false rejection rate (FRR) among all thresholds corresponding to the minimum FAR. Similarly, for a negatively used feature, we selected its threshold as the maximum of the thresholds that lead to the minimum FRR.
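For concreteness, the DPM_P cascade of Fig. 12.2b and the positive-manner threshold rule just described can be sketched as follows; all names are ours and the code is an illustration, not the authors' implementation.

```python
import numpy as np

def positive_threshold(genuine, imposter):
    # Smallest threshold attaining the minimum achievable FAR, which by the
    # rule above also yields the lowest FRR among such thresholds.
    candidates = np.unique(np.concatenate([genuine, imposter]))
    far = np.array([(imposter > t).mean() for t in candidates])
    return candidates[far == far.min()].min()

def dpm_p(sd, sp, sm, td, tp, tm):
    # DPM_P: dots/incipient ridges, then pores, then minutiae, each used in
    # a positive manner (early accept on a confident score; reject only
    # after the last layer).
    if sd > td:
        return "accept"
    if sp > tp:
        return "accept"
    return "accept" if sm > tm else "reject"
```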
12.3 Fusion Approaches
12.3.1 Datasets and Algorithms

A set of high resolution (~1200 dpi) fingerprint images was collected with our custom-built acquisition device to evaluate the different fusion approaches. The dataset consists of 1480 fingerprint images from 148 fingers, each finger having five images captured in each of two sessions (about 2 weeks apart). The images have a spatial size of 640 × 480 pixels. The minutiae were extracted from the fingerprint images and matched by using the method in [13]. The pores on the images were extracted by using an improved version of the method in [14], and matched by using the method in [15]. The dots and incipient ridges were extracted and matched by using methods similar to those in [14, 15]. Note that the minutiae match score between two fingerprints is defined as the percentage of matched minutiae among the complete set of minutiae on the two fingerprints. The pore match score between them is defined as the number of finally matched pores on them. The dots and incipient ridges are matched together by representing both with the coordinates of their centers, and their match score on two fingerprints is likewise defined as the number of matched dots and incipient ridges. With the above feature extraction and matching methods, the following matches were conducted for each individual feature. (1) Genuine matches: each fingerprint image in the second session was matched with all the fingerprint images of the same finger in the first session, resulting in 3700 genuine match scores. (2) Imposter matches: the first fingerprint image of each finger in the second session was matched with the first fingerprint images of all the other fingers in the first session, resulting in 21,756 imposter match scores. Based on these match scores, the equal error rates (EERs) and the receiver operating characteristic (ROC) curves of the different parallel and hierarchical fusion methods were calculated. We report and analyze the obtained results as follows.
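For reference, the EER can be obtained from the two score sets with a simple threshold sweep; the sketch below is one standard formulation of this computation, not the book's evaluation code.

```python
# Compute the equal error rate (EER) from genuine and imposter match
# scores (here, 3700 and 21,756 scores per feature, respectively) by
# sweeping candidate thresholds and locating the point where the false
# acceptance rate (FAR) and false rejection rate (FRR) are closest.
import numpy as np

def eer(genuine, imposter):
    thresholds = np.unique(np.concatenate([genuine, imposter]))
    far = np.array([(imposter >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2, thresholds[i]   # EER estimate and its threshold
```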
12.3.2 Fusion Results

The EERs of the different parallel fusion approaches are listed in Table 12.1. The best result among these parallel fusion approaches is obtained by MMN + SSUM, and the corresponding ROC curve (denoted by MPD_MMN_SSUM) is plotted in Fig. 12.3 together with the ROC curves of the different hierarchical fusion approaches. For comparison, the ROC curves of the individual features are also displayed in Fig. 12.3.
Table 12.1 The equal error rates of different parallel fusion methods

        MIN (%)   MAX (%)   SSUM (%)
MMN     13.54     0.71      0.65
ZNG     44.61     13.92     0.92
ZNI     19.48     16.22     0.80

12.3.3 Analysis

The above results show that the best EERs obtained by parallel and hierarchical fusion are 0.65% and 0.508%, respectively. This supports the conclusion of previous studies [7, 8] that hierarchical fusion is better than parallel fusion in combining fingerprint features.
Fig. 12.3 The ROC curves of individual features and different parallel and hierarchical fusion approaches
However, our results also show that, in order to obtain the best performance with hierarchical fusion, the order and manner of using the fingerprint features must be carefully considered; if the features are not properly utilized, hierarchical fusion can even perform worse than parallel fusion. Among the different hierarchical fusion approaches, the positive manner outperforms the negative one in all cases. When the features are used in a negative manner, the performance after fusion becomes even worse than that achieved by the individual features. To better understand why the positive manner is better, we show in Fig. 12.4 the match score distributions of the individual features, i.e., minutiae, pores, and dots and incipient ridges. We also calculated the following statistics on these match scores: the minimum (Min), maximum (Max), mean (Mean), and standard deviation (Std) of the genuine and imposter match scores. The results are presented in Table 12.2. According to Fig. 12.4 and Table 12.2, the genuine and imposter match scores of all the considered features overlap in the lower part of the match score range. This means that some genuine fingerprint pairs also obtain low match scores. Consequently, directly rejecting an input fingerprint merely because its match score for a certain feature is small, as done on the earlier layers of negative-manner hierarchical fusion, is very likely to produce false rejections. On the other hand, because there is no overlap between genuine and imposter match scores in the higher part of the score range, we can accept with high confidence an input fingerprint that obtains a high match score. Furthermore, when the positive manner is adopted, the fusion from DOTINR to PORE to MINU outperforms the fusion in the inverse order. This is interesting, considering that in most previous studies [8, 10] the level-2 fingerprint features were compared before the level-3 ones. The results here, however, show that using the extended fingerprint features in a positive manner and on the earlier layers of the hierarchical fusion system can further improve the overall accuracy of AFRS.
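The asymmetry underlying this argument is easy to quantify from the score sets themselves; the short sketch below is ours, not from the book, and measures the fraction of genuine scores trapped in the overlap region versus those above the highest imposter score.

```python
# Quantify the genuine/imposter overlap discussed above: genuines at or
# below the maximum imposter score are at risk of false rejection under a
# negative-manner threshold, while genuines above it can be accepted at
# zero FAR (positive manner).
import numpy as np

def overlap_report(genuine, imposter):
    cutoff = imposter.max()
    return {
        "genuines_in_overlap": (genuine <= cutoff).mean(),
        "safe_accept_fraction": (genuine > cutoff).mean(),
    }
```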
12.4 Summary
This chapter comparatively studied parallel and hierarchical fusion approaches for combining minutiae and extended fingerprint features. The results showed the advantages of hierarchical fusion over parallel fusion, and demonstrated the importance of fusion order and manner in designing hierarchical fusion approaches. A more effective hierarchical fusion approach has also been presented in the study.
Fig. 12.4 The match score distributions of (a) minutiae, (b) pores, and (c) dots and incipient ridges
Table 12.2 The (genuine, imposter) match score statistics of individual features

          Min       Max            Mean           Std
MINU      (0, 0)    (0.99, 0.23)   (0.61, 0.02)   (0.179, 0.036)
PORE      (5, 0)    (396, 9)       (68, 6)        (63.5, 0.6)
DOTINR    (0, 0)    (65, 14)       (5.9, 2.6)     (6.96, 1.63)
References

1. Ratha, N., Bolle, R.: Automatic Fingerprint Recognition Systems. Springer, New York (2004)
2. Maltoni, D., Maio, D., Jain, A.K., Prabhakar, S.: Handbook of Fingerprint Recognition, 2nd edn. Springer, New York (2009)
3. Ashbaugh, D.R.: Quantitative-Qualitative Friction Ridge Analysis: An Introduction to Basic and Advanced Ridgeology. CRC Press LLC, Boca Raton (1999)
4. CDEFFS: Data Format for the Interchange of Extended Fingerprint and Palmprint Features. Version 0.4 (2009)
5. Stosz, J.D., Alyea, L.A.: Automated system for fingerprint authentication using pores and ridge structure. SPIE. 2277, 210–223 (1994)
6. Kryszczuk, K., Morier, P., Drygajlo, A.: Study of the distinctiveness of level 2 and level 3 features in fragmentary fingerprint comparison. BioAW. LNCS 3087, 124–133 (2004)
7. Jain, A.K., Chen, Y., Demirkus, M.: Pores and ridges: fingerprint matching using level 3 features. ICPR'06, 4, pp. 477–480 (2006)
8. Jain, A.K., Chen, Y., Demirkus, M.: Pores and ridges: high-resolution fingerprint matching using level 3 features. IEEE Trans. PAMI. 29(1), 15–27 (2007)
9. Chen, Y., Jain, A.K.: Dots and incipients: extended features for partial fingerprint matching. Presented at Biometric Symposium, Baltimore (2007)
10. International Biometric Group: Analysis of Level 3 Features at High Resolutions. Phase II – Final Report (2008)
11. Vatsa, M., Singh, R., Noore, A., Singh, S.K.: Combining pores and ridges with minutiae for improved fingerprint verification. Signal Process. 89, 2676–2685 (2009)
12. Ross, A., Nandakumar, K., Jain, A.K.: Handbook of Multibiometrics. Springer, New York (2006)
13. Feng, J.: Combining minutiae descriptors for fingerprint matching. Pattern Recogn. 41, 342–352 (2008)
14. Zhao, Q., Zhang, L., Zhang, D., Luo, N., Bao, J.: Adaptive pore model for fingerprint pore extraction. ICPR'08 (2008)
15. Zhao, Q., Zhang, L., Zhang, D., Luo, N.: Direct pore matching for fingerprint recognition. ICB'09, pp. 597–606 (2009)
Chapter 13
Book Review and Future Work
Abstract This chapter concludes the book. We first highlight again the motivation for compiling such a book on the topic of advanced fingerprint recognition technology, in particular 3D fingerprints and high resolution fingerprints. We then briefly summarize the content of each chapter in the main body of this book. Finally, we discuss the remaining challenges in deploying 3D and high resolution fingerprint recognition systems in real-world scenarios, and suggest research directions for future study, including developing compact sensors, reducing computational complexity, large-scale evaluation, full utilization of fingerprint features, and the application of emerging learning techniques and imaging technologies.

Keywords Fingerprint recognition · High resolution fingerprints · 3D fingerprints · Fingerprint features
13.1 Book Recapitulation
Fingerprints, one of the most widely used biometric traits, refer to the pattern of interleaved ridges and valleys on fingertips. The use of fingerprints as identity evidence dates back thousands of years. However, the scientific foundation of fingerprint-based personal identification was not established until the nineteenth century, when forensic experts systematically utilized fingerprints to identify criminals and victims. Since the 1950s, automated fingerprint recognition systems (AFRSs) have developed rapidly thanks to the advancement of computing power, fingerprint acquisition technology, and fingerprint processing algorithms. AFRSs are now deployed in both forensic and non-forensic applications. For instance, in forensics, fingerprints are used as important legal evidence by law-enforcement agencies; in civilian applications, fingerprints are employed for access and attendance control as well as for other identity services. The capacity of fingerprints to distinguish different persons highly depends on the discriminative features that can be extracted from the captured fingerprint images. Fingerprint features on 2D images are generally divided into three levels according to their scales. Global ridge patterns, local ridge singularities (e.g., ridge endings and
bifurcations, also known as minutiae), and sweat pores (located on ridges) are typical level-1, level-2, and level-3 features, respectively. Level-1 and level-2 features can be reliably extracted from 500 ppi fingerprint images, which are easily obtained with low-cost sensors, and are thus well studied and extensively used by contemporary AFRSs. In contrast, limited attention has been given to level-3 features and the shape of fingers in the literature on automatic fingerprint recognition, though forensic experts routinely utilize such extended features in matching fingerprints. The advancement of three-dimensional (3D) and high resolution imaging technology in the past decade has made it feasible to capture 3D fingerprints and high resolution fingerprints, from which finger shape and level-3 features can be extracted and explored for fingerprint recognition. This book is based on our research, focusing on advanced fingerprint recognition technology using 3D fingerprint features (i.e., finger shape, a kind of level-0 feature) and high resolution fingerprint features (i.e., ridge detail, a kind of level-3 feature). The book begins with an introductory chapter clarifying the background and motivation of research on advanced fingerprint recognition technology, and is organized into two main parts focusing on 3D fingerprints and high resolution fingerprints, respectively.

PART I introduces 3D fingerprints and comprises four chapters (Chaps. 2, 3, 4, and 5). In Chap. 2, we summarize the background and technologies involved in 3D AFRSs; in particular, the ways to generate 3D fingerprints, 3D fingerprint features, and 3D fingerprint recognition systems and applications are thoroughly reviewed. Chapter 3 first discusses feature-based 3D reconstruction models for close-range objects. Then, three criteria, derived by analyzing features on representative objects, are set to guide the selection of feature correspondences for more accurate 3D reconstruction. These criteria are applied to 3D fingerprint reconstruction from multi-view fingerprint images. The effectiveness of the reconstruction method is finally verified by comparison with the corresponding 3D point cloud data obtained by the structured light illumination (SLI) technique, which is taken as ground truth in that chapter. In Chap. 4, we introduce an accurate touchless 3D fingerprint recognition system that matches low resolution fingerprint features from the intermediate form of 3D fingerprints. The distal interphalangeal crease (DIP)-based feature is extracted from multi-view fingerprint images (i.e., the intermediate form of 3D fingerprints) to achieve more accurate recognition. The experimental results demonstrate the recognition accuracy of the proposed system. Chapter 5 focuses on the analysis of 3D fingerprint features and their applications. The 3D fingerprints discussed in that chapter are imaged by the structured light illumination method. Additional features from 3D fingerprint images are then studied and extracted, and their applications are discussed, such as fingerprint alignment, fingerprint recognition, and more accurate singular point location. From the experimental results, we can see that a quick alignment can easily be implemented under the guidance of the 3D finger shape feature, even though this feature alone does not suffice for fingerprint recognition. The newly defined distinctive 3D shape ridge feature can be used for personal authentication.
It is also helpful for removing false core points. Furthermore, a promising EER can be achieved by combining 3D fingerprint features with 2D features for fingerprint recognition.

PART II discusses high resolution fingerprints and consists of seven chapters (Chaps. 6, 7, 8, 9, 10, 11, and 12). Chapter 6 introduces the background and development of fingerprint recognition using high resolution images. It first discusses the significance of high resolution fingerprint recognition, and then introduces fingerprint features, particularly the features available on high resolution fingerprint images. Some high resolution fingerprint recognition systems are then discussed, followed by benchmarks of high resolution fingerprint images in the literature and the recent development of deep learning based fingerprint recognition methods. Finally, a brief summary of the chapters in Part II is given. Chapter 7 addresses the resolution problem in high resolution fingerprint acquisition. Resolution is one of the main parameters affecting the quality of a digital fingerprint image, and it directly influences the cost, interoperability, and performance of an AFRS. In that chapter, a multi-resolution fingerprint acquisition device is designed to collect fingerprint images at various resolutions but with a fixed image size. The two most representative fingerprint features, minutiae and pores, are then chosen to guide the setting of selection criteria through theoretical analysis and experiments. Finally, a reference resolution is recommended for high resolution fingerprint acquisition. In Chap. 8, we focus on how to extract pores from high resolution fingerprint images. Specifically, two pore extraction methods are introduced. The first is based on a dynamic anisotropic pore model that describes pores more accurately by using orientation and scale parameters; an adaptive pore extraction method is developed based on this model. It first partitions the fingerprint image into well-defined, ill-posed, and background blocks, then computes a local instantiation of the appropriate pore model according to the dominant ridge orientation and frequency of each foreground block, and finally extracts the pores by filtering the block with the adaptively generated pore model. The second method is a coarse-to-fine detection method based on convolutional neural networks (CNN) and logical operations: pore candidates are first coarsely estimated using logical operations; the coarse candidates are then refined by well-trained CNN models; precise pore locations are finally obtained by logical and morphological operations. Extensive experiments are performed, and the results demonstrate that both methods detect pores accurately and robustly, and consequently improve the fingerprint recognition accuracy. In Chap. 9, an application of pores is shown for the alignment of high resolution partial fingerprint images. Pores are first extracted from the fingerprint images by using a difference of Gaussian filtering approach. After pore detection, a novel pore-valley descriptor (PVD) is employed to characterize pores based on their locations and orientations, as well as the ridge orientation fields and valley structures around them. A PVD-based coarse-to-fine pore matching algorithm is then developed to locate pore correspondences. Once the corresponding pores are determined, the
alignment transformation between two partial fingerprints can be estimated. The method is compared with representative minutia-based and orientation-field-based methods. The experimental results show that the PVD-based method can more accurately locate corresponding feature points and estimate the alignment transformations, and hence significantly improve the accuracy of high resolution partial fingerprint recognition. In Chap. 10, we mainly discuss an effective fingerprint pore matching method, which matches pores in fingerprints by adopting a coarse-to-fine strategy. In the coarse matching step, a sparse representation method is employed to describe pore patches and establish coarse pore correspondences. To make the representation more robust to noise and distortions in fingerprint images, tangent distance is introduced into the sparse representation. In the fine matching step, false pore correspondences are further excluded by a weighted Random Sample Consensus (WRANSAC) algorithm, in which the weights of pore correspondences are determined by the dissimilarity between the pores in the correspondences. Experimental results show that much higher recognition accuracy can be achieved. Chapter 11 focuses on the quality assessment of high resolution fingerprint images. A comparative study is presented, considering a number of image quality indexes in the literature. The results show that not all existing image quality indexes are equally effective for high resolution fingerprint images when the fine ridge features (e.g., pores) on them are utilized in fingerprint recognition. The study suggests that fingerprint image resolution and the specific features used for recognition have to be considered when designing fingerprint image quality indexes. In Chap. 12, we exploit the extended fingerprint feature set for improved recognition accuracy, focusing on strategies for fusing the multiple fingerprint features available on high resolution fingerprint images. To this end, we comparatively study parallel and hierarchical fusion approaches for combining minutiae and extended fingerprint features (i.e., pores, dots and incipient ridges). The results show the advantages of hierarchical fusion over parallel fusion, and demonstrate the importance of fusion order and manner in designing hierarchical fusion approaches. Based on the results, a novel and more effective hierarchical approach is devised for combining minutiae, pores, dots and incipient ridges.
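To give a flavor of the pore detection step summarized above for Chap. 9, a bare-bones difference-of-Gaussians (DoG) detector might look as follows; the scales and threshold here are illustrative assumptions, not the book's tuned parameters.

```python
# Toy DoG pore detector: pores appear as small bright blobs on ridges, so
# local maxima of a fine-minus-coarse Gaussian response above a threshold
# are taken as pore candidates. Sigmas and threshold are illustrative only.
import numpy as np
from scipy import ndimage

def dog_pore_candidates(img, sigma_fine=1.5, sigma_coarse=3.0, thresh=0.05):
    img = img.astype(float) / 255.0
    dog = (ndimage.gaussian_filter(img, sigma_fine)
           - ndimage.gaussian_filter(img, sigma_coarse))
    peaks = (dog == ndimage.maximum_filter(dog, size=5)) & (dog > thresh)
    return np.argwhere(peaks)   # (row, col) pore candidate coordinates
```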
13.2 Challenges and Future Work
In contrast to the extensive study on conventional 2D fingerprint images and the wide application of minutiae-based AFRSs, little research has been done on 3D fingerprints and high resolution fingerprints, and 3D and high resolution fingerprint features are seldom used in practical AFRSs, although they have been proven effective in improving fingerprint recognition performance. This book systematically introduces different aspects of 3D and high resolution fingerprints, including acquisition devices, pre-processing, quality assessment, feature extraction, feature matching, and applications. We believe that 3D and high resolution fingerprints, as
advanced fingerprint recognition technology, have great potential in next generation AFRSs with the advancement of relevant techniques. Before closing this book, we discuss some challenges in the development of practical AFRSs based on 3D and high resolution fingerprints, which deserve research efforts in future work.

Compact Sensors Reducing the size, power consumption, and cost of fingerprint acquisition devices is of significant importance to the deployment of AFRSs. The application of fingerprint recognition for personal authentication on mobile devices such as smart phones further increases the demand for compact fingerprint sensors. However, current 3D and high resolution fingerprint sensors are still relatively bulky and expensive, which is a major barrier to the adoption of 3D and high resolution AFRSs in real-world applications.

Computational Efficiency Practical AFRSs place high requirements on fingerprint matching speed. However, existing 3D and high resolution fingerprint matching methods have relatively high computational cost. For example, the computational complexity of the pore matching methods discussed in this book is very high, owing to the large number of pores even in a small patch of a fingerprint image. Such high computational complexity hinders the application of pore-based AFRSs.

Large-Scale Evaluation Due to the limited public resources in the literature on 3D and high resolution fingerprints, existing studies use only small databases (up to hundreds of fingers). However, practical applications usually require extremely low false acceptance rates, which can be reliably estimated only when sufficiently large benchmark data sets are available. Therefore, evaluating 3D and high resolution fingerprint recognition methods on large-scale databases with a wide variety of fingerprint image qualities is highly demanded.

Full Utilization of Fingerprint Features Compared with the feature set used by human fingerprint examiners, the fingerprint features utilized by contemporary AFRSs are limited. Moreover, thanks to the development of fingerprint imaging techniques, a richer set of features can be observed on captured fingerprint images. For instance, in the 3D case, additional spatial coordinates and orientations are available, and we believe that effectively exploiting the merits of 3D fingerprint images and fusing different levels of fingerprint features will undoubtedly benefit high security fingerprint recognition.

Learning-Based Approaches It has been well proven that, in many pattern recognition tasks, features learned by emerging deep learning techniques are superior to handcrafted features. Similar observations have been made in fingerprint feature extraction and matching using images of 500 dpi. Although some deep learning based methods have been proposed for extracting and matching pores on high resolution fingerprint images, further studies are highly expected on the effectiveness of learning-based approaches for processing and recognizing 3D and high resolution fingerprints, especially as large-scale data sets become available. Achievements along this direction might lead to an innovation to the classical
pipeline of AFRSs, one which resembles how human fingerprint examiners match fingerprints.

Emerging New Imaging Technologies Heterogeneous fingerprints have their own characteristics: 3D fingerprints stand out in preserving the 3D profile of the human finger, while high resolution fingerprints have the advantage of providing level-3 fingerprint features. Fingerprints integrating the merits of both will contribute to building more robust and high-security AFRSs. Recently, the introduction of Optical Coherence Tomography (OCT) into fingerprint imaging has made it possible to obtain high resolution 3D fingerprint images. Making full use of OCT-based fingerprints will greatly promote the development of AFRSs with both high recognition accuracy and anti-spoofing ability.
Index
A
Adaptive pore model, 115
Advanced fingerprint, v, 1–3, 5, 59, 89, 208, 210
Anti-spoofing, 12, 84, 212
Automated fingerprint recognition systems (AFRSs), v, 2, 3, 5, 9, 59, 78, 79, 165, 166, 207, 208, 210–212

B
Biometric, v, 17, 33, 34, 59–61, 78, 82, 83, 89, 90, 108, 109, 189, 200, 207

C
Camera calibration, 19, 37
Competitive coding scheme, 45, 70
Convolutional neural network (CNN), 84, 109, 129–133, 209
Core points, 27, 34, 60, 66, 69, 70, 73, 209
Curve-skeleton, 71, 72

D
Distal interphalangeal crease (DIP), 12, 34, 40–43, 45–47, 70, 72, 208
Distinctive 3D shape ridge feature, 60, 64, 69–73, 208
Dynamic anisotropic pore model (DAPM), 108, 109, 111–115, 117, 119, 121, 134, 135, 209

E
Enhancement, 34, 196
Equal error rates (EER), 36, 48, 49, 51, 52, 55, 60, 70, 71, 73, 82, 102, 103, 124, 126, 159, 160, 178, 180, 182, 183, 185, 186, 192, 194, 202, 209
Extended fingerprint feature set, 80, 210

F
Fingerprint acquisition, 3, 11, 36, 59, 89–104, 207, 209, 211
Fingerprint alignment, 6, 84, 139–162, 208
Fingerprint classification, 78, 89
Fingerprint coarse alignment, 67
Fingerprint feature, v, 1–6, 9–12, 18, 28–30, 35, 36, 40–48, 50–52, 59, 60, 63–66, 70–73, 78–85, 89, 95, 141, 144, 191, 194, 199–212
Fingerprint matching, 48, 60, 65, 72–73, 82, 84, 95, 108, 139, 142, 165, 166, 184, 185, 199, 211
Fingerprint recognition, v, 2–6, 10, 33–36, 48, 55, 59, 60, 65–67, 73, 78, 80–85, 89–92, 97–100, 102, 108, 124, 140–143, 146, 147, 158–162, 180–183, 185, 189–191, 197, 200, 208–211
Fingerprint recognition accuracy, 6, 78, 82, 91, 95, 102–103, 151, 178, 180, 189, 190, 194, 195, 199, 209
Finger width, 40, 46
Fusion, v, 6, 12, 48, 49, 51, 52, 54, 55, 60, 72, 73, 102, 123, 124, 126, 199–204, 210
G
Genuine match score, see Match score
Genuine pair, 49–51, 175, 180
H
Hierarchical fusion, 199–204, 210
High-resolution AFRS, 82, 90, 91, 96, 98, 103, 104
High resolution fingerprint, v, 3–6, 77–85, 89–104, 108, 132, 140, 141, 162, 166, 189–197, 208–212
High resolution fingerprint image, 4, 81–85, 95, 109, 110, 140, 141, 178, 190–194, 196, 197, 200, 209–211

I
Image resolutions, 11, 40, 43, 91, 95–98, 120, 197, 210
Imaging technologies, v, 4, 9, 81, 208, 212
Imposter pair, 49–51, 53, 175

L
Latent fingerprints, 3, 34, 199
Learning-based pore extraction, 128–134
Level 0 features, 40, 208
Level-3 features, v, 2–5, 40, 78, 80–82, 84, 90, 140, 144, 189, 191, 199, 208

M
Match score, 47–49, 52, 67, 68, 70, 71, 123, 124, 126, 158, 159, 178, 192, 199–202, 204–206
Minutiae, v, 1–5, 9, 10, 28, 29, 34, 35, 48, 50–56, 60, 70, 78–80, 82–84, 89–91, 95–104, 108, 123, 124, 126, 139–143, 147, 154, 155, 158, 159, 161, 165, 166, 178–180, 184–186, 189, 191, 192, 194, 199–202, 204, 205, 208–210
Model-based pore extraction, 110–128
Multi-view images, 36, 52
O
Optical Coherence Tomography (OCT), 212
Optical fingerprint sensors, 91
P
Partial fingerprint, 3, 6, 82–84, 122–123, 127, 139–162, 209, 210
Point cloud, 4, 15, 18, 23, 26–28, 30, 60–70, 72, 73, 208
Pore, 2, 5, 6, 78, 80–84, 90–92, 95–104, 107–136, 140, 141, 143–151, 153–156, 158, 159, 161, 162, 165–186, 189–192, 194, 196, 197, 199–202, 204–206, 208–211
Q
Quality assessment, 6, 84, 189–197, 210
Quality index, 152, 190–192, 194, 196, 197

R
Reconstruction criteria, 18–23, 28, 30
Region of interest (ROI), 37–39, 63
Representative fingerprint features, 90, 209
Ridge details, 78, 208

S
Selecting resolution criteria, 95–99
Sparse representation, 73, 167, 169, 172, 183, 185, 210
Stereo vision, 17, 18
Structured light illumination (SLI), 12, 18, 26, 28, 30, 60, 208

T
Tangent distance, 167, 169, 185, 210
TD-Sparse, 167–172, 174, 180–183, 185
3D fingerprint recognition, v, 3, 5, 9–12, 60, 72, 73, 208
3D fingerprint reconstruction model, 17–20, 208
3D fingerprints, 3, 5, 9–13, 59–73, 208, 210, 212
3D finger surface, 12
3D minutiae, see Minutiae
3D point cloud, see Point cloud
Touchless 3D fingerprints, 9, 208
Touchless fingerprint recognition, 34, 35, 52, 55
Touchless multi-view fingerprints, 35–37, 48, 52

W
Weighted random sample consensus (WRANSAC), 167, 174–177, 180, 181, 183, 185, 186, 210