.NET project on Multimodal Biometric Authentication System
A link to the project is given below.
Biometric systems make use of the physiological and/or behavioral traits of individuals for recognition purposes. These traits include fingerprints, hand geometry, face, voice, iris, retina, gait, signature, palm-print, ear, etc. Biometric systems that use a single trait for recognition (i.e., unimodal biometric systems) are often affected by several practical problems, such as noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. Multimodal biometric systems overcome some of these problems by consolidating the evidence obtained from different sources. These sources may be multiple sensors for the same biometric (e.g., optical and solid-state fingerprint sensors), multiple instances of the same biometric (e.g., fingerprints from different fingers of a person), multiple snapshots of the same biometric (e.g., four impressions of a user's right index finger), multiple representations and matching algorithms for the same biometric (e.g., multiple face matchers like PCA and LDA), or multiple biometric traits (e.g., face and fingerprint).
A Unimodal Biometric System (UBS) is usually more cost-efficient than a multimodal biometric system. However, it may not always be applicable in a given domain because of limitations and problems such as skin dryness, disease, poor data quality, pressure, dirt, and oil. Implementing authentication on a weighted multimodal system not only gives high efficiency and performance but also allows the administrator to adjust the ratio of weights as required.

Generally, feature matching (projecting the input onto a template) generates a score, and scores from different matchers may be non-homogeneous. In that case, to fuse two or more traits, score-level normalization (numerical scaling) is performed to overcome the incompatibility of the scores. In our system, the input is continuously projected onto the template to record a percentage of accuracy, or confidence, based on the least-distance (Euclidean distance) measurement in finding neighbors, specifically in the case of face verification. We record one hundred values and, in a divide-and-conquer fashion, store mean accuracy scores. Each of these scores is then multiplied by a floating-point weight 'n', typically less than 1, and added to another biometric score multiplied by '1 − n'.

For face verification we used a high-quality 1/4 CMOS sensor with 480K pixels (interpolated 8M pixels for still images), and for reading fingerprints we used an optical sensor with 0.14 s (continuous) / 0.20 s (snapshot) imaging speed. Face verification is based on the fundamental 2D model, Principal Component Analysis (PCA): a mathematical procedure that performs dimensionality reduction by extracting the principal components of multi-dimensional data. Fingerprint verification is based on minutiae extraction and eigenvector formation.
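The weighted score-level fusion described above can be sketched as follows. This is an illustrative Python sketch, not the project's .NET code: the function names, the example score lists, the weight value n = 0.6, and the acceptance threshold are all assumptions chosen for demonstration; only the min-max normalization, the n / (1 − n) weighted sum, and the Euclidean-distance measure come from the description above.

```python
def min_max_normalize(scores):
    """Numerically scale raw matcher scores into [0, 1] so that scores
    from non-homogeneous matchers become comparable (score-level
    normalization)."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse(face_score, finger_score, n=0.6):
    """Weighted fusion: n * face + (1 - n) * fingerprint, with 0 < n < 1.
    The administrator can tune n to weight one trait over the other."""
    return n * face_score + (1 - n) * finger_score

def euclidean(a, b):
    """Euclidean distance between two feature vectors; a smaller distance
    to the stored template means a higher confidence score."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical example: normalize raw scores from each matcher,
# then fuse the most recent pair and apply an assumed threshold.
face_raw = [12.0, 55.0, 80.0]    # e.g. PCA-space similarity scores
finger_raw = [0.2, 0.7, 0.9]     # e.g. minutiae match scores
face_n = min_max_normalize(face_raw)
finger_n = min_max_normalize(finger_raw)
fused = fuse(face_n[-1], finger_n[-1], n=0.6)
accepted = fused >= 0.5          # assumed acceptance threshold
```

In the actual system, the face score would be the mean of the recorded accuracy values and the weight n would be set by the administrator; the sketch only shows how normalization makes the two score ranges compatible before the weighted sum is taken.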