Nowadays, insecurity is a growing concern in many sectors, as are the computing techniques deployed to counter it: access control to computers, e-commerce, banking, etc. There are two traditional ways of identifying an individual. The first method is knowledge-based: it relies on something the individual knows, such as the PIN code used to activate a mobile phone. The second method is based on the possession of a token: an identity document, a key, a badge, etc. These two methods of identification can be used in a complementary way to obtain increased security, as in bank cards. However, each has its weaknesses. In the first case, the password can be forgotten or guessed by a third party. In the second case, the badge (or ID or key) may be lost or stolen. Biometric features offer an alternative to these two identification modes. Their advantage is that they are universal, measurable, unique, and permanent. Applications using biometrics fall into two broad classes: those that make everyday life easier and those that prevent fraud.
The increasing performance of computers over the last decade has stimulated the development of general-purpose computer vision algorithms. Object recognition is one of the central problems of computer vision and receives special attention, driven by the desire to create artificial intelligent systems. The first step toward any kind of intelligence is perception, followed by reasoning and action.
Human perception is largely visual. Since intelligent artificial systems are primarily inspired by human perception and reasoning, visual perception is an important source of information for many potential systems.
Recently, there has been rising interest in eye tracking technology. This is mainly due to the industrial growth of domains such as augmented reality, smart cars, and web application testing, for which solid eye tracking technology is essential. Eye movement recognition, combined with other biometrics such as voice recognition, can enable smooth interaction with virtual environments.
A good example of a smart system is the autonomous car. It perceives the surrounding world and road signs while adapting its behavior to changing situations. Such a car contains many different sensors that help it perceive the necessary information. Visual perception of the surrounding world is among the most important: it can be used to recognize pedestrians in the street, cars, animals, or even unspecified objects on the road that could pose a potential threat to human life.
Improving and developing object recognition algorithms will benefit not only artificial intelligent systems but many other useful applications in today's world. Examples extend to the tourism industry, where augmented reality applications (Figure 1) are becoming increasingly popular, especially with the widespread use of smartphones. Video surveillance is another natural extension of object detection algorithms, given the need for quick and timely analysis of the video scenes captured by cameras.
After analyzing the problems associated with recognition tasks, we believe that imitating the human visual perception system is a natural direction to follow. The first moment of human comprehension of an image is a very general activity that identifies broad categories (buildings, people, cars, etc.). After getting the big picture, attention focuses on the things of interest. While focusing, humans observe objects of interest in more detail, seeing and recognizing more features. A feature is a general term for a particular part of an object that enriches the description of its appearance. Humans have special predispositions toward certain objects (e.g., faces) and situations (mainly involving danger and movement) that they are more sensitive to recognizing. A typical situation: you see someone far away and can recognize that it is a person; as you get closer and focus on this person, you recognize more and more elements that make it possible to determine whether it is someone you know and to recall their name. Humans can perform instance-level recognition, as in this example, but they must first determine the object category to narrow the subsequent search.
Biometrics has been a concern for centuries, and proving one's identity reliably has been attempted with several techniques. Since prehistory, humans have known the uniqueness of fingerprints, so that a fingerprint signature was sufficient to prove an individual's identity. Indeed, two centuries before Christ, the Emperor Ts-In-She authenticated certain seals with a fingerprint.
At the end of the nineteenth century, in France, Alphonse Bertillon took the first steps toward scientific policing. He proposed the first biometric method that can be described as a scientific approach: bertillonage allowed the identification of criminals through several physiological measurements.
Earlier, in the second half of the nineteenth century, biometry had been rediscovered by William James Herschel, an English officer in India who had the idea of having his contractors sign with their fingerprints so as to identify them easily in case of unhonored contracts. As a result, police departments began using fingerprints as a unique and reliable feature to identify an individual.
Biometrics is constantly growing, especially in the field of secure identity documents such as national identity cards, passports, and driving licenses. The technology is also running on new platforms, including microprocessor-based smart cards.
The biometric market has developed greatly thanks to the many advances and innovations this field has seen in recent decades. This development continues to accelerate as a result of the security concerns of several countries, which have pushed investment in this area and the widespread use of biometric solutions in several social and legal fields.
As the statistics in Figure 3 show, between 2007 and 2015 there was a considerable increase in the private sector's share of the market, due to the growing need for biometric solutions in this sector, especially among smartphone and camera manufacturers.
According to ABI Research [2], the global biometric market will break the $30 billion mark by 2021, 118% higher than the 2015 market. In this context, consumer electronics, and smartphones in particular, are boosting the biometric sector: two billion embedded fingerprint sensors are expected to ship in 2021, an average annual increase of 40% over 5 years.
To evaluate the accuracy of a biometric system, and thereby measure its performance, numerous attempts are made on the system and all similarity scores are saved.
By applying a variable score threshold to the similarity scores, pairs of false recognition rate (FRR) and false acceptance rate (FAR) values can be calculated. The false recognition rate (FRR) measures the likelihood that the biometric system will incorrectly reject an access attempt by an authorized user; it is stated as the ratio of the number of false rejections to the number of genuine identification attempts. The false acceptance rate (FAR), on the other hand, measures the likelihood that the system will incorrectly accept an access attempt by an unauthorized user; it is stated as the ratio of the number of false acceptances to the number of impostor identification attempts.
The results are presented either as such pairs, i.e., the FRR at a certain FAR level, or as a graph such as Figure 5. The rates can be expressed in several ways, for example, as percentages (1%), fractions (1/100), decimals (0.01), or powers of ten (10⁻²). When comparing two systems, the more accurate one shows a lower FRR at an equal FAR level. Some systems do not report the similarity score, only the decision. In this case, a performance evaluation yields only a single FRR/FAR pair (rather than a continuous curve). If the mode of operation (the security level) is adjustable, i.e., there is a means of controlling the internally used score threshold, the evaluation can be performed repeatedly in different modes to obtain further FRR/FAR pairs. A minimal sketch of this threshold sweep is given below.
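To make the threshold sweep concrete, the following Python sketch computes FRR/FAR pairs from two sets of similarity scores. It is a minimal illustration under assumed conditions: the function name frr_far_curve, the match-on-greater-or-equal convention, and the synthetic score distributions are all illustrative choices, not taken from the chapter.

```python
import numpy as np

def frr_far_curve(genuine_scores, impostor_scores, thresholds):
    """Compute (threshold, FRR, FAR) triples for a series of score thresholds.

    A comparison counts as a match when its similarity score is greater
    than or equal to the threshold.
    """
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    curve = []
    for t in thresholds:
        frr = np.mean(genuine < t)    # genuine attempts falsely rejected
        far = np.mean(impostor >= t)  # impostor attempts falsely accepted
        curve.append((t, frr, far))
    return curve

# Toy example with synthetic scores: genuine comparisons score higher on average.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, size=1_000)   # same-person comparisons
impostor = rng.normal(0.4, 0.1, size=5_000)  # different-person comparisons
for t, frr, far in frr_far_curve(genuine, impostor, [0.5, 0.6, 0.7]):
    print(f"threshold={t:.2f}  FRR={frr:.4f}  FAR={far:.4f}")
```

Raising the threshold trades FAR for FRR, which is exactly the trade-off a DET graph visualizes.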
The biggest disadvantage of technology evaluations is that they do not necessarily reflect the final conditions of use of the system. For this reason, it is important to collect a set of samples under the target system's conditions of use when preparing an assessment.
The enrolled samples used in technology evaluations are collected in databases. Data collection is performed with a group of volunteers, at least some of whom provide multiple acquisitions of the same biometric modality (e.g., the same finger) so that genuine attempts are available. To make collection efficient, samples of several objects can be collected from each volunteer, for example, all ten fingers. The characteristics of the database have a great impact on the results of an evaluation: besides the capabilities of the biometric algorithm itself, the amount of information available to characterize the objects strongly influences the outcome.
To make an assertion such as FRR 1% @ FAR 1/1,000,000 (i.e., when the system operates in a mode where one out of one million impostor attempts is, falsely, considered a match, one percent of the genuine attempts would fail), at least one million impostor attempts (a user's sample compared against another person's template) are required. It is not difficult to understand that the uncertainty of such an assertion would be rather high: the result depends heavily on how the two most similar samples in the database are scored. When reading and comparing DET (detection error trade-off) graphs, it is important to understand that the uncertainty is higher toward the edges of the plot. The number of comparisons made is only one factor affecting confidence; the key to better statistical significance is to make as many uncorrelated attempts as possible.
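As a rough illustration of why confidence degrades at such extreme operating points, the sketch below estimates a 95% confidence interval for an error rate observed over n independent trials, using the exact binomial (Clopper-Pearson) interval via scipy's beta distribution. Treating the attempts as independent is itself an assumption, and the numbers are purely illustrative.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact binomial (Clopper-Pearson) confidence interval for an error
    rate, given k observed errors out of n independent trials."""
    lower = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    upper = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lower, upper

# One false acceptance observed in one million impostor attempts: the
# 95% interval around the FAR estimate of 1e-6 spans about two orders
# of magnitude, roughly (2.5e-8, 5.6e-6).
print(clopper_pearson(1, 1_000_000))
```

This is why a single million-attempt evaluation supports only a weak claim about a 1/1,000,000 FAR operating point.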
Detection of dynamic shapes is a very important research area that is evolving rapidly in the field of image processing. The goal is to recognize objects in an image, or in a sequence of images, from information relating to their shapes. Shape is indeed one of the most discriminating features in an image. However, the description and representation of a shape remain a major challenge for the recognition task.
The quality of a descriptor lies in its discriminative power: its ability to distinguish different shapes reliably despite geometric variations such as translation and rotation.
Moreover, a reliable descriptor must withstand the various degradations that affect the shape of an object, such as noise and distortion, which can alter the shape and make the recognition task more complicated.
Shape representation and description techniques can generally be split into two main classes: contour-based methods and region-based methods. This classification depends on whether the shape features are extracted from the outline only or from the entire region of the shape. Within each category, the approaches divide into global approaches and local (structural) approaches, depending on whether the representation covers the whole shape or only parts of it (primitives). The approaches can also be distinguished by the space in which the shape characteristics are computed: the spatial domain or a transform domain. Global methods are not always robust against occlusions and image noise. In addition, they require a complete and correct segmentation of the objects in the images, whereas in general the segmentation process partitions objects into regions or contour parts that do not necessarily correspond to whole objects.
Contour-based approaches exploit only the boundary of the object to characterize its shape, ignoring its inner content. The most commonly used representation in contour-based recognition methods is the shape signature [4]. For a given shape, the signature is essentially a one-dimensional (1D) function derived from the contour, using a scalar value such as the radial (centroid) distance, angle, curvature, or velocity. Note that the signature of an entire shape (closed curve) is often a periodic function; this is not the case for a part of a shape (open curve), whose two ends are not contiguous. Contour-based descriptors include Fourier descriptors [5, 6], wavelet descriptors [7, 8], multiscale curvature [9], the shape context [10], contour moments [11], and the chain code [12, 13]. Since these descriptors are computed using only the contour pixels, their computational complexity is low and their feature vectors are generally compact. A minimal sketch combining a shape signature with Fourier descriptors is given below.
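As a concrete illustration, the following Python sketch computes the centroid-distance signature of a closed contour and derives Fourier descriptors from it. Keeping only the magnitudes of the Fourier coefficients, normalized by the DC component, is one common way to obtain invariance to rotation, starting point, and scale; the sampling scheme and normalization are illustrative choices, not a prescription from the chapter.

```python
import numpy as np

def centroid_distance_signature(contour):
    """1D shape signature: distance from each contour point to the centroid.

    contour: (N, 2) array of (x, y) points sampled along a closed curve.
    """
    centroid = contour.mean(axis=0)
    return np.linalg.norm(contour - centroid, axis=1)

def fourier_descriptors(signature, n_coeffs=10):
    """Fourier descriptors of a shape signature.

    Keeping only magnitudes discards phase, which makes the descriptor
    invariant to rotation and to the starting point on the contour;
    dividing by the DC component adds scale invariance.
    """
    spectrum = np.abs(np.fft.fft(signature))
    return spectrum[1:n_coeffs + 1] / spectrum[0]

# Toy example: an ellipse versus a rotated, translated copy of the same
# ellipse traversed from a different starting point.
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
ellipse = np.column_stack([3 * np.cos(t), np.sin(t)])
angle = 0.7
rot = np.array([[np.cos(angle), -np.sin(angle)],
                [np.sin(angle),  np.cos(angle)]])
moved = ellipse @ rot.T + np.array([5.0, -2.0])  # rotate, then translate
fd1 = fourier_descriptors(centroid_distance_signature(ellipse))
fd2 = fourier_descriptors(centroid_distance_signature(np.roll(moved, 31, axis=0)))
print(np.allclose(fd1, fd2))  # True: the descriptor is unchanged
```

The descriptor is compact (here, ten numbers per shape), which reflects the low computational cost of contour-based methods noted above.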
In region-based approaches, all the pixels of the object are considered in the characterization of the shape. These methods aim to exploit not only the shape boundary but also the inner region of the shape. The majority of region-based methods use moment descriptors, such as Zernike moments [14], Legendre moments [15], or invariant geometric moments [16]. Other methods include grid descriptors [17] and the shape matrix [18]. Since a region-based descriptor makes use of all the pixels constituting the shape, it can effectively describe various shapes in a single descriptor. However, region-based feature vectors are usually large, and computing them remains time consuming.
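To give a flavor of invariant geometric moments, the sketch below computes normalized central moments of a binary shape mask and the first two Hu-style invariants built from them: central moments remove translation, the normalization removes scale, and these particular combinations are also rotation invariant. The helper names and the toy image are illustrative assumptions.

```python
import numpy as np

def normalized_central_moment(img, p, q):
    """Normalized central moment eta_pq of a binary shape mask.

    Central moments are translation invariant; dividing by
    mu00 ** (1 + (p + q) / 2) adds scale invariance.
    """
    ys, xs = np.nonzero(img)
    x_bar, y_bar = xs.mean(), ys.mean()
    mu_pq = np.sum((xs - x_bar) ** p * (ys - y_bar) ** q)
    mu_00 = xs.size  # for a binary mask, mu00 is the area
    return mu_pq / mu_00 ** (1 + (p + q) / 2)

def hu_first_two(img):
    """First two Hu moment invariants (translation, scale, and rotation invariant)."""
    eta20 = normalized_central_moment(img, 2, 0)
    eta02 = normalized_central_moment(img, 0, 2)
    eta11 = normalized_central_moment(img, 1, 1)
    phi1 = eta20 + eta02
    phi2 = (eta20 - eta02) ** 2 + 4 * eta11 ** 2
    return phi1, phi2

# Toy example: a filled rectangle and a translated copy give equal invariants.
img = np.zeros((100, 100), dtype=np.uint8)
img[20:50, 30:80] = 1
shifted = np.roll(img, (15, -10), axis=(0, 1))
print(hu_first_two(img))
print(hu_first_two(shifted))  # same values: translation has no effect
```

Note that the computation touches every pixel of the region, which illustrates why region-based descriptors are costlier than contour-based ones.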
It should be emphasized that contour-based shape description is often considered more relevant than region-based description, because the shape of an object is essentially distinguished by its boundary; in most cases, the central part of the object does not contribute much to pattern recognition [13].
In this chapter, we presented different biometric techniques used in the industrial world, as well as their performance.
We started with an overview of biometrics and biometric systems. Then we presented the different issues and challenges related to the implementation of such systems.
After that, we presented a performance evaluation of different biometric systems in light of the issues and challenges previously stated, followed by an overview of some important evaluation elements such as databases and confidence levels. Furthermore, a detailed analysis of the application domains of several biometric techniques was presented, with a focus on eye movement tracking.
Finally, the different approaches to the recognition of dynamic and planar shapes were discussed in the last section.
We have no conflicts of interest to disclose.