Facial recognition and biometrics: your smartphone knows you so well (Part 1)
Over the last few years, we have gradually been plunged into a world where biometrics is part of our everyday lives, often without being really aware of it: most of its spread has come through fun, playful apps. It is worth looking at how the technology works, and above all understanding it, because that is what tells us whether we should be worried about this evolution.
What do Alexa, a smartphone, a surveillance camera and a palm scanner have in common? It is not the silicon, even though silicon is found in all of these devices. It is that they can all identify us: one by a spoken phrase, another by a fingerprint, the others by an image. And to identify us, all of them rely on what we are rather than on what we know, such as our passwords. Recognizing characteristics specific to each individual and checking them digitally to perform identification or authentication is what is commonly called biometrics.
Biometrics is used everywhere and by everyone
Smartphone use has become an everyday occurrence. We all have our eyes riveted on our phones for at least an hour a day. A phone contains a lot of data that is more or less freely and easily accessible to third parties, so we tend to lock it. But given how often we access the screen, we needed an efficient and quick way to unlock it.
In many films, restricted areas are protected by biometric access control: voice recognition, retinal scanning, or some other biometric check.
Figure 1: Illustration of a retinal scan and the elements used
Figure 2: Illustration of identification
Figure 3: Illustration of authentication
Recognition in five steps
The application of biometrics to identification, or authentication, is divided into several steps. The first one is acquisition. An image of the scene is captured, and the analog information (the light from the scene) is converted into digital information that a computer system can use: a spatial distribution of intensity levels. This may sound a bit abstract, so let's simplify it: if we compare the facade of an office building to a camera sensor, each window would be a pixel in the sensor. Offices lit in different ways, and the scenes behind them, form an image. The camera sensor works the same way: it builds a mosaic. Each individual element of the mosaic does not represent much, but the whole forms an image.
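The mosaic analogy can be made concrete with a toy sketch (the pixel values below are invented for illustration): a digital grayscale image is just a grid of numbers, one brightness level per pixel.

```python
# A toy 4x4 grayscale "image": each number is one pixel's brightness
# (0 = black, 255 = white). Values here are made up for illustration.
image = [
    [ 10,  10, 200,  10],
    [ 10, 200, 200, 200],
    [ 10,  10, 200,  10],
    [ 10,  10, 200,  10],
]

height = len(image)     # rows of "windows" in the facade
width = len(image[0])   # "windows" per row
print(f"{width}x{height} image, {width * height} pixels")
```

Like the office windows, a single pixel says very little; it is the pattern across the whole grid that forms the picture.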
After that comes the detection of the human face in the image. The computer system looks for anything that roughly resembles a face: an "oval" element, with "eyes" and "a mouth" represented by various shapes.
The image will then be transformed...
Figure 4: Image of a created face
It will then be processed to achieve this result.
Figure 5: Image with edge filtering and binarization
By looking for two symmetrical dark spots for the eyes, the overall shape of the contour, and the dark line marking the mouth, the system can determine where the face is.
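As a rough illustration of this step (a minimal sketch, not the actual algorithm a phone uses), binarization can be written as a simple threshold on pixel brightness:

```python
def binarize(image, threshold=128):
    """Map a grayscale image to a black/white mask: pixels darker than
    the threshold become 1 (a dark feature), lighter ones become 0."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

# Hypothetical face-like patch: two dark "eyes" on top, a dark "mouth" below.
patch = [
    [200,  30, 200,  30, 200],
    [200, 200, 200, 200, 200],
    [200,  40,  40,  40, 200],
]
mask = binarize(patch)
for row in mask:
    print(row)
```

In the resulting mask, the two symmetrical 1s (the eyes) and the horizontal run of 1s (the mouth) are exactly the kind of dark spots and lines the detector searches for.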
Once the position of the face is detected, a more precise analysis is performed. It takes into account the distance between the eyes, the position, size and shape of the ears, and other variables. From these elements, the system builds a schema of the face.
Figure 6: Image with edge enhancement and edge detection only.
A schema is a set of data that can be stored as a table or in another format. Once created, it is compared to the schemas already present in the database.
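A minimal sketch of that comparison, assuming a schema is simply a vector of measured distances (the names and values below are invented):

```python
import math

def distance(schema_a, schema_b):
    """Euclidean distance between two schemas (feature vectors):
    the smaller the distance, the more alike the two faces."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(schema_a, schema_b)))

# Hypothetical schemas: (eye spacing, mouth width, face height) in pixels.
database = {
    "alice": (62.0, 48.0, 180.0),
    "bob":   (55.0, 51.0, 172.0),
}
probe = (61.5, 47.0, 181.0)  # schema extracted from the new image

best_match = min(database, key=lambda name: distance(probe, database[name]))
print(best_match)
```

Real systems use far richer schemas (dozens to hundreds of values) and more robust comparison measures, but the principle is the same: find the stored schema closest to the one just extracted.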
Figure 7 & 8: Images extracted from a processing on ImageJ and a table of analysis results
After processing, we can extract a lot of information from the image and store it in a detailed table. This table and the processing operations were produced with free software on a personal computer. More powerful software and algorithms exist and can do a much better job than what we did here in a minute on a single table.
A threshold match rate is set to decide whether the recognition is accepted or rejected: the system determines whether two items are similar enough to be declared the same.
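That accept/reject decision can be sketched in a few lines (the 0.80 threshold is an arbitrary example, not a standard value):

```python
def decide(match_score, threshold=0.80):
    """Accept only if the match score (0 = no match, 1 = identical)
    clears the threshold. Raising the threshold reduces false accepts
    at the cost of more false rejects."""
    return "accepted" if match_score >= threshold else "rejected"

print(decide(0.93))  # a close match
print(decide(0.42))  # too different
```

Choosing the threshold is a trade-off: too low and strangers may unlock your phone; too high and the phone may refuse its own owner.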
This process has been presented at a high level. Keep in mind that, in detail, it is more complex than it appears: the technical steps require a strong understanding of what an image is, what it is composed of, and above all what to look for in it (and not only that).
When you pick up your smartphone and it unlocks at the sight of your face, it is following the process described above. The same goes for pressing your finger on the fingerprint reader to authenticate yourself.
Facial recognition is used to determine whether a person is who they claim to be. It can also be used to find out who a person is, ensuring they are not pretending to be somebody else. The use that comes immediately to mind is security: facial recognition allows identification without a keypad code (which can be forgotten) or a key (which can be lost or stolen). The operation has been presented here in a simplified way, in five steps, but it is a set of techniques that are all essential to extracting information and processing it.