Since the introduction of the Vision library in Play Services 8.1, developers have been able to easily locate faces in images or video. Given a picture that contains faces, you can gather information about each one, such as its position, whether it is smiling, whether its eyes are open or closed, and the positions of specific facial landmarks. This information is useful for many applications. For example, a camera app could use it to take a photo only when everyone is smiling with their eyes open, or to add playful effects such as a unicorn horn on a person's head. Note, however, that this is face detection, not face recognition: the API can tell you that a face is present and describe it, but it cannot tell you whether two photos show the same person. This tutorial uses the Face Detection API to analyze a static image, identify the people in it, and draw graphics over them. All of the code used in this tutorial can be found on GitHub.

1. Project Configuration

First, to add the Vision library to your project, you need to import Play Services 8.1 or higher. This tutorial imports only the Play Services Vision library. Open your project's build.gradle file and add the following compile dependency.
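A minimal sketch of that dependency declaration; the exact version number should match the Play Services release you are targeting (8.1.0 or higher).

dependencies {
    compile 'com.google.android.gms:play-services-vision:8.1.0'
}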
Once you have included Play Services in your project, you can close build.gradle and open AndroidManifest.xml. Add the following meta-data element to your manifest to declare the face detection dependency, which lets the Vision library know that your app will use it.
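A sketch of that declaration; it goes inside the application element of the manifest.

<meta-data
    android:name="com.google.android.gms.vision.DEPENDENCIES"
    android:value="face" />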
Once AndroidManifest.xml is configured, you can close the file. Next, create a new class file, FaceOverlayView.java. This class extends View and contains the face detection logic: it displays the image being analyzed and draws on top of it to illustrate the results. For now, we add the member variables and implement the constructors. A Bitmap member holds the image to be analyzed, and a SparseArray<Face> holds the faces found in that image.
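A sketch of the class under those assumptions (member names such as mBitmap and mFaces are illustrative):

import android.content.Context;
import android.graphics.Bitmap;
import android.util.AttributeSet;
import android.util.SparseArray;
import android.view.View;

import com.google.android.gms.vision.face.Face;

public class FaceOverlayView extends View {

    // Image to analyze and the faces detected in it.
    private Bitmap mBitmap;
    private SparseArray<Face> mFaces;

    public FaceOverlayView(Context context) {
        this(context, null);
    }

    public FaceOverlayView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public FaceOverlayView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
    }
}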
Then we add a setBitmap(Bitmap bitmap) method to FaceOverlayView. For now this method simply stores the bitmap; later we will extend it to analyze the bitmap as well.
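In its initial form, the method could look like this:

public void setBitmap(Bitmap bitmap) {
    mBitmap = bitmap;
}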
Next, you need a bitmap image. I've added one to the sample project on GitHub, but you can use any image you like and see how well it works. Once you've chosen your image, place it in the res/raw directory. This tutorial assumes the image is named face.jpg. After putting the image in res/raw, open res/layout/activity_main.xml and reference a FaceOverlayView in the layout so it is displayed in MainActivity.
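A minimal layout sketch; the package name com.example.facedetection is a placeholder that must match wherever you placed FaceOverlayView, and the face_overlay id is likewise illustrative.

<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <com.example.facedetection.FaceOverlayView
        android:id="@+id/face_overlay"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</FrameLayout>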
With the layout defined, open MainActivity and get a reference to the FaceOverlayView instance in onCreate(). Read face.jpg from the raw folder through an input stream, decode it into a bitmap, and then pass the bitmap to the custom view by calling FaceOverlayView's setBitmap method.
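A sketch of that onCreate() logic, assuming the layout above with a FaceOverlayView whose id is face_overlay:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.support.v7.app.AppCompatActivity;

import java.io.InputStream;

public class MainActivity extends AppCompatActivity {

    private FaceOverlayView mFaceOverlayView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mFaceOverlayView = (FaceOverlayView) findViewById(R.id.face_overlay);

        // Decode res/raw/face.jpg into a Bitmap and hand it to the custom view.
        InputStream stream = getResources().openRawResource(R.raw.face);
        Bitmap bitmap = BitmapFactory.decodeStream(stream);

        mFaceOverlayView.setBitmap(bitmap);
    }
}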
2. Detecting Faces

Now that your project is set up, it's time to start detecting faces. Create a FaceDetector object in the setBitmap(Bitmap bitmap) method. We do this through FaceDetector.Builder, which lets you set several parameters that control how fast faces are detected and what additional data the FaceDetector generates. Which settings you choose depends on what your application does: for example, enabling landmark detection makes face detection noticeably slower. As with most things in programming, there are trade-offs. If you want to learn more about FaceDetector.Builder, you can look up the official documentation on the Android Developers website.
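A sketch of building the detector inside setBitmap(); the options shown here (tracking disabled for a static image, all landmarks, fast mode) are one reasonable configuration, not the only one:

FaceDetector detector = new FaceDetector.Builder(getContext())
        .setTrackingEnabled(false)               // static image, no tracking across frames
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setMode(FaceDetector.FAST_MODE)
        .build();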
You also need to check whether the FaceDetector is operational. The first time face detection is used on a device, Play Services has to download a set of small native libraries to handle your app's requests. Although this usually happens before the app needs them, it is still important to handle failure gracefully. If the FaceDetector is operational, convert the bitmap into a Frame object and pass it to the detect method to analyze the faces in it. When the analysis is complete, release the detector to avoid leaking memory. Finally, call invalidate() to trigger a refresh of the view.
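Continuing inside setBitmap(), a sketch of that flow:

if (!detector.isOperational()) {
    // The native face detection libraries are not available yet.
    // A real app should notify the user or retry later.
} else {
    // Wrap the bitmap in a Frame and run detection on it.
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    mFaces = detector.detect(frame);

    // Release the detector once we are done with it.
    detector.release();
}

// Trigger onDraw() so the results are rendered.
invalidate();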
Now that you have detected the faces in your image, you can use them. For example, you can draw a box around each detected face. After invalidate() is called, we can add the necessary drawing logic to the onDraw(Canvas canvas) method. We need to make sure the bitmap and face data are valid, draw the bitmap onto the canvas, and then draw a box around each detected face. Since different devices have different display sizes, you also need to keep track of the bitmap's scale factor so the image and the boxes are always drawn at the correct size.
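A sketch of that onDraw() override in FaceOverlayView, assuming the drawBitmap() and drawFaceBox() helpers described next:

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    if (mBitmap != null && mFaces != null) {
        double scale = drawBitmap(canvas);
        drawFaceBox(canvas, scale);
    }
}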
The drawBitmap(Canvas canvas) method draws the image onto the canvas, sized to fit it, and returns the scale factor so the other drawing methods can use it.
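A sketch of that helper:

private double drawBitmap(Canvas canvas) {
    double viewWidth = canvas.getWidth();
    double viewHeight = canvas.getHeight();
    double imageWidth = mBitmap.getWidth();
    double imageHeight = mBitmap.getHeight();

    // Scale the image so it fits entirely within the view.
    double scale = Math.min(viewWidth / imageWidth, viewHeight / imageHeight);

    Rect destBounds = new Rect(0, 0,
            (int) (imageWidth * scale), (int) (imageHeight * scale));
    canvas.drawBitmap(mBitmap, null, destBounds, null);
    return scale;
}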
The drawFaceBox(Canvas canvas, double scale) method is more interesting. Each detected face is stored in mFaces along with its position, width, and height. This method draws a green rectangle at each detected face's position, sized from that width and height. You define a Paint object, loop over the SparseArray, read each face's position and dimensions, and draw a scaled rectangle onto the canvas.
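A sketch of that method; the color and stroke width are arbitrary choices:

private void drawFaceBox(Canvas canvas, double scale) {
    Paint paint = new Paint();
    paint.setColor(Color.GREEN);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(5);

    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        // Scale the face's position and size to match the drawn bitmap.
        float left = (float) (face.getPosition().x * scale);
        float top = (float) (face.getPosition().y * scale);
        float right = (float) (left + face.getWidth() * scale);
        float bottom = (float) (top + face.getHeight() * scale);

        canvas.drawRect(left, top, right, bottom, paint);
    }
}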
Run the application now and each detected face should be surrounded by a rectangle. Keep in mind that the Face Detection API is still very new, so it may not detect every face. You can tweak the settings in FaceDetector.Builder so it gathers more information, although there is no guarantee that will help.

3. Understanding Facial Landmarks

Facial landmarks are points of interest on a face. The Face Detection API does not rely on landmarks to detect a face; rather, it detects landmarks after a face has been found, which is why landmark detection is an optional setting enabled through FaceDetector.Builder. You can use these landmarks as an additional source of information, for example, to find where a subject's eyes are and react accordingly in your application. There are twelve landmarks that can be detected: the left and right eyes, the left and right ears, the left and right ear tips, the base of the nose, the left and right cheeks, the left and right corners of the mouth, and the bottom of the mouth. Which landmarks are actually detected depends on the angle of the face. For instance, someone facing to the side will only have one eye visible, which means the other eye will not be detectable; in general, the set of landmarks you can expect depends on the Euler Y angle of the face (how far it is turned to the left or right).
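As a concrete example of working with landmarks, a sketch of a hypothetical helper (the name getLandmarkPosition is illustrative) that looks up the position of a single landmark, such as the left eye, on a detected face:

// Hypothetical helper inside FaceOverlayView: returns the position of the
// first landmark of the given type (e.g. Landmark.LEFT_EYE), or null if it
// was not detected on this face.
private PointF getLandmarkPosition(Face face, int landmarkType) {
    for (Landmark landmark : face.getLandmarks()) {
        if (landmark.getType() == landmarkType) {
            return landmark.getPosition();
        }
    }
    return null;
}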
If you enabled landmark detection on the FaceDetector, using the landmark information is straightforward: call getLandmarks() on a face to get a list of Landmark objects that you can use directly. In this tutorial, a new method, drawFaceLandmarks(Canvas canvas, double scale), draws a small circle over each detected landmark; in onDraw(Canvas canvas), replace the call to drawFaceBox with drawFaceLandmarks. The method takes each landmark's position, scales it to match the drawn bitmap, and draws a circle centered on it.
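A sketch of drawFaceLandmarks(); the circle radius and color are arbitrary:

private void drawFaceLandmarks(Canvas canvas, double scale) {
    Paint paint = new Paint();
    paint.setColor(Color.GREEN);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(5);

    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        for (Landmark landmark : face.getLandmarks()) {
            // Scale each landmark position to match the drawn bitmap.
            int cx = (int) (landmark.getPosition().x * scale);
            int cy = (int) (landmark.getPosition().y * scale);
            canvas.drawCircle(cx, cy, 10, paint);
        }
    }
}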
Run the app again and each detected facial landmark should now be enclosed in a small green circle.

4. Additional Facial Data

A face's position and landmarks are useful, but the API can also give you more information about each detected face through the methods built into the Face object. The getIsSmilingProbability(), getIsLeftEyeOpenProbability(), and getIsRightEyeOpenProbability() methods return values between 0.0 and 1.0 indicating how likely it is that the person is smiling and has their left or right eye open; the closer the value is to 1.0, the more likely it is. You can also read the Euler Y and Z angles of each face. The Z Euler value is always reported, but you must use accurate mode during detection to receive the Y value. Here is an example of how to read these values.
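A sketch of reading these values, assuming detection has already filled mFaces (the helper name logFaceData is illustrative). Note that the smile and eye-open probabilities are only computed when classification is enabled on FaceDetector.Builder via setClassificationType(FaceDetector.ALL_CLASSIFICATIONS), and accurate mode is needed for the Euler Y value:

private void logFaceData() {
    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        Log.v("FaceData", "Smiling: " + face.getIsSmilingProbability());
        Log.v("FaceData", "Left eye open: " + face.getIsLeftEyeOpenProbability());
        Log.v("FaceData", "Right eye open: " + face.getIsRightEyeOpenProbability());
        Log.v("FaceData", "Euler Y: " + face.getEulerY());
        Log.v("FaceData", "Euler Z: " + face.getEulerZ());
    }
}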
Conclusion

In this tutorial, you learned about one of the main components of the Play Services Vision library: face detection. You now know how to detect faces in a still image, how to gather information about those faces, and how to find the important facial landmarks on each one. Using what you've learned, you can add an interesting feature to your own imaging application, track faces in a video feed, or do anything else you can think of.