Introduction to Android Face Detection

Since the introduction of the Vision library in Play Services 8.1, developers have been able to easily locate faces in images or video. Given a picture that contains faces, you can collect information about each face, such as its position, whether it is smiling, whether its eyes are open or closed, and the locations of specific facial features.

This information is useful for many applications. For example, a camera app could use it to take a photo only when everyone is smiling with their eyes open, or to add playful effects, such as drawing a unicorn horn on a person's head. Note, however, that this is face detection, not face recognition: the API can detect faces and describe them, but it cannot determine whether two photos show the same person.

This tutorial uses the Face Detection API to analyze a static image, detect the people in it, and draw overlay graphics on top of them. All of the code used in this tutorial can be found on GitHub.

1. Project Configuration

First, to add the Vision library to your project, you need to import Play Services 8.1 or higher. This tutorial imports only the Play Services Vision library. Open your project's build.gradle file and add the following compile dependency.

compile 'com.google.android.gms:play-services-vision:8.1.0'

Once Play Services is included in your project, close build.gradle and open AndroidManifest.xml. Add the following meta-data entry to declare the face detection dependency, which lets the Vision library know that your app will use it.

<meta-data android:name="com.google.android.gms.vision.DEPENDENCIES" android:value="face" />

Once AndroidManifest.xml is configured, you can close the file. Next, create a new class file, FaceOverlayView.java. This class extends View and performs the face detection logic, displays the image being analyzed, and draws over the image to mark what was found.

Start by adding the member variables and implementing the constructors. The Bitmap member stores the bitmap data to be analyzed, and the SparseArray stores the faces found in the image.

import android.content.Context;
import android.graphics.Bitmap;
import android.util.AttributeSet;
import android.util.SparseArray;
import android.view.View;

import com.google.android.gms.vision.face.Face;

public class FaceOverlayView extends View {

    private Bitmap mBitmap;
    private SparseArray<Face> mFaces;

    public FaceOverlayView(Context context) {
        this(context, null);
    }

    public FaceOverlayView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public FaceOverlayView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
    }
}

Then, add a setBitmap(Bitmap bitmap) method to the FaceOverlayView class. For now this method only stores the bitmap object; later, it is where the bitmap data will be analyzed.

public void setBitmap(Bitmap bitmap) {
    mBitmap = bitmap;
}

Next, we need a bitmap image. I've added one to the sample project on GitHub, but you can use any image you like and see if it works. Once you've chosen your image, put it in the res/raw directory. This tutorial assumes the image is called face.jpg.

After placing the image in res/raw, open the res/layout/activity_main.xml file. This layout references a FaceOverlayView object so that it is displayed in MainActivity.

<?xml version="1.0" encoding="utf-8"?>
<com.tutsplus.facedetection.FaceOverlayView
    xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/face_overlay"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

After defining the layout file, open MainActivity and get a reference to the FaceOverlayView instance in onCreate(). Read face.jpg from the raw folder through an input stream, decode it into a bitmap, and then hand the bitmap to the custom view by calling setBitmap().

public class MainActivity extends AppCompatActivity {

    private FaceOverlayView mFaceOverlayView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mFaceOverlayView = (FaceOverlayView) findViewById(R.id.face_overlay);

        InputStream stream = getResources().openRawResource(R.raw.face);
        Bitmap bitmap = BitmapFactory.decodeStream(stream);

        mFaceOverlayView.setBitmap(bitmap);
    }
}

2. Detecting Faces

Now that your project is set up, it's time to start detecting faces. In the setBitmap(Bitmap bitmap) method, define a FaceDetector object. You create it with FaceDetector.Builder, which lets you set several parameters controlling how fast faces are detected and what additional data the FaceDetector generates.

The specific settings depend on the purpose of your application. For example, enabling facial landmark search makes face detection considerably slower; as with most things in programming, every option comes with trade-offs. If you want to learn more about FaceDetector.Builder, you can find the official documentation on the Android Developers website.

FaceDetector detector = new FaceDetector.Builder(getContext())
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setMode(FaceDetector.FAST_MODE)
        .build();

You also need to check whether the FaceDetector is operational. The first time a user runs face detection on a device, Play Services needs to download a set of small native libraries to process your application's requests. While this is usually done before the application starts, it is important to handle the case where it has failed (one possible approach is sketched after the next code block).

If the FaceDetector is operational, convert the bitmap data into a Frame object and pass it to the detect() function for face analysis. Once the analysis is complete, release the detector to prevent a memory leak. Finally, call invalidate() to trigger a view refresh.

if (!detector.isOperational()) {
    // Handle the contingency
} else {
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    mFaces = detector.detect(frame);
    detector.release();
}
invalidate();
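
The contingency branch above is left empty; how you handle it depends on your app. A minimal sketch, assuming you simply want to notify the user (the Toast and its message text are my own additions):

if (!detector.isOperational()) {
    // The native face detection libraries may still be downloading
    // or unavailable; tell the user so they can try again later.
    Toast.makeText(getContext(),
            "Face detection is not available right now. Please try again later.",
            Toast.LENGTH_LONG).show();
}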

Now that you have detected the faces in the image, you can use them. For example, you can draw a box around each detected face. The drawing logic goes in the onDraw(Canvas canvas) method, which runs after the invalidate() call. There, make sure the bitmap and face data are valid, draw the bitmap onto the canvas, and then draw a box around each face's location.

Because different devices have different resolutions, you also need to track the bitmap's scale factor to ensure the image is always displayed correctly.

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    if ((mBitmap != null) && (mFaces != null)) {
        double scale = drawBitmap(canvas);
        drawFaceBox(canvas, scale);
    }
}

The drawBitmap(Canvas canvas) method draws the image onto the canvas, sized to fit, and returns the scale factor for you to reuse.

private double drawBitmap(Canvas canvas) {
    double viewWidth = canvas.getWidth();
    double viewHeight = canvas.getHeight();
    double imageWidth = mBitmap.getWidth();
    double imageHeight = mBitmap.getHeight();
    double scale = Math.min(viewWidth / imageWidth, viewHeight / imageHeight);

    Rect destBounds = new Rect(0, 0, (int) (imageWidth * scale), (int) (imageHeight * scale));
    canvas.drawBitmap(mBitmap, null, destBounds, null);
    return scale;
}

The drawFaceBox(Canvas canvas, double scale) method is more interesting. Each detected face is stored in mFaces with its position information, and this method draws a green rectangle at each detected face's position, based on its width and height.

Define a Paint object, loop through the SparseArray to read each face's position, width, and height, and use those values to draw a rectangle on the canvas.

private void drawFaceBox(Canvas canvas, double scale) {
    // Paint should be defined as a member variable rather than
    // being created on each onDraw request, but left here for
    // emphasis.
    Paint paint = new Paint();
    paint.setColor(Color.GREEN);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(5);

    float left = 0;
    float top = 0;
    float right = 0;
    float bottom = 0;

    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        left = (float) (face.getPosition().x * scale);
        top = (float) (face.getPosition().y * scale);
        right = (float) scale * (face.getPosition().x + face.getWidth());
        bottom = (float) scale * (face.getPosition().y + face.getHeight());

        canvas.drawRect(left, top, right, bottom, paint);
    }
}

Run your application now and you should see a rectangle around each detected face. It is worth noting that the Face Detection API is still fairly new at the time of writing, so it may not detect every face. You can modify the configuration in FaceDetector.Builder to try to capture more information, though there is no guarantee that it will help; a sketch follows.
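
For example, a minimal sketch of a more thorough configuration, assuming you are willing to trade detection speed for accuracy (FaceDetector.ACCURATE_MODE is part of the API, but whether it finds more faces in your particular image is not guaranteed):

FaceDetector detector = new FaceDetector.Builder(getContext())
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        .setMode(FaceDetector.ACCURATE_MODE) // slower, but more thorough than FAST_MODE
        .build();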

3. Understanding Facial Features

Facial features, or landmarks, are special points of interest on a face. The Face Detection API does not rely on landmarks to detect a face; rather, it detects landmarks after the face has been found. That is why landmark detection is an optional setting, enabled through FaceDetector.Builder.

You can use these landmarks as an additional source of information; for example, if you need to know where a subject's eyes are, you can handle that accordingly in your application (see the sketch after this list). There are twelve landmarks that can be detected:

left and right eyes
left and right ears
left and right ear tips
base of the nose
left and right cheeks
left and right corners of the mouth
bottom of the mouth
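
For instance, a minimal sketch of finding a face's left eye by filtering on the landmark type (the helper name getLeftEyePosition is my own; it returns null when that landmark was not detected):

private PointF getLeftEyePosition(Face face) {
    for (Landmark landmark : face.getLandmarks()) {
        // Each Landmark reports its type, one of the twelve constants.
        if (landmark.getType() == Landmark.LEFT_EYE) {
            return landmark.getPosition();
        }
    }
    // No left eye was detected for this face (e.g. the head is turned away).
    return null;
}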

Which landmarks are detected depends on the angle of the detected face. For example, if someone is in profile, only one of their eyes will be visible, which means the other eye will not be detectable. The following table outlines which landmarks should be detectable based on the Euler Y angle of the face, that is, how far it is turned to the left or right.

Euler Angle Y    Visible landmarks
< -36°           left eye, left mouth corner, left ear, nose base, left cheek
-36° to -12°     left mouth corner, nose base, bottom of mouth, right eye, left eye, left cheek, left ear tip
-12° to 12°      right eye, left eye, nose base, left cheek, right cheek, left mouth corner, right mouth corner, bottom of mouth
12° to 36°       right mouth corner, nose base, bottom of mouth, left eye, right eye, right cheek, right ear tip
> 36°            right eye, right mouth corner, right ear, nose base, right cheek

If you have enabled landmark detection on the FaceDetector, using the landmarks is easy: just call a face's getLandmarks() function to get a list of Landmark objects that you can use directly.

In this tutorial, a new method, drawFaceLandmarks(Canvas canvas, double scale), draws a small circle over each detected landmark. In the onDraw(Canvas canvas) method, replace the drawFaceBox call with drawFaceLandmarks. This method takes each landmark's position as the center, adjusts it for the bitmap's scale, and draws a circle around the landmark.

private void drawFaceLandmarks(Canvas canvas, double scale) {
    Paint paint = new Paint();
    paint.setColor(Color.GREEN);
    paint.setStyle(Paint.Style.STROKE);
    paint.setStrokeWidth(5);

    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        for (Landmark landmark : face.getLandmarks()) {
            int cx = (int) (landmark.getPosition().x * scale);
            int cy = (int) (landmark.getPosition().y * scale);
            canvas.drawCircle(cx, cy, 10, paint);
        }
    }
}

After calling this method, each detected facial landmark should be enclosed in a small green circle.

4. Additional Facial Data

A face's position and landmarks are very useful, but you can get even more detection information through methods built into the Face class. The getIsSmilingProbability(), getIsLeftEyeOpenProbability(), and getIsRightEyeOpenProbability() methods return values from 0.0 to 1.0 indicating how likely it is that the face is smiling and that each eye is open; the closer the value is to 1.0, the more likely it is.
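
Note that these probabilities are only computed when classifications are enabled through FaceDetector.Builder; otherwise the getters return Face.UNCOMPUTED_PROBABILITY. A minimal sketch of a smile check under that assumption (the 0.8 threshold is an arbitrary choice of mine):

// Enable classifications when building the detector; without this,
// the smile and eye-open getters return Face.UNCOMPUTED_PROBABILITY.
FaceDetector detector = new FaceDetector.Builder(getContext())
        .setTrackingEnabled(false)
        .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
        .build();

// Later, for a detected Face:
float smiling = face.getIsSmilingProbability();
if (smiling != Face.UNCOMPUTED_PROBABILITY && smiling > 0.8f) {
    // This face is very likely smiling.
}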

You can also get the Euler values for the Y and Z axes of each face. The Euler Z value is always returned; to receive the Euler Y value, you must use accurate mode when detecting. Here is an example of how to get these values.

private void logFaceData() {
    float smilingProbability;
    float leftEyeOpenProbability;
    float rightEyeOpenProbability;
    float eulerY;
    float eulerZ;

    for (int i = 0; i < mFaces.size(); i++) {
        Face face = mFaces.valueAt(i);

        smilingProbability = face.getIsSmilingProbability();
        leftEyeOpenProbability = face.getIsLeftEyeOpenProbability();
        rightEyeOpenProbability = face.getIsRightEyeOpenProbability();
        eulerY = face.getEulerY();
        eulerZ = face.getEulerZ();

        Log.e("Tuts+ Face Detection", "Smiling: " + smilingProbability);
        Log.e("Tuts+ Face Detection", "Left eye open: " + leftEyeOpenProbability);
        Log.e("Tuts+ Face Detection", "Right eye open: " + rightEyeOpenProbability);
        Log.e("Tuts+ Face Detection", "Euler Y: " + eulerY);
        Log.e("Tuts+ Face Detection", "Euler Z: " + eulerZ);
    }
}

Conclusion

In this tutorial, you have learned about one of the main components of the Play Services Vision library: face detection. You now know how to detect faces in a still image, how to collect information about them, and how to find the important facial landmarks of each face.

Use what you've learned to add an interesting feature to your own imaging application, track faces in a video, or do anything else you can think of.
