Heartbeat Journey - Using the iPhone Camera to Detect Heart Rate (PPG)

[Background] Time flies, and in recent years devices that support heart rate monitoring have become more and more common. Everyone is busy using their phones to measure everything from the air to a bottle of Sprite, so I wanted to join in the fun. [0]

During this period, I completed a heart rate detection demo on iOS. As long as you hold your fingertip steadily over the phone's camera, it can collect your heart rate. After the demo was done, I encapsulated the heart rate detection into a component with default animations and sound effects, so it can easily be imported into other projects. In this article I will share the process of building the heart rate detection and the various difficulties I ran into along the way.

The main points involved in this article are:

  • AVCapture
  • Core Graphics
  • Delegate & Block
  • RGB -> HSV
  • Bandpass filtering
  • Pitch marking algorithm (TD-PSOLA)
  • Photoplethysmography (PPG)

Before we get started, let me show you the final product:

Heart rate detection ViewController

The picture above shows the main interface during heart rate detection.

During detection, the application captures each heartbeat peak in real time, calculates the corresponding heart rate, and reports it via delegate or block callbacks so the interface can display the corresponding animation and play sound effects.

0. Plot Overview

Well 😂, actually the background above is all made up. This demo was the task I was given on my first day at the company. I was a little confused when I first received it: I had assumed the first couple of days would just be reading documents, dragging controls, writing interfaces, and so on. Instead, I hadn't even installed Xcode yet when I suddenly received a heart rate detection task. I was immediately stressed out 😨 and hurried off to look for information.

Heart rate monitoring apps already existed when I was in my third year of high school. I clearly remember that at the time, young and ignorant, I assumed it was probably another rogue app thrown together by some unscrupulous developer out of boredom, so I downloaded it and tried it. I didn't expect that it could actually measure...

There are always villains who want to harm me

At that time I was quite shocked, so I searched Baidu for the principles behind this kind of application. That is why, this time around, I had a better sense of direction when looking for information.

I spent a day searching and found that there is still relatively little material online about phone-based heart rate detection. Still, after going through various sources I had a basic idea of how to implement it.

Task List

  • Heart rate detection

1. Overall Approach

Principle

First, let's talk about the principle behind detecting heart rate with a phone camera.

We know there are many wearable devices on the market with heart rate detection, such as various bands and watches. Essentially, the principle we are going to use is no different from the one those wearables use: both are based on photoplethysmography (PPG).

Green light from the Apple Watch's heart rate sensor

PPG tracks visible light (usually green light) reflected from or transmitted through human tissue. A visible light source illuminates the skin, and a photoelectric sensor collects the light coming back from it. PPG has two modes, transmissive and reflective. Ordinary bands and watches place the light source and sensor on the same side, which is the reflective mode, while the fingertip clips commonly seen in hospitals are usually transmissive, i.e. the light source and sensor sit on opposite sides.

The skin's reflectance is relatively stable, but the heart pumping blood makes the blood vessel volume change periodically, so the reflected light also fluctuates periodically. In areas rich in capillaries, such as the fingertips, this periodic fluctuation is easy to observe.

You can easily see this fluctuation with the naked eye using the iPhone's built-in camera: turn on the flash while recording a video, then lightly cover the camera with a finger, and the red image filling the screen will change in brightness along with your heartbeat, as shown below (please ignore the moiré pattern across the screen).

The light and dark changes of the camera image can be observed directly with the naked eye

As for why most wearables use green light as the light source, and whether the white light of the phone's flash causes any problem: green light simply gives a higher signal-to-noise ratio for heart rate detection, which helps, but white light also works perfectly well. For details, see the Zhihu question on the heart rate detection features of various smart wearables; I won't go into it here.

https://www.zhihu.com/question/27391584

My thoughts

Now that we know the flash and camera will serve as the light source and sensor for PPG, let's analyze the overall solution. The following is a flowchart I drew after gathering material.

Overall idea

  1. First we need to capture the camera data; for this step we can use AVCapture;
  2. Then, using some algorithm, compute a feature value for each frame of the image and store it in an array. The algorithm could take the red component directly, or convert to HSV before calculating;
  3. After a certain amount of data has accumulated, preprocess the data in that time window, for example filter it to remove some of the noise (see, for example, a blog post on the Butterworth filter);
  4. Next, calculate the heart rate. This step may involve some digital signal processing, such as peak detection and estimating the signal frequency. We could use vDSP from the Accelerate.framework; for its usage there is an answer on Stack Overflow (I did not end up using it; the reason is mentioned later);
  5. Finally, we get the heart rate.

2. Preliminary Implementation

After I had a rough plan, I decided to start implementing it.

1) Video stream acquisition

We mentioned earlier that we need AVCapture to capture the video stream. When using AVCapture, you first create an AVCaptureSession, which acts as the pipeline connecting the data input and output, and then create the input and the output respectively. To make things more intuitive, I first built a camera-like demo that simply renders the frames captured by AVCapture onto a layer.

1. Create an AVCaptureSession

Configuring an AVCaptureSession is a bit like committing a database transaction: before you start configuring, call [_session beginConfiguration]; after all the configuration work is done, call [_session commitConfiguration] to commit it.

Therefore, the entire configuration process is roughly like this:

    /** Create the session */
    _session = [AVCaptureSession new];

    /** Start configuring the AVCaptureSession */
    [_session beginConfiguration];

    /*
     * Configure the session
     * (create the input and output streams)
     * ...
     */

    /** Commit the configuration and establish the flow */
    [_session commitConfiguration];

    /** Start transmitting the data stream */
    [_session startRunning];

2. Create an input stream from the camera

To create an input stream from the camera, you first obtain the camera device and configure it. The most important part of the configuration here is turning on the flash (torch). In addition, lock parameters such as white balance and focus so that the feature values won't become unstable later because the camera keeps adjusting itself automatically.

    /** Get the camera device and configure it */
    AVCaptureDevice *device = [self getCameraDeviceWithPosition:AVCaptureDevicePositionBack];
    if ([device isTorchModeSupported:AVCaptureTorchModeOn]) {
        NSError *error = nil;
        /** Lock the device before configuring its parameters */
        [device lockForConfiguration:&error];
        if (error) {
            return;
        }
        [device setTorchMode:AVCaptureTorchModeOn];
        [device unlockForConfiguration]; // Unlock
    }

Note that the camera device must be locked before it can be configured, and configuration should only proceed once the lock succeeds. Also, before setting the flash or any other parameter, first check whether the current device supports the corresponding torch mode or feature, and only set it if it is supported.

In addition, there is one very important point about the camera configuration: remember to lower the flash brightness!!

Keeping the flash on for a long time heats the phone up, which is bad for the battery. During debugging I forgot to turn the flash off countless times, and only noticed once the whole phone was hot enough to burn my hand; I practically evolved into a Xiaomi~ So reduce the flash brightness as much as possible. In my tests, even at the minimum brightness I could still measure a clear heart rate.
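For reference, here is a minimal sketch of what the extra locking and low torch level might look like; the torch level of 0.1 and the choice to lock focus, exposure, and white balance are my assumptions, not necessarily the exact settings used in the demo:

    /** Sketch: lock auto-adjustments and keep the torch dim (values are illustrative) */
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        if ([device isFocusModeSupported:AVCaptureFocusModeLocked])
            device.focusMode = AVCaptureFocusModeLocked;
        if ([device isExposureModeSupported:AVCaptureExposureModeLocked])
            device.exposureMode = AVCaptureExposureModeLocked;
        if ([device isWhiteBalanceModeSupported:AVCaptureWhiteBalanceModeLocked])
            device.whiteBalanceMode = AVCaptureWhiteBalanceModeLocked;
        if ([device isTorchModeSupported:AVCaptureTorchModeOn]) {
            /** A small torch level keeps the phone cooler; 0.1 is an assumed value */
            [device setTorchModeOnWithLevel:0.1 error:nil];
        }
        [device unlockForConfiguration];
    }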

The next step is to create an input stream using the configured device:

    /** Create the input stream */
    NSError *error = nil;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                              error:&error];
    if (error) {
        NSLog(@"DeviceInput error: %@", error.localizedDescription);
        return;
    }

3. Create an output stream to AVCaptureVideoDataOutput

The output stream is built with the AVCaptureVideoDataOutput class. We create one and set its pixel output format to 32-bit BGRA, which appears to be the native format of the iPhone camera (@熊皮皮 pointed out that besides this format there are also two YUV formats). Later, when we read pixels from the image buffer, we read the pixel data in this same order (BGRA). The setting is passed as an NSDictionary.

We also need to set the AVCaptureVideoDataOutput delegate and create a serial (FIFO) dispatch queue on which the output stream is delivered.

    /** Create the output stream */
    AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
    NSNumber *BGRA32PixelFormat = [NSNumber numberWithInt:kCVPixelFormatType_32BGRA];
    NSDictionary *rgbOutputSetting;
    rgbOutputSetting = [NSDictionary dictionaryWithObject:BGRA32PixelFormat
                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey];
    [videoDataOutput setVideoSettings:rgbOutputSetting];     // Set the pixel output format
    [videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];  // Discard delayed frames
    dispatch_queue_t videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];

4. Connect to AVCaptureSession

After establishing the input and output streams, it is time to connect them to the AVCaptureSession!

Note that you must first check whether each one can be added, and only then add it, as shown below.

    if ([_session canAddInput:deviceInput])
        [_session addInput:deviceInput];
    if ([_session canAddOutput:videoDataOutput])
        [_session addOutput:videoDataOutput];

5. Implement the delegate protocol to obtain video frames

In the steps above we set self as the delegate of the AVCaptureVideoDataOutput, so now we implement the captureOutput:didOutputSampleBuffer:fromConnection: method of AVCaptureVideoDataOutputSampleBufferDelegate in self; it is called every time a new video frame arrives.

    #pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate & Algorithm

    - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection {
        /** Read the image buffer */
        CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        //
        // Here we can calculate
        // this frame's feature value . . .
        //

        /** Build a bitmap context over the BGRA buffer so it can be turned into a CGImage */
        CVPixelBufferLockBaseAddress(imageBuffer, 0);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(imageBuffer),
                                                     CVPixelBufferGetWidth(imageBuffer),
                                                     CVPixelBufferGetHeight(imageBuffer),
                                                     8,
                                                     CVPixelBufferGetBytesPerRow(imageBuffer),
                                                     colorSpace,
                                                     kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);

        /** Convert to a bitmap for drawing on the layer */
        CGImageRef quartzImage = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

        /** Draw on the layer */
        id renderedImage = CFBridgingRelease(quartzImage);
        dispatch_async(dispatch_get_main_queue(), ^(void) {
            [CATransaction begin];
            [CATransaction setDisableActions:YES];
            _imageLayer.contents = renderedImage;
            [CATransaction commit];
        });
    }

At this point we have a camera-like demo that displays the frames captured by the camera on screen. Next, we will calculate the feature value of each frame inside this delegate method.

2) Sampling (calculating feature values)

In the sampling step, the key question is how to turn an image into a single feature value.

I first collapse all the pixels into a single pixel (RGB):

Accumulate and synthesize a pixel
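A minimal sketch of this accumulation, assuming the CVPixelBufferRef delivered to the delegate is in 32-bit BGRA format (variable names are illustrative):

    /** Sketch: average every pixel of a BGRA frame into a single RGB value */
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    uint8_t *base     = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t   width    = CVPixelBufferGetWidth(imageBuffer);
    size_t   height   = CVPixelBufferGetHeight(imageBuffer);
    size_t   rowBytes = CVPixelBufferGetBytesPerRow(imageBuffer);

    double sumR = 0, sumG = 0, sumB = 0;
    for (size_t y = 0; y < height; y++) {
        uint8_t *row = base + y * rowBytes;
        for (size_t x = 0; x < width; x++) {
            uint8_t *pixel = row + x * 4;   /** BGRA memory layout: B, G, R, A */
            sumB += pixel[0];
            sumG += pixel[1];
            sumR += pixel[2];
        }
    }
    double count = (double)(width * height);
    double r = sumR / count, g = sumG / count, b = sumB / count;   /** the "one pixel" */
    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);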

Once reduced to a single pixel we only have the three RGB values, and things become much simpler. I made many attempts while designing the sampling algorithm.

I first tried simply using one of the R, G, or B components directly as the signal input, but the results were not ideal.

– HSV color space

I remembered the HSV color space from an earlier graphics class: it represents a color with three values, hue, saturation, and value (brightness).

HSV color space[5]

I reasoned that since the naked eye can see the image's color changing while RGB shows no obvious pattern, one of the three HSV dimensions ought to reflect the change. I converted to HSV and found that the hue H varies clearly with the pulse! So I decided to use the H value as the feature value.
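The conversion only needs the standard RGB-to-hue formula; a sketch, with r, g, and b normalized to [0, 1] (the helper name is mine):

    /** Sketch: hue (H) of an averaged RGB value, r/g/b in [0, 1] */
    static float HueFromRGB(float r, float g, float b) {
        float maxV  = fmaxf(r, fmaxf(g, b));
        float minV  = fminf(r, fminf(g, b));
        float delta = maxV - minV;
        if (delta < 1e-6f) return 0.0f;            /** grey: hue undefined, return 0 */
        float hue;
        if (maxV == r)      hue = 60.0f * fmodf((g - b) / delta, 6.0f);
        else if (maxV == g) hue = 60.0f * ((b - r) / delta + 2.0f);
        else                hue = 60.0f * ((r - g) / delta + 4.0f);
        if (hue < 0.0f) hue += 360.0f;
        return hue;                                /** degrees, 0-360 */
    }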

I simply used Core Graphics to draw a polyline of the H values directly on the image's layer:

Hue H changes with pulse

3) Heart rate calculation

To make the curve more intuitive, I processed the feature values a little and changed the scale of the horizontal axis, which gives the screenshot below. Now that the heart rate signal is stable and the peaks are obvious, we can start calculating the heart rate.

After scaling, the heart rate signal is stable

Initially I thought of processing the sample array with the Fast Fourier Transform (FFT). The FFT converts a time-domain signal into the frequency domain, giving the signal's distribution across frequencies, so the heart rate could be determined by finding the dominant frequency.

But perhaps because I lack the relevant signal processing background, after nearly two days of research I still couldn't get through documentation that reads like an advanced calculus textbook...

So I decided to use a brute force method to calculate the heart rate first, and then consider optimizing the algorithm after a usable demo comes out to see how it works.

From the curve above we can see that when the signal is stable, the peaks are fairly clear. So I thought I could set a threshold for peak detection: whenever the signal exceeds the threshold, that frame is judged to be on a peak. A simple state machine then handles the transitions between peak and trough, and the peaks can be detected.

Since AVCapture delivers 30 frames per second, each frame represents 1/30 s. I therefore only need to count how many frames elapsed from the first detected peak to the last, and how many peaks were detected, to compute the average period between peaks and hence the heart rate.

The idea is simple, but there is a problem: choosing the threshold. The prominence of the peaks is not constant; sometimes they are obvious, sometimes weak, so a fixed threshold cannot satisfy real detection. I therefore decided to determine a suitable threshold in real time from the upper and lower range of the heartbeat curve, and made the following changes:

  • Each time the heart rate is calculated, first find the maximum and minimum of the whole array to determine the range of the data's fluctuation.
  • Then determine the threshold as a percentage of this range.
  • In other words, a feature value is judged to be a peak only if it exceeds a certain percentage of the range of the whole data set.
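Putting these pieces together, a minimal sketch of the threshold-based counting might look like this; the 70% fraction and the function name are illustrative assumptions:

    /** Sketch: count peaks with a dynamic threshold and derive BPM from the peak spacing */
    static int HeartRateFromSamples(const float *samples, int count, float fps) {
        float minV = samples[0], maxV = samples[0];
        for (int i = 1; i < count; i++) {
            if (samples[i] < minV) minV = samples[i];
            if (samples[i] > maxV) maxV = samples[i];
        }
        /** Only values above a fixed fraction of the range count as the "peak" state */
        float threshold = minV + 0.7f * (maxV - minV);

        BOOL inPeak = NO;                   /** two-state machine: peak / trough */
        int peakCount = 0, firstPeak = -1, lastPeak = -1;
        for (int i = 0; i < count; i++) {
            if (!inPeak && samples[i] > threshold) {
                inPeak = YES;
                peakCount++;
                if (firstPeak < 0) firstPeak = i;
                lastPeak = i;
            } else if (inPeak && samples[i] < threshold) {
                inPeak = NO;
            }
        }
        if (peakCount < 2) return 0;        /** not enough beats to estimate a period */
        float framesPerBeat = (float)(lastPeak - firstPeak) / (float)(peakCount - 1);
        return (int)(60.0f * fps / framesPerBeat);   /** e.g. fps = 30 for the camera feed */
    }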

Following this method, I run the calculation over the accumulated data at regular intervals. I implemented the heart rate calculation in the demo and put together a simple interface; the general effect is shown below.

Preliminary implementation of heart rate detection Demo

There is still some false detection in practice, but heart rate detection finally works~ 🎉🎉🎉

3. Performance Optimization

After I had roughly implemented the heart rate detection, my lead asked me to optimize its performance and introduced me to Instruments (which I had never used before 🙊).

Task List

  • Performance Optimization
  • Encapsulate the component (in the form of delegate or block);
  • Provide two default animations;

I used Instruments to analyze CPU usage during heart rate detection and found it was very high, holding at around 50%~60%. That was within my expectations, because my algorithm really is brute force 😂: each frame is 1920×1080, and within 1/30 s those two-million-plus pixels have to be traversed and accumulated, converted into a bitmap for display on the layer, and every so often the heart rate has to be recalculated...

I analyzed the parts that took up the most CPU and summarized several areas that can be considered for optimization.

  • Reduce the sampling range
  • Reduce the sampling rate
  • Cancel AV output
  • Lower the resolution
  • Improve the algorithm and remove redundant calculations

1. Reduce the sampling range

The current sampling algorithm visits every pixel once. I wondered whether I could narrow the sampling range, for example sampling only a region in the middle. However, experiments showed that sampling a single region blurred the detected peaks, which suggests one region alone is not representative.

Then I thought of another approach. The color difference between adjacent pixels is very small, so I can skip pixels, sampling only every few rows and columns; this reduces the workload with little impact on the sampling quality.

Jumping sampling

The sampling pattern is shown in the figure above, with a constant controlling the spacing of each jump. In theory this reduces the per-frame time to 1/n² of the original, a big saving. After a few attempts, the CPU usage of the function containing the sampling algorithm dropped from 31% to 14%.
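Concretely, only the loop increments change compared with the earlier averaging sketch (kSampleStep is an assumed constant, and the sum/base/rowBytes names are reused from that sketch):

    /** Sketch: sample every kSampleStep-th row and column instead of every pixel */
    static const size_t kSampleStep = 4;    /** assumed spacing; tune as needed */

    size_t sampled = 0;
    for (size_t y = 0; y < height; y += kSampleStep) {
        uint8_t *row = base + y * rowBytes;
        for (size_t x = 0; x < width; x += kSampleStep) {
            uint8_t *pixel = row + x * 4;   /** BGRA */
            sumB += pixel[0];
            sumG += pixel[1];
            sumR += pixel[2];
            sampled++;                      /** divide the sums by `sampled`, not width*height */
        }
    }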

While analyzing CPU usage I noticed that, when accumulating R, G, and B separately inside the loop, the first component read took over 100 times longer than the others. At first I thought the red component might simply be large and expensive to add. Brother Mao suggested bit operations, but after switching to them the bottleneck remained, which puzzled me. Later I changed the order in which R, G, and B were accumulated and found the bottleneck had nothing to do with R: whichever component was read first became the bottleneck. I then realized this is likely a bottleneck in data transfer between the CPU and memory: the pixels sit in a large block of memory, so fetching the first byte of a pixel is slow, while the adjacent bytes fetched right after are probably already cached, making them around two orders of magnitude faster.

2. Reduce the sampling rate

Lowering the sampling rate means reducing the video frame rate. I vaguely remembered a theorem, Shannon's or someone else's, that roughly says the frequency of a signal can be recovered as long as the sampling rate is more than twice that frequency.

(As coderMoe pointed out, the proper name here is the Nyquist sampling theorem; Shannon was one of its contributors.)

A person's heart rate generally tops out around 160 beats per minute, i.e. under 3 Hz. In theory, a sampling rate of 6 frames per second is already enough to determine the frequency.

However, since my algorithm at the time was not particularly stable, I did not change the sampling rate.
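For reference, if one did want to lower the capture rate, the frame duration can be set directly on the camera device; the 10 fps below is an assumed value, still comfortably above the ~6 Hz minimum:

    /** Sketch: cap the capture rate at 10 fps (the value is illustrative) */
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.activeVideoMinFrameDuration = CMTimeMake(1, 10);
        device.activeVideoMaxFrameDuration = CMTimeMake(1, 10);
        [device unlockForConfiguration];
    }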

3. Cancel AV output

Previously, to make it easy to check the effect, I rendered the captured video onto a layer on screen. This image doesn't actually need to be displayed at all, so I removed that part, and the overall CPU usage dropped below 33%.

4. Reduce the resolution

Currently we capture video at 1920×1080, but we don't need such a high resolution. Lowering it reduces both the number of pixels to process and the I/O time.

After I lowered the resolution to 640x480:

    if ([_session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
        /** Reduce the resolution of image acquisition */
        [_session setSessionPreset:AVCaptureSessionPreset640x480];
    }

The result is remarkable: overall CPU usage drops to about 5%!

5. Improve the algorithm and remove redundant calculations

Finally, I optimized some redundant calculations in the algorithm. However, since the CPU usage has been reduced to about 5%, the real bottleneck has been eliminated, so the improvement here is not very obvious.

4. Encapsulation

We now have a roughly usable heart rate detection demo, but so far I had focused on getting the feature working as quickly as possible and hadn't given much thought to the overall structure or to encapsulation; I was basically writing Objective-C in a procedural style.

Then our next important task is to encapsulate our heart rate detection and make it a reusable component.

Task List

  • Encapsulate the component and provide reasonable interfaces (in the form of delegate or block);
  • Provide two default animations;

Encapsulating ViewController

At first I thought of encapsulating the ViewController: when someone needs heart rate detection, a heart rate detection ViewController pops up, shows some animation during detection, dismisses itself automatically when detection finishes, and returns the detected heart rate.

I declared three interfaces in the protocol:

    /**
     * Delegate protocol for the heart rate detection ViewController
     */
    @protocol MTHeartBeatsCaptureViewControllerDelegate <NSObject>
    @optional

    - (void)heartBeatsCaptureViewController:(MTHeartBeatsCaptureViewController *)captureVC
                  didFinishCaptureHeartRate:(int)rate;

    - (void)heartBeatsCaptureViewControllerDidCancel:(MTHeartBeatsCaptureViewController *)captureVC;

    - (void)heartBeatsCaptureViewController:(MTHeartBeatsCaptureViewController *)captureVC
                           didFailWithError:(NSError *)error;

    @end

I made all three methods optional, because I also exposed three corresponding blocks on the ViewController, one for each method.

    @property (nonatomic, copy) void (^didFinishCaptureHeartRateHandle)(int rate);
    @property (nonatomic, copy) void (^didCancelCaptureHeartRateHandle)(void);
    @property (nonatomic, copy) void (^didFailCaptureHeartRateHandle)(NSError *error);

Encapsulating the heart rate detection class

After encapsulating the ViewController, it still didn't feel quite right: it means others can only detect heart rate through the interface I built. If a user of the component has a better interaction design or special logic requirements, it becomes very inconvenient. So a deeper level of encapsulation is needed.

Next, I will extract the heart rate detection class and encapsulate it.

First I stripped the key heart rate detection code out bit by bit and moved it into a new MTHeartBeatsCapture class. By the time I was nearly done, the whole screen was full of red errors, and it took me an afternoon to get the project back to a working state.

I set up two methods in the heart rate detection class: start and stop. It is very convenient to use.

    /** Start detecting the heart rate */
    - (NSError *)start;

    /** Stop detecting the heart rate */
    - (void)stop;

Then, I redesigned a callback interface for the heart rate detector, which still uses both delegate and block. The new interface is as follows:

    /**
     * Delegate protocol for the heart rate monitor.
     * You can choose either the delegate or the blocks to receive notifications,
     * so all methods in the protocol are optional.
     */
    @protocol MTHeartBeatsCaptureDelegate <NSObject>
    @optional

    /** Called when a peak (beat) is detected; the return value decides whether to stop the detection */
    - (BOOL)heartBeatsCapture:(MTHeartBeatsCapture *)capture heartBeatingWithRate:(int)rate;

    /** The stable signal was lost */
    - (void)heartBeatsCaptureDidLost:(MTHeartBeatsCapture *)capture;

    /** A new feature value arrived (30 frames per second) */
    - (void)heartBeatsCaptureDataDidUpdata:(MTHeartBeatsCapture *)capture;

    @end

I added heartBeatsCaptureDidLost: to the new interface so there is a callback when the feature values fluctuate violently, allowing the user to be reminded that the finger is positioned incorrectly. The third method streams the data outward so that an external animation view can render an ECG-like animation.

I also removed the didFinishCaptureHeartRate: "detection finished" callback and replaced it with heartBeatingWithRate:, leaving the decision of when detection has succeeded to the caller: when the external developer considers the detected heart rate stable enough, they return YES to stop the detection.
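For example, a client might implement the callback roughly like this; the stability check, the lastRate/stableCount properties, and showFinalHeartRate: are all hypothetical, shown only to illustrate the return-value contract:

    /** Sketch: stop capturing once several consecutive readings agree (client-side logic is hypothetical) */
    - (BOOL)heartBeatsCapture:(MTHeartBeatsCapture *)capture heartBeatingWithRate:(int)rate {
        if (abs(rate - self.lastRate) <= 3) {
            self.stableCount++;              /** hypothetical counters kept by the client */
        } else {
            self.stableCount = 0;
        }
        self.lastRate = rate;
        if (self.stableCount >= 5) {
            [self showFinalHeartRate:rate];  /** hypothetical UI method */
            return YES;                      /** YES tells the capture object to stop */
        }
        return NO;
    }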

In addition, I removed the didFailWithError: callback, because I found that almost all possible errors occur in the preparation stage, before detection starts. So I changed the start method to return the error information instead, enumerating the error types as codes wrapped in an NSError.

    typedef NS_OPTIONS(NSInteger, CaptureError) {
        CaptureErrorNoError             = 0,      /**< No error */
        CaptureErrorNoAuthorization     = 1 << 0, /**< No camera permission */
        CaptureErrorNoCamera            = 1 << 1, /**< No camera device, probably running on the simulator */
        CaptureErrorCameraConnectFailed = 1 << 2, /**< Camera error, unable to connect to the camera */
        CaptureErrorCameraConfigFailed  = 1 << 3, /**< Camera configuration failed; it may be locked by another program */
        CaptureErrorTimeOut             = 1 << 4, /**< Detection timed out; remind the user to place the finger correctly */
        CaptureErrorSetupSessionFailed  = 1 << 5, /**< Failed to set up the video data stream */
    };

After the main work was done, Brother Mao gave me a lot of suggestions, mostly about encapsulation problems: many things don't need to be exposed and should be hidden as much as possible, the interface should be kept as simple as possible, and unnecessary features removed. In particular, a feature-value array exposed to the outside should be immutable (an NSArray rather than an NSMutableArray), something I had never considered.

Encapsulating and improving the animation

After encapsulating the heart rate detection class, I split out the part that displays the heartbeat waveform and wrapped it into an MTHeartBeatsWaveView. To use it, simply assign the animation view to MTHeartBeatsCapture as its delegate, and it receives the feature values and displays them.

Animation improvement: during testing I found the displayed waveform wasn't ideal. The size of the view is fixed at initialization, but the amplitude of the heartbeat fluctuation varies a lot: sometimes the trace is flat as an airport runway, and sometimes it swings so wildly that it goes beyond the bounds of the view.

So I improved the rendering: I compute an appropriate zoom ratio from the range of the current waveform and dynamically scale the Y coordinate of the heartbeat curve so that its amplitude fits the current view.
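The mapping itself is simple; a sketch of one way to do it (the helper name and view geometry are illustrative):

    /** Sketch: map a feature value into the view, scaled to the current waveform range */
    static CGFloat YForValue(float value, float minV, float maxV, CGFloat viewHeight) {
        float range = maxV - minV;
        if (range < 1e-6f) return viewHeight * 0.5f;   /** flat signal: draw a middle line */
        float normalized = (value - minV) / range;     /** 0 at the minimum, 1 at the maximum */
        return viewHeight * (1.0f - normalized);       /** flip so larger values are drawn higher */
    }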

This improvement greatly improves the user experience.

5. Accuracy Optimization

The curve obtained earlier already reflects the heartbeat well, but the heart rate calculation still has a certain false detection rate. The clear heartbeat curve shown above is really the ideal case; in testing, the sampled data often contains large noise and disturbances, which frequently produce false peaks in the heart rate calculation. So I made optimizations in the following two areas to improve the accuracy of the detection.

1. Filtering in the preprocessing stage

The obtained curve sometimes contains more noise

Looking at the noise in the heart rate curve, there is clearly some high-frequency noise mixed in, probably caused by slight finger movement or by noise from the camera itself. So I found a simple real-time band-pass filter to process the sampled H values and filter out some of the high- and low-frequency noise.
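One simple real-time possibility (not necessarily the exact filter I used) is the difference of two exponential low-pass filters: the fast one removes high-frequency jitter, the slow one estimates the baseline drift, and subtracting them leaves the band in between. The coefficients below are assumed values for roughly 30 samples per second:

    /** Sketch: crude real-time band-pass as the difference of two one-pole low-pass filters */
    static float fastAvg = 0.0f;   /** follows the signal, smoothing out high-frequency jitter */
    static float slowAvg = 0.0f;   /** follows only the slow baseline wander */

    static float BandPass(float sample) {
        const float kAlphaFast = 0.5f;    /** assumed coefficient for the fast low-pass */
        const float kAlphaSlow = 0.05f;   /** assumed coefficient for the slow low-pass */
        fastAvg += kAlphaFast * (sample - fastAvg);
        slowAvg += kAlphaSlow * (sample - slowAvg);
        return fastAvg - slowAvg;         /** what remains is the pass band in between */
    }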

Heart rate signal after adding filter processing

After being processed by the filter, the curve we get is smoother.

2. Eliminating false peaks with ideas from pitch marking (TD-PSOLA)

After filtering, we can see that there is always a small secondary bump within each heartbeat cycle. Since it is not a real peak, I call it a "false peak". This false peak is quite pronounced and sometimes interferes with the heart rate detection: when the algorithm misjudges it as a heartbeat peak, the computed heart rate doubles.

Besides external noise, this false peak appears because the heart's own beating cycle contains several smaller fluctuations. Let's look at the complete process of one heartbeat.

Animation of the electrocardiogram waveform generation process[1]

The figure above shows the state of the heart and the corresponding waves during one cardiac cycle. Before and after the heart contracts, electrical signals also drive it to relax, which shows up as several additional waves on the electrocardiogram. Blood pressure changes accordingly, and that is what produces the fluctuations in the data we detect.

Normal cardiac cycle[2]

So the false peak is unavoidable, and the existing threshold-based peak detection is easily fooled; the algorithm itself needs improving. That brought me back to the Fast Fourier Transform.

Since I knew very little about signal processing, I spent another two days on the Fast Fourier Transform with no progress. So I consulted senior colleagues in the department, who were very enthusiastic and recommended many solutions and materials. One of them, Bo Wei, who handled audio processing in the lab, happened to be assigned to the same group as me during new-employee orientation, so I took the chance to ask him related questions in our spare time. He felt the heart rate waveform was simple enough that an FFT wasn't necessary, and recommended pitch detection algorithms to me instead.

Pitch Marking

Simply put, the algorithm marks candidate peaks and then eliminates the false ones via dynamic programming, leaving the real peaks. Based on this idea, I implemented a simplified false-peak elimination. After the improvement, the heart rate readings matched my Apple Watch. (I feel pretty good about myself 😂, please don't be too hard on me~~)
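My simplified version boiled down to an idea like the one sketched below (not the full pitch-marking algorithm): among candidate peaks that are closer together than a plausible minimum beat interval, keep only the stronger one. The gap constant and function name are assumptions:

    /** Sketch: drop candidate peaks that fall too close to a stronger neighbour */
    static const int kMinPeakGap = 12;   /** assumed: ~0.4 s at 30 fps, i.e. at most ~150 BPM */

    /** candidates: frame indices of threshold crossings; values: feature value at each frame */
    static NSArray<NSNumber *> *FilterPeaks(NSArray<NSNumber *> *candidates, const float *values) {
        NSMutableArray<NSNumber *> *peaks = [NSMutableArray array];
        for (NSNumber *candidate in candidates) {
            int idx = candidate.intValue;
            if (peaks.count == 0) { [peaks addObject:candidate]; continue; }
            int lastIdx = peaks.lastObject.intValue;
            if (idx - lastIdx >= kMinPeakGap) {
                [peaks addObject:candidate];          /** far enough apart: a new beat */
            } else if (values[idx] > values[lastIdx]) {
                [peaks removeLastObject];             /** too close: keep only the stronger one */
                [peaks addObject:candidate];
            }
        }
        return peaks;
    }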

Real-time peak detection

I also wanted to provide a real-time heartbeat animation, so I implemented real-time peak detection as well. This way, every time a peak is detected, the delegate or block is notified immediately so the interface can animate.

Heart rate detection ViewController

Afterword (Xiehouyu)

Since this closing chapter was written after I took a break, I jokingly named it after the xiehouyu, the two-part allegorical "saying after a rest".

This heart rate detection project took about three weeks in total. The first demo was done in three or four days, but the subsequent encapsulation and optimization took another two weeks, which left a deep impression on me...

From "no way this can work" at the beginning to matching the Apple Watch at the end, it has been a very fulfilling process. I ran into plenty of difficulties along the way, and once or twice genuinely felt there was no way through, but I always managed to get past them in the end; I could hardly help reciting poetry about it. I feel very fulfilled and happy.

I also received help from many people along the way. Especially Brother Mao, who guided me in all sorts of ways and, after hearing our complaints about the cafeteria of a certain friendly company, often took us out to eat meat, greatly improving our meals~😋 And the seniors and colleagues in the department, who enthusiastically offered opinions and materials whenever they saw my questions. I hope this blog post is helpful to everyone. Thank you~
