Android tracking technology analysis

1. What Is Tracking?

Tracking is a method of collecting data from websites, apps, or backend services. With tracking, users' in-app behavior can be collected and then used to analyze and optimize the product experience, as well as to provide data support for product operations. Common metrics include page views (PV), unique visitors (UV), page duration, and button clicks.

When collecting behavioral data, you usually need to add some code to the web page/App; when the user's behavior meets a certain condition, it is reported to the server. The process of adding this code is what "tracking" refers to, and the technique has been around for a long time. As technology has developed and the demands on data collection have grown, I think tracking solutions have gone through the following stages:

Code tracking: developers add behavior-reporting code to the source of the web page/App according to product/operations requirements. When the user's behavior meets a condition, this code executes and reports the behavior data to the server. This is the most basic solution: every addition or modification of the reporting conditions requires developer involvement, and the effect is only visible after the next release. Many companies provide data-reporting SDKs of this type, which wrap the backend reporting interface into a simple client SDK. Developers embed such an SDK and call a small amount of code at the tracking location to report behavior data.

Full tracking: all behaviors in the web page/App that meet a certain condition are reported to the backend server; for example, every button click in the App is reported, and the product/operations staff then filter out the behavior data they need in the backend. The advantage is obvious: adding or modifying reporting conditions no longer requires developers to change tracking code. The disadvantages are just as obvious: the amount of reported data is much larger than with code tracking, and much of it may be worthless. In addition, this solution tends to look at each user behavior in isolation, without its context, which complicates data analysis. Many companies provide SDKs of this type; they "hook" the original App code statically or dynamically to monitor behavior, and when reporting they usually batch multiple events and upload them in merged requests.

Hook literally means "hook". I first came across it on Windows while studying information security. It generally refers to changing the behavior of a system API by some means: bypassing a system method, or altering the system's workflow. Here it specifically means replacing the object that was supposed to execute a method with another one, usually via reflection or a proxy. You need to find the code location to hook, and the replacement can even happen at compile time.

Visual tracking: the product/operations staff circle an element on the web page/App interface, configure which elements need to be monitored, and save that configuration. When the App starts, it fetches the pre-selected configuration from the backend server and monitors the elements of the App interface accordingly; when an element meets the conditions, its behavior data is reported to the backend. Given the full tracking solution, it is natural to think of on-demand tracking from the perspective of experience optimization, and visual tracking is exactly such an on-demand configuration solution. Some companies provide SDKs of this type. For circling the monitored elements, a web management interface is usually provided; after the phone installs and initializes the SDK, it can connect to this interface, letting users configure the elements to monitor from the web page.

Many SDKs in the industry support one or all of the three tracking solutions introduced above, such as Mixpanel, Sensorsdata, TalkingData, GrowingIO, Zhuge IO, Heap Analytics, MTA, Umeng Analytics, and Baidu. Note that the industry's names for the latter two solutions vary; some call them "no tracking" or "codeless tracking". Since Mixpanel (which supports code tracking and visual tracking) and Sensorsdata (which supports all three) have open-sourced their entire SDKs, and their technical approaches are similar, let's take their Android SDKs as examples and briefly analyze how the three tracking solutions are implemented. For the JS SDK implementation, see my other post, "JS Tracking SDK Technical Analysis".

2. Code Tracking

Most SDKs, including the Mixpanel SDK, encapsulate this tracking solution in a fairly simple interface, here track(String eventName, JSONObject properties). Developers call this interface with an event name and event properties, and the event is then reported to the backend.
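For example, a typical call looks roughly like this (the event name and property keys are made-up illustrations for this article, not anything prescribed by the SDKs):

    import org.json.JSONException;
    import org.json.JSONObject;
    import com.mixpanel.android.mpmetrics.MixpanelAPI;

    // Report a "purchase button clicked" event with two custom properties.
    void reportBuyClick(MixpanelAPI mixpanel) throws JSONException {
        JSONObject properties = new JSONObject();
        properties.put("button_name", "buy_now");     // example property
        properties.put("page", "ProductDetail");      // example property
        mixpanel.track("ButtonClick", properties);
    }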

In terms of implementation, the Mixpanel SDK uses a HandlerThread to process events by default. When the developer calls track(String eventName, JSONObject properties), execution switches from the calling thread to the HandlerThread, and the event is first stored in a database. The SDK then checks whether 40 events have accumulated; if so, they are merged and reported to the backend.

When the developer enables debug mode or calls the flush API manually, all accumulated events are reported immediately. However, since there is only one worker thread, if earlier events are still being processed when flush is called, the SDK processes the remaining events after a 1-minute interval.

Developers can configure the threshold for batched reporting, the retry interval when reporting is blocked, and so on. This solution is quite basic; most developers have come across it, so it needs no further analysis.
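To make the flow concrete, here is a minimal sketch of the store-then-batch pattern described above (the names and the threshold constant are illustrative; this is not the Mixpanel SDK's actual code):

    import android.os.Handler;
    import android.os.HandlerThread;
    import android.os.Message;
    import org.json.JSONObject;

    public class AnalyticsMessages {
        private static final int FLUSH_THRESHOLD = 40; // batch size before upload
        private final Handler mWorker;
        private int mPendingEvents = 0;                // touched only on the worker thread

        public AnalyticsMessages() {
            HandlerThread thread = new HandlerThread("analytics-worker");
            thread.start();
            mWorker = new Handler(thread.getLooper()) {
                @Override
                public void handleMessage(Message msg) {
                    // 1. persist the event to the local database
                    storeEvent((JSONObject) msg.obj);
                    // 2. flush once enough events have accumulated
                    if (++mPendingEvents >= FLUSH_THRESHOLD) {
                        sendAllEvents();   // merged upload to the backend
                        mPendingEvents = 0;
                    }
                }
            };
        }

        // Called by track() on the caller's thread; hops onto the worker thread.
        public void enqueue(JSONObject event) {
            mWorker.obtainMessage(0, event).sendToTarget();
        }

        private void storeEvent(JSONObject event) { /* write to SQLite */ }
        private void sendAllEvents() { /* HTTP POST the accumulated batch */ }
    }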

3. Full Tracking

3.1 AOP Basics

Mixpanel's current Android SDK does not provide this function, but the Sensors Analytics Android SDK does, and its implementation relies on AOP. So what is AOP?

In the software industry, AOP stands for Aspect-Oriented Programming: a technique for achieving unified maintenance of program functionality through pre-compilation and runtime dynamic proxies. AOP is a continuation of OOP, a hot topic in software development, an important part of the Spring framework, and a derived paradigm of functional programming. AOP can be used to isolate the parts of business logic from one another, reducing coupling between them, improving reusability, and raising development efficiency. (from Baidu Baike)

In short, AOP is a technique that can dynamically and uniformly add functionality to a program, without modifying its source code, through pre-compilation or runtime dynamic proxies.

The Sensors Analytics Android SDK implements full tracking by locating, during compilation, the places in the code where events need to be reported and inserting the SDK's event-reporting code there. The framework used is AspectJ.

At this point we should briefly look at AspectJ and some of its concepts. It is the best-known AOP framework and shows up in many places, such as Hugo, an annotation-driven logging and performance-tuning library by Jake Wharton; AspectJ-style AOP is also widely used in the Spring framework. As I understand it, the main concepts in AspectJ are:

  • JoinPoint: a join point in the code (where we want to insert code)
  • Aspect: the description of a cut: a Pointcut plus the Advice applied there
  • Pointcut: describes which join points to match, such as where a method is called (call(MethodSignature)) or where a method body executes (execution(MethodSignature))
  • Advice: describes where, relative to the pointcut, the code is inserted, such as before it (@Before), after it (@After), or around it (@Around)

It can be seen that when implementing AOP functions, the following things need to be done:

  • Define an Aspect, which must have two attributes: a Pointcut and Advice.
  • Write the code to be injected when the code described by the Pointcut and Advice is matched.
  • At compile time, a special Java compiler (AspectJ's ajc) finds the code matching the Aspect we defined and inserts the injected code at the location specified by the Advice.

If you are familiar with AspectJ, you can already guess how full tracking is implemented inside the SDK; if you haven't used it, there is no need to rush to learn AspectJ comprehensively, because only a small part of its features is used inside the SDK. You can go straight to the analysis below.

3.2 Full Tracking - Technical Implementation

How does Sensors SDK monitor View click events? I simplified the SDK code for analysis. There are several steps:

3.2.1 Defining an Aspect

    import org.aspectj.lang.JoinPoint;
    import org.aspectj.lang.annotation.After;
    import org.aspectj.lang.annotation.Aspect;
    import org.aspectj.lang.annotation.Pointcut;

    @Aspect
    public class ViewOnClickListenerAspectj {

        /**
         * android.view.View.OnClickListener.onClick(android.view.View)
         *
         * @param joinPoint JoinPoint
         * @throws Throwable Exception
         */
        @After("execution(* android.view.View.OnClickListener.onClick(android.view.View))")
        public void onViewClickAOP(final JoinPoint joinPoint) throws Throwable {
            AopUtil.sendTrackEventToSDK(joinPoint, "onViewOnClick");
        }
    }

This Aspect declares: after the original implementation of the android.view.View.OnClickListener.onClick(android.view.View) method executes, insert the code AopUtil.sendTrackEventToSDK(joinPoint, "onViewOnClick");

AopUtil.sendTrackEventToSDK(joinPoint, "onViewOnClick") is what reports the click event. Because the Sensors SDK splits the full tracking function and the main SDK into two jar packages, the AopUtil helper is used to call the actual event-reporting code; its implementation is not covered in detail here. Let's look at the actual click-reporting call behind it:

    SensorsDataAPI.sharedInstance().track(AopConstants.APP_CLICK_EVENT_NAME, properties);

You can see that although the click monitoring is implemented via AOP, the event is ultimately still reported through the track method.
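What a helper like AopUtil roughly does with the JoinPoint can be sketched as follows (a simplified illustration based on the idea above, not the SDK's actual code; the $element_* property keys are examples):

    import android.view.View;
    import android.widget.TextView;
    import org.aspectj.lang.JoinPoint;
    import org.json.JSONObject;

    public class AopUtil {
        public static void sendTrackEventToSDK(JoinPoint joinPoint, String methodName) {
            try {
                // The single argument of onClick(View) is the clicked View.
                View view = (View) joinPoint.getArgs()[0];
                JSONObject properties = new JSONObject();
                properties.put("$element_type", view.getClass().getCanonicalName());
                properties.put("$element_id", view.getId());
                if (view instanceof TextView) {
                    properties.put("$element_content", ((TextView) view).getText().toString());
                }
                SensorsDataAPI.sharedInstance().track(AopConstants.APP_CLICK_EVENT_NAME, properties);
            } catch (Exception e) {
                // Tracking must never crash the host app.
            }
        }
    }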

3.2.2 Using the ajc compiler to insert aspect code into source code

If you want to inject the code written with the AspectJ framework into the original project code, you need to invoke the ajc compiler in /app/build.gradle. The script is as follows:

    ...
    import org.aspectj.bridge.IMessage
    import org.aspectj.bridge.MessageHandler
    import org.aspectj.tools.ajc.Main

    buildscript {
        repositories {
            mavenCentral()
        }
        dependencies {
            classpath 'org.aspectj:aspectjtools:1.8.10'
        }
    }

    repositories {
        mavenCentral()
    }

    android {
        ...
    }

    dependencies {
        ...
        compile 'org.aspectj:aspectjrt:1.8.10'
    }

    final def log = project.logger
    final def variants = project.android.applicationVariants

    variants.all { variant ->
        if (!variant.buildType.isDebuggable()) {
            log.debug("Skipping non-debuggable build type '${variant.buildType.name}'.")
            return
        }

        JavaCompile javaCompile = variant.javaCompile
        javaCompile.doLast {
            String[] args = ["-showWeaveInfo",
                             "-1.5",
                             "-inpath", javaCompile.destinationDir.toString(),
                             "-aspectpath", javaCompile.classpath.asPath,
                             "-d", javaCompile.destinationDir.toString(),
                             "-classpath", javaCompile.classpath.asPath,
                             "-bootclasspath", project.android.bootClasspath.join(File.pathSeparator)]
            log.debug "ajc args: " + Arrays.toString(args)

            MessageHandler handler = new MessageHandler(true);
            new Main().run(args, handler);
            for (IMessage message : handler.getMessages(null, true)) {
                switch (message.getKind()) {
                    case IMessage.ABORT:
                    case IMessage.ERROR:
                    case IMessage.FAIL:
                        log.error message.message, message.thrown
                        break;
                    case IMessage.WARNING:
                        log.warn message.message, message.thrown
                        break;
                    case IMessage.INFO:
                        log.info message.message, message.thrown
                        break;
                    case IMessage.DEBUG:
                        log.debug message.message, message.thrown
                        break;
                }
            }
        }
    }

In the Sensors Android SDK, the above script is packaged as a Gradle plugin; developers only need to apply the plugin in app/build.gradle.

    apply plugin: 'com.sensorsdata.analytics.android'

3.2.3 Complete the code insertion and check the effect

After completing the above two steps, we can insert our data-reporting code into the android.view.View.OnClickListener.onClick(android.view.View) method. Add a Button to the demo code and set an OnClickListener for it, compile, and check the class files in /build/intermediates/classes/debug/. After ajc compilation, the Aspect code has been woven into the original code, and the onViewClickAOP method of ViewOnClickListenerAspectj is called:

    public class MainActivity extends Activity {
        public MainActivity() {
        }

        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            this.setContentView(2130968603);
            Button btnTst = (Button) this.findViewById(2131427422);
            btnTst.setOnClickListener(new OnClickListener() {
                public void onClick(View v) {
                    JoinPoint var2 = Factory.makeJP(ajc$tjp_0, this, this, v);

                    try {
                        Log.i("MainActivity", "button clicked");
                    } catch (Throwable var5) {
                        ViewOnClickListenerAspectj.aspectOf().onViewClickAOP(var2);
                        throw var5;
                    }

                    ViewOnClickListenerAspectj.aspectOf().onViewClickAOP(var2);
                }

                static {
                    ajc$preClinit();
                }
            });
        }
    }

This is the basic usage of AspectJ. The Sensors Android SDK inserts Aspect code with AspectJ at compile time, i.e. statically. This static full tracking solution essentially modifies the bytecode to insert the event-reporting code.

In addition to this solution, the bytecode can also be modified with the Transform API provided by the Android Gradle plugin (version 1.5.0 or later), ASM, or Javassist. These techniques can be seen in NetEase Lede's tracking solution and the Nuwa hotfix project.
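As a taste of the ASM route, a minimal sketch might look like this (illustrative only; TrackHelper.trackViewClick is a hypothetical hook, and this is not code from any of the projects mentioned above):

    import org.objectweb.asm.MethodVisitor;
    import org.objectweb.asm.Opcodes;
    import org.objectweb.asm.commons.AdviceAdapter;

    // Instrument onClick(View) so it calls a static tracking hook before
    // returning. A ClassVisitor (typically wired in through the Transform API)
    // would return this visitor from visitMethod for the onClick method of
    // classes implementing View.OnClickListener.
    class OnClickMethodVisitor extends AdviceAdapter {
        OnClickMethodVisitor(MethodVisitor mv, int access, String name, String desc) {
            super(Opcodes.ASM7, mv, access, name, desc);
        }

        @Override
        protected void onMethodExit(int opcode) {
            // Push the View argument (local slot 1 of an instance method) and
            // call the hypothetical hook TrackHelper.trackViewClick(View).
            mv.visitVarInsn(Opcodes.ALOAD, 1);
            mv.visitMethodInsn(Opcodes.INVOKESTATIC,
                    "com/example/TrackHelper", "trackViewClick",
                    "(Landroid/view/View;)V", false);
        }
    }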

3.3 AspectJ related information

  • Aspect Oriented Programming in Android: https://fernandocejas.com/2014/08/03/aspect-oriented-programming-in-android/
  • AspectJ comprehensive analysis of AOP in Android: http://www.jianshu.com/p/f90e04bcb326
  • Hujiang has open-sourced a plugin called AspectJX, which extends AspectJ. In addition to AOP on src code, it also supports AOP on Kotlin code and on the jar and aar files referenced by the project: https://github.com/HujiangTechnology/gradle_plugin_android_aspectjx
  • Everything you need to know about Spring AOP (AspectJ): http://blog.csdn.net/javazejian/article/details/56267036

3.4 Other ideas

The above is the "static hook" implementation, represented by AspectJ. Is there a way to "dynamically hook" click behavior at runtime, without modifying the source code? Yes: in the Java world there is also reflection. Let's see how click listeners can be replaced at runtime.

In the source code of android.view.View (API >= 14), there are several key methods:

    // getListenerInfo method: returns the listener info object mListenerInfo
    ListenerInfo getListenerInfo() {
        if (mListenerInfo != null) {
            return mListenerInfo;
        }
        mListenerInfo = new ListenerInfo();
        return mListenerInfo;
    }

    // Listener info
    static class ListenerInfo {
        ... // Various xxxListeners are omitted here

        /**
         * Listener used to dispatch click events.
         * This field should be made private, so it is hidden from the SDK.
         * {@hide}
         */
        public OnClickListener mOnClickListener;

        /**
         * Listener used to dispatch long click events.
         * This field should be made private, so it is hidden from the SDK.
         * {@hide}
         */
        protected OnLongClickListener mOnLongClickListener;

        ...
    }

    ListenerInfo mListenerInfo;

    // This method is familiar: it sets mListenerInfo.mOnClickListener to the
    // OnClickListener object we created.
    public void setOnClickListener(@Nullable OnClickListener l) {
        if (!isClickable()) {
            setClickable(true);
        }
        getListenerInfo().mOnClickListener = l;
    }

    /**
     * Return whether this view has an attached OnClickListener. Returns
     * true if there is a listener, false if there is none.
     */
    public boolean hasOnClickListeners() {
        ListenerInfo li = mListenerInfo;
        return (li != null && li.mOnClickListener != null);
    }

From the methods above, we can see that the click listener is stored in mListenerInfo.mOnClickListener. So to hook the click listener, we just need to replace mOnClickListener with a proxy object that wraps it. The implementation idea:

1. Create a click listener proxy class

    // The click listener proxy class, which adds click-behavior reporting
    class OnClickListenerWrapper implements View.OnClickListener {
        // The original click listener object
        private View.OnClickListener onClickListener;

        public OnClickListenerWrapper(View.OnClickListener onClickListener) {
            this.onClickListener = onClickListener;
        }

        @Override
        public void onClick(View view) {
            // Let the original click listener work as before
            if (onClickListener != null) {
                onClickListener.onClick(view);
            }
            // Report the click event; some properties of the clicked view can be collected here
            track(APP_CLICK_EVENT_NAME, getSomeProperties(view));
        }
    }

2. Use reflection to get a View's mListenerInfo.mOnClickListener and replace it with the proxy click listener

    // Hook a View's click listener
    public void hookView(View view) throws Exception {
        // 1. Reflectively call View#getListenerInfo (API >= 14) to obtain the mListenerInfo object
        Class viewClazz = Class.forName("android.view.View");
        Method getListenerInfoMethod = viewClazz.getDeclaredMethod("getListenerInfo");
        if (!getListenerInfoMethod.isAccessible()) {
            getListenerInfoMethod.setAccessible(true);
        }
        Object mListenerInfo = getListenerInfoMethod.invoke(view);

        // 2. Reflectively obtain the mOnClickListener object from mListenerInfo
        Class listenerInfoClazz = Class.forName("android.view.View$ListenerInfo");
        Field onClickListenerField = listenerInfoClazz.getDeclaredField("mOnClickListener");
        if (!onClickListenerField.isAccessible()) {
            onClickListenerField.setAccessible(true);
        }
        View.OnClickListener mOnClickListener = (View.OnClickListener) onClickListenerField.get(mListenerInfo);

        // 3. Create the proxy click listener
        View.OnClickListener mOnClickListenerWrapper = new OnClickListenerWrapper(mOnClickListener);

        // 4. Set mListenerInfo.mOnClickListener to the new wrapper
        onClickListenerField.set(mListenerInfo, mOnClickListenerWrapper);
        // This would also work: view.setOnClickListener(mOnClickListenerWrapper);
    }

Note that if the API level is below 14, mOnClickListener is stored directly as a field of the View object, without ListenerInfo, so one less reflection step is needed.
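A sketch of that older case (the field name comes from the pre-ICS View source; illustrative only):

    import java.lang.reflect.Field;

    // Sketch for API < 14: mOnClickListener is a field directly on View,
    // so a single reflective field access suffices.
    public void hookViewPre14(View view) throws Exception {
        Field field = View.class.getDeclaredField("mOnClickListener");
        field.setAccessible(true);
        View.OnClickListener original = (View.OnClickListener) field.get(view);
        field.set(view, new OnClickListenerWrapper(original));
    }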

3. Hook all Views in the App

We are analyzing full tracking, so how do we hook the clicks of all Views in the App? There are two approaches:

The first: after the Activity is created, traverse the ViewTree top-down starting from the Activity's DecorView, and perform hookView on each View encountered (a sketch follows below). This approach is somewhat brute-force, and performance suffers because a great deal of reflection is used while traversing the ViewTree.
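A sketch of this first approach, where hookView is the reflection helper shown earlier (note that View#hasOnClickListeners requires API 15):

    import android.app.Activity;
    import android.view.View;
    import android.view.ViewGroup;

    // Walk the ViewTree from the DecorView down and hook every View that
    // already has a click listener attached.
    public void hookDecorView(Activity activity) {
        traverse(activity.getWindow().getDecorView());
    }

    private void traverse(View view) {
        if (view.hasOnClickListeners()) {
            try {
                hookView(view);
            } catch (Exception e) {
                // ignore: hooking must never crash the app
            }
        }
        if (view instanceof ViewGroup) {
            ViewGroup group = (ViewGroup) view;
            for (int i = 0; i < group.getChildCount(); i++) {
                traverse(group.getChildAt(i));
            }
        }
    }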

The second approach is slightly better. It comes from an open-source library on GitHub, AndroidTracker (implemented in Kotlin). After the Activity is created, it adds a transparent child View to the DecorView. In this child View's onTouchEvent, the Views on screen whose bounds contain the touch coordinates are located, and hookView is then attempted on them. This is craftier: the press position is obtained first, and the Views to hook are found from that position, which avoids reflecting on every View while traversing the ViewTree. Concretely, while traversing the ViewTree, each View is checked for two conditions: its bounds contain the pressed coordinates, and it is visible. Views meeting both conditions are collected into an ArrayList (hitViews); the list is then traversed again, and hookView is performed on any View whose hasOnClickListeners() returns true.

Overall, dynamic hooking relies on reflection, which inevitably affects performance. If you want to build a full tracking solution this way, evaluate it carefully first.

4. Visual Tracking

4.1 Visual Tracking - Technical Implementation

Visual tracking involves two steps, both of which can be completed by non-technical staff. In the first step, an App with the Mixpanel/Sensors SDK embedded connects to the backend; once the phone is synchronized with the backend, the management interface displays the same interface as the App. Users can select with the mouse the elements to monitor on the management interface, set the event name, the element attributes to collect, and so on. (Some SDKs reportedly perform the circle-selection on the phone itself; either way it amounts to the same thing: a configuration has to be saved to the backend.) In the second step, when the App with the embedded SDK starts, it fetches the configuration from the server, then watches the interfaces and elements of the App according to that configuration and reports events to the server when the configured conditions are met. The following takes the Mixpanel and Sensorsdata SDKs as examples to briefly analyze the implementation.

4.1.1 Circle the elements to be monitored and save the configuration

1. Create a WebSocket connection to the backend

WebSocket is used so that the phone and the backend stay connected over a long-lived, two-way channel. Upon connecting, the phone sends its device information to the backend.
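A minimal sketch of this step, using OkHttp's WebSocket client purely for illustration (both SDKs ship their own WebSocket implementations, and the URL and message format here are made up):

    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;
    import okhttp3.WebSocket;
    import okhttp3.WebSocketListener;

    // Connect to a hypothetical editor backend and send device info on open.
    WebSocket connectEditor() {
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url("wss://example.com/editor")   // hypothetical endpoint
                .build();
        return client.newWebSocket(request, new WebSocketListener() {
            @Override
            public void onOpen(WebSocket webSocket, Response response) {
                // Report device info as soon as the connection is up.
                webSocket.send("{\"type\":\"device_info\",\"model\":\""
                        + android.os.Build.MODEL + "\"}");
            }

            @Override
            public void onMessage(WebSocket webSocket, String text) {
                // Handle commands from the web editor, e.g. "take a snapshot".
            }
        });
    }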

2. Send the screenshot of the App interface to the backend

After creating the socket connection, the SDK scans the started Activity on the main thread to find the interface's RootView (actually the DecorView). While locating the RootView, it takes a screenshot of it by reflectively calling the View class's createSnapshot method.

After taking the screenshot, the SDK computes the image's hash. If the image has changed, it does a recursive depth-first traversal of the Activity's ViewTree, reading each View's properties (id, top, left, width, height, class name, layoutRules, etc.) as it goes.
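The depth-first property scan can be sketched like this (the property set and JSON layout are illustrative, not Mixpanel's exact snapshot format):

    import android.view.View;
    import android.view.ViewGroup;
    import org.json.JSONArray;
    import org.json.JSONException;
    import org.json.JSONObject;

    // Recursively serialize a View subtree into a JSON node per View.
    private JSONObject snapshotView(View view) throws JSONException {
        JSONObject node = new JSONObject();
        node.put("id", view.getId());
        node.put("class", view.getClass().getCanonicalName());
        node.put("top", view.getTop());
        node.put("left", view.getLeft());
        node.put("width", view.getWidth());
        node.put("height", view.getHeight());
        if (view instanceof ViewGroup) {
            JSONArray children = new JSONArray();
            ViewGroup group = (ViewGroup) view;
            for (int i = 0; i < group.getChildCount(); i++) {
                children.put(snapshotView(group.getChildAt(i)));
            }
            node.put("children", children);
        }
        return node;
    }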

Finally, the collected data is sent to the connected backend, which parses it and renders the App's interface on a web page. There, users can circle the element to monitor, set its event information (event_type and event_name), and save the configuration.

4.1.2 Get configuration, monitor element behavior, and report events

1. Get the configuration

When the SDK starts, it pulls a JSON configuration from the server and saves it in SharedPreferences. At the same time, the SDK scans the resource IDs and resource names in the android.R file and caches them.

After obtaining the configuration, the SDK parses it into a JSON object and reads the event_bindings field, then its events field. events is an array; each element describes one kind of event and contains the Activity and the path of the element to be monitored for that event. The configuration is structured roughly as follows:

    event_bindings: {
        events: [
            {
                target_activity: "",
                event_name: "",
                event_type: "",
                path: [
                    {
                        prefix:
                        view_class:
                        index:
                        id:
                        id_name:
                    },
                    ...
                ]
            },
            ...
        ]
    }

After receiving this configuration, the SDK generates a ViewVisitor for each event entry. A ViewVisitor's job is to find all the View elements pointed to by the path array and attach the behavior monitor matching event_type to them. When the View produces the specified behavior, the monitor detects it and reports the event.

After the ViewVisitors are generated, the SDK stores them in a Map structure, which is straightforward.
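The wiring from configuration to visitors can be sketched like this (ViewVisitor.create and the grouping by Activity name are illustrative stand-ins for the SDK's actual classes):

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import org.json.JSONArray;
    import org.json.JSONException;
    import org.json.JSONObject;

    // Build ViewVisitors from the parsed configuration, grouped by target Activity.
    Map<String, List<ViewVisitor>> buildVisitors(JSONObject eventBindings) throws JSONException {
        Map<String, List<ViewVisitor>> visitorMap = new HashMap<>();
        JSONArray events = eventBindings.getJSONArray("events");
        for (int i = 0; i < events.length(); i++) {
            JSONObject event = events.getJSONObject(i);
            String activity = event.getString("target_activity");
            ViewVisitor visitor = ViewVisitor.create(      // hypothetical factory
                    event.getJSONArray("path"),            // where to find the View
                    event.getString("event_type"),         // which behavior to watch
                    event.getString("event_name"));        // what to report
            visitorMap.computeIfAbsent(activity, k -> new ArrayList<>()).add(visitor);
        }
        return visitorMap;
    }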

2. Monitor elements and report events

How does ViewVisitor monitor the behavior of elements? The answer is View.AccessibilityDelegate.

In the Android SDK, AccessibilityService provides a series of event callbacks that indicate state changes of the user interface, and we can derive accessibility classes to handle different AccessibilityEvents. Let's see what event types AccessibilityEvent defines:

    /**
     * Represents the event of clicking on a {@link android.view.View} like
     * {@link android.widget.Button}, {@link android.widget.CompoundButton}, etc.
     */
    public static final int TYPE_VIEW_CLICKED = 0x00000001;

    /**
     * Represents the event of long clicking on a {@link android.view.View} like
     * {@link android.widget.Button}, {@link android.widget.CompoundButton}, etc.
     */
    public static final int TYPE_VIEW_LONG_CLICKED = 0x00000002;

    /**
     * Represents the event of selecting an item usually in the context of an
     * {@link android.widget.AdapterView}.
     */
    public static final int TYPE_VIEW_SELECTED = 0x00000004;

    /**
     * Represents the event of setting input focus of a {@link android.view.View}.
     */
    public static final int TYPE_VIEW_FOCUSED = 0x00000008;

    /**
     * Represents the event of changing the text of an {@link android.widget.EditText}.
     */
    public static final int TYPE_VIEW_TEXT_CHANGED = 0x00000010;
    ...

Take the click event TYPE_VIEW_CLICKED as an example. When the Activity's RootView starts drawing (the onGlobalLayout callback of ViewTreeObserver.OnGlobalLayoutListener), the ViewVisitor starts looking for the specified View and sets a new AccessibilityDelegate on it. Here is how this new View.AccessibilityDelegate is written:

    private class TrackingAccessibilityDelegate extends View.AccessibilityDelegate {
        ...
        public TrackingAccessibilityDelegate(View.AccessibilityDelegate realDelegate) {
            mRealDelegate = realDelegate;
        }

        public View.AccessibilityDelegate getRealDelegate() {
            return mRealDelegate;
        }

        ...

        @Override
        public void sendAccessibilityEvent(View host, int eventType) {
            if (eventType == mEventType) {
                fireEvent(host); // event reporting
            }

            if (null != mRealDelegate) {
                mRealDelegate.sendAccessibilityEvent(host, eventType);
            }
        }

        private View.AccessibilityDelegate mRealDelegate;
    }
    ...

You can see that the SDK reports the event inside TrackingAccessibilityDelegate#sendAccessibilityEvent.
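Installing the delegate can be sketched as follows. Here findViewByPath and getOldDelegate are hypothetical stand-ins: the real SDKs resolve the View from the configured path and read the existing delegate reflectively (View.getAccessibilityDelegate() only became public in later API levels):

    import android.view.View;
    import android.view.ViewTreeObserver;
    import org.json.JSONArray;

    // When layout begins, locate the configured View and wrap its existing
    // accessibility delegate with the tracking delegate shown above.
    void install(final View rootView, final JSONArray path) {
        rootView.getViewTreeObserver().addOnGlobalLayoutListener(
                new ViewTreeObserver.OnGlobalLayoutListener() {
                    @Override
                    public void onGlobalLayout() {
                        View target = findViewByPath(rootView, path); // hypothetical path lookup
                        if (target != null) {
                            View.AccessibilityDelegate old = getOldDelegate(target);
                            target.setAccessibilityDelegate(
                                    new TrackingAccessibilityDelegate(old));
                        }
                    }
                });
    }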

So does View call sendAccessibilityEvent inside its own click handling? View.performClick is invoked when a View handles a click event. Look at the source:

    public boolean performClick() {
        final boolean result;
        final ListenerInfo li = mListenerInfo;
        if (li != null && li.mOnClickListener != null) {
            playSoundEffect(SoundEffectConstants.CLICK);
            li.mOnClickListener.onClick(this);
            result = true;
        } else {
            result = false;
        }
        sendAccessibilityEvent(AccessibilityEvent.TYPE_VIEW_CLICKED);
        return result;
    }
    ...
    public void sendAccessibilityEvent(int eventType) {
        if (mAccessibilityDelegate != null) {
            mAccessibilityDelegate.sendAccessibilityEvent(this, eventType);
        } else {
            sendAccessibilityEventInternal(eventType);
        }
    }
    ...
    public void setAccessibilityDelegate(@Nullable AccessibilityDelegate delegate) {
        mAccessibilityDelegate = delegate;
    }

From this we can see that registering an AccessibilityDelegate on a View when the RootView starts drawing lets us monitor its click events.

4.2 Difficulties and Problems of Visual Tracking

The above briefly analyzes the basic implementation of Mixpanel and Sensors SDK visual tracking. One difficulty still needs careful thought: how to uniquely identify a View in the App. What information about the View should be recorded? How do we generate a unique ID for it, keep that ID stable across different phones and across App launches, and still have it tolerate a certain degree of interface change?
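One common scheme, sketched below, is to build a path from the root: each node contributes its class name plus its index among siblings of the same class, optionally ending with the resource entry name. This is illustrative only; both SDKs use their own, more elaborate schemes:

    import android.content.res.Resources;
    import android.view.View;
    import android.view.ViewGroup;
    import android.view.ViewParent;

    // Build a stable-ish identifier like "/LinearLayout[0]/Button[1]#btn_buy".
    private String viewPath(View view) {
        StringBuilder path = new StringBuilder();
        View current = view;
        while (current != null) {
            String node = current.getClass().getSimpleName();
            ViewParent parent = current.getParent();
            if (parent instanceof ViewGroup) {
                // Index among siblings of the same class, so reordering of
                // unrelated siblings does not change the path.
                ViewGroup group = (ViewGroup) parent;
                int index = 0;
                for (int i = 0; i < group.getChildCount(); i++) {
                    View child = group.getChildAt(i);
                    if (child == current) break;
                    if (child.getClass() == current.getClass()) index++;
                }
                node += "[" + index + "]";
            }
            path.insert(0, "/" + node);
            current = (parent instanceof View) ? (View) parent : null;
        }
        if (view.getId() != View.NO_ID) {
            try {
                path.append("#").append(
                        view.getResources().getResourceEntryName(view.getId()));
            } catch (Resources.NotFoundException ignored) {
                // Dynamically generated ids have no resource entry name.
            }
        }
        return path.toString();
    }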

In addition, some developers report that while setAccessibilityDelegate can monitor View clicks on most manufacturers' models and OS versions, on some models the click events cannot be captured. Between View-ID generation and the monitoring mechanism itself, there are some doubts about the stability of this solution.

4.3 References

  • sensorsdata on GitHub, including Android, iOS, JS, Java and other SDKs: https://github.com/sensorsdata
  • Mixpanel on GitHub, including Android, iOS, JS, Java and other SDKs: https://github.com/mixpanel
  • NetEase mobile data collection and analysis blog: http://www.jianshu.com/c/ee326e36f556

5. Conclusion

Finally, a brief summary of the advantages, disadvantages, and usage scenarios of these solutions: in practice, multiple approaches should be combined to balance efficiency and reliability. The best solution is the one that fits your business.
