Handler is Android's message-processing mechanism and its standard solution for inter-thread communication. You can also think of it as a queue that the framework maintains for us on the main thread, in which messages are ordered by the delivery time (the delay) we set. If you need queue-like behavior in Android, Handler is worth considering first. This article is divided into three parts:
The first part answers common questions:
1. How many Handlers, Loopers, and MessageQueues can a single thread have at most?
2. Why doesn't Looper's infinite loop freeze the application? Does it consume a lot of resources?
3. How can the UI (for example a Dialog or Toast) be updated from a child thread, and why does the system discourage updating the UI from child threads?
4. How can the main thread access the network?
5. How should memory leaks caused by improper use of Handler be dealt with?
6. What are the application scenarios of Handler message priority?
7. When does the main thread's Looper exit? Can it be exited manually?
8. How do you determine whether the current thread is the Android main thread?
9. What is the correct way to create a Message instance?
The second part digs into the underlying mechanisms:
1. ThreadLocal
2. The epoll mechanism
3. Handler's synchronization barrier mechanism
4. Locking inside Handler
5. Synchronous execution with Handler
The third part covers applications of Handler in the system and in third-party frameworks:
1. HandlerThread
2. IntentService
3. How to build an app that doesn't crash
4. Handler in Glide

Handler source code and answers to common questions

Let's start from the official definition of Handler.
The general idea is that a Handler lets you send Messages/Runnables to a thread's message queue (MessageQueue). Each Handler instance is associated with one thread and that thread's message queue. When you create a Handler, it should be bound to a Looper (the main thread already has a Looper by default; a child thread has to create its own). The Handler delivers Messages/Runnables to that Looper's message queue and processes them on the thread the Looper runs on. The figure below shows the Handler workflow. (Figure: Handler workflow diagram.) As the figure shows, inside the thread the Looper acts as a conveyor belt and is essentially an infinite loop: it keeps taking messages out of the MessageQueue and hands each one to Handler.dispatchMessage for dispatch, while Handler.sendXXX, Handler.postXXX and similar methods put messages into the MessageQueue. The whole pattern is a producer-consumer model: messages are continuously produced and consumed, and the loop sleeps when there are no messages. MessageQueue is a priority queue built on a singly linked list, ordered by each message's delivery time (messages are always taken from the head, which is why it is called a queue). As mentioned above, when you create a Handler it should be bound to a Looper (binding here can also be understood as creating one: the main thread already has a Looper, while a child thread must create its own), so let's first look at how this is handled on the main thread:
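For reference, here is a heavily abridged sketch of ActivityThread.main(); the real method does much more and its exact signatures vary across Android versions.

```java
// Abridged sketch of ActivityThread.main() -- not the complete AOSP source.
public static void main(String[] args) {
    ...
    Looper.prepareMainLooper();                 // create the main thread's Looper and MessageQueue

    ActivityThread thread = new ActivityThread();
    thread.attach(false, startSeq);             // attach to the system (signature varies by version)

    if (sMainThreadHandler == null) {
        sMainThreadHandler = thread.getHandler();   // this is the Handler "mH"
    }

    Looper.loop();                              // start the endless message loop

    throw new RuntimeException("Main thread loop unexpectedly exited");
}
```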
You can see that in ActivityThread's main() method we first call Looper.prepareMainLooper(), then obtain the main thread's Handler via thread.getHandler(), and finally call Looper.loop(). Let's look at Looper.prepareMainLooper() first.
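Abridged from AOSP's Looper (details differ slightly between versions):

```java
public static void prepareMainLooper() {
    prepare(false);                            // the main Looper is not allowed to quit
    synchronized (Looper.class) {
        if (sMainLooper != null) {
            throw new IllegalStateException("The main Looper has already been prepared.");
        }
        sMainLooper = myLooper();
    }
}

private static void prepare(boolean quitAllowed) {
    if (sThreadLocal.get() != null) {          // at most one Looper per thread
        throw new RuntimeException("Only one Looper may be created per thread");
    }
    sThreadLocal.set(new Looper(quitAllowed));
}

private Looper(boolean quitAllowed) {
    mQueue = new MessageQueue(quitAllowed);    // the MessageQueue is created together with the Looper
    mThread = Thread.currentThread();
}
```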
As you can see, Looper.prepareMainLooper() creates the current thread's Looper and stores the instance in the thread-local variable sThreadLocal (a ThreadLocal), so each thread has its own Looper; the thread's MessageQueue is created together with the Looper. prepareMainLooper() also checks whether sMainLooper already has a value and throws an exception if it is called more than once, so the main thread can only ever have one Looper and one MessageQueue. Likewise, when Looper.prepare() is called on a child thread, prepare(true) is invoked, and calling it more than once throws the "Only one Looper may be created per thread" exception. In summary, each thread has at most one Looper and one MessageQueue, while it may use any number of Handlers.
Next, look at sMainThreadHandler = thread.getHandler() on the main thread. What getHandler() actually returns is the Handler mH.
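getHandler() simply returns mH; the H class itself is a large switch over framework messages. A heavily abridged sketch (based on AOSP; the constants and cases vary by version):

```java
final Handler getHandler() {
    return mH;
}

// ActivityThread.H, heavily abridged: handles Service, Application and other framework messages.
class H extends Handler {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case CREATE_SERVICE:
                handleCreateService((CreateServiceData) msg.obj);
                break;
            case BIND_SERVICE:
                handleBindService((BindServiceData) msg.obj);
                break;
            // ... many more cases ...
        }
    }
}
```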
mH is an inner class of ActivityThread. Looking at its handleMessage() method, you can see that this Handler processes messages related to the four major components, the Application, and so on, such as creating a Service or binding a Service.
Finally, let's look at the Looper.loop() method
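An abridged sketch (the real loop() also handles logging, observers and so on):

```java
public static void loop() {
    final Looper me = myLooper();
    if (me == null) {
        throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
    }
    final MessageQueue queue = me.mQueue;

    for (;;) {
        Message msg = queue.next();          // may block while the queue is empty
        if (msg == null) {
            return;                          // the queue is quitting
        }
        msg.target.dispatchMessage(msg);     // msg.target is the Handler that sent the message
        msg.recycleUnchecked();              // recycle the Message back into the pool
    }
}
```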
Inside loop() there is an infinite loop that keeps fetching messages from the message queue (queue.next()) and dispatching each one through its Handler (msg.target). There is no explicit registration of a Handler with a Looper: because each thread has only one Looper and one MessageQueue, a Handler created on that thread is naturally served by them, that is, by whatever called Looper.loop(). Within this endless loop, messages are continuously fetched, dispatched, and finally recycled for reuse. The target field (a Handler) in Message deserves emphasis: it is what allows each Message to find the Handler that should dispatch it, which in turn allows multiple Handlers on the same thread to work at the same time. Now let's see how things are handled on a child thread. First, create a Handler in the child thread and post a Runnable:
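A minimal example to reproduce the problem (the tag and log text are just for illustration):

```java
// Creating a Handler on a child thread without preparing a Looper first -- this crashes.
new Thread(new Runnable() {
    @Override
    public void run() {
        Handler handler = new Handler();      // throws: this thread has no Looper
        handler.post(new Runnable() {
            @Override
            public void run() {
                Log.d("HandlerDemo", "running on: " + Thread.currentThread().getName());
            }
        });
    }
}, "worker").start();
```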
After running it you can see the error log ("Can't create handler inside thread ... that has not called Looper.prepare()"): we need to call Looper.prepare() in the child thread first, which creates a Looper for that thread and thereby "associates" it with our Handler.
Add Looper.prepare() to create the Looper, and call Looper.loop() to start processing messages:
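A corrected sketch (context is assumed to be an available Context, e.g. getApplicationContext()):

```java
// Prepare a Looper on the child thread, then show a Toast from it.
new Thread(new Runnable() {
    @Override
    public void run() {
        Looper.prepare();                      // create this thread's Looper and MessageQueue
        Handler handler = new Handler(Looper.myLooper());
        handler.post(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(context, "toast from a child thread", Toast.LENGTH_SHORT).show();
            }
        });
        // remember to call Looper.myLooper().quit() later, otherwise the loop below never ends
        Looper.loop();                         // start processing messages (blocks until quit)
    }
}, "worker").start();
```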
Note that quit() should be called to terminate the message loop once all the work is done; otherwise the child thread stays in the loop waiting forever. So terminate the Looper when it is no longer needed by calling Looper.myLooper().quit(). After reading the code above you may wonder: is it really fine to update the UI (show a Toast) from a child thread? Isn't updating the UI from child threads forbidden on Android? Not exactly. The checkThread() method in ViewRootImpl checks mThread != Thread.currentThread(), and mThread is initialized in ViewRootImpl's constructor. In other words, the thread that created the ViewRootImpl must be the same thread that later triggers checkThread(); UI updates are not restricted to the main thread per se.
A few concepts are needed here. In Android, each Activity, Dialog, and Toast corresponds to a Window. Window is an abstract concept: each Window corresponds to a View and a ViewRootImpl, and Window and View are connected through ViewRootImpl, so a Window ultimately exists in the form of a View. Let's look at how the ViewRootImpl is created for a Toast: calling toast.show() eventually calls its handleShow() method.
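A heavily abridged sketch of the Toast side (based on AOSP's Toast.TN; the method signature varies across versions):

```java
// Toast.TN.handleShow(), heavily abridged: the Toast's view is added through WindowManager.
public void handleShow(IBinder windowToken) {
    if (mView != mNextView) {
        mView = mNextView;
        Context context = mView.getContext().getApplicationContext();
        ...
        mWM = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        ...
        mParams.token = windowToken;
        mWM.addView(mView, mParams);   // ends up in WindowManagerGlobal.addView()
        ...
    }
}
```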
The real implementer behind mWM (WindowManager) is WindowManagerGlobal. Its addView() method creates the ViewRootImpl and then calls root.setView(view, wparams, panelParentView); the ViewRootImpl updates the interface and completes the process of adding the Window.
setView() schedules the asynchronous refresh via requestLayout(), and it also calls checkThread() to verify that it is running on the right thread.
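checkThread() itself is tiny (abridged from AOSP's ViewRootImpl):

```java
void checkThread() {
    if (mThread != Thread.currentThread()) {
        throw new CalledFromWrongThreadException(
                "Only the original thread that created a view hierarchy can touch its views.");
    }
}
```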
In our example the ViewRootImpl is created on the child thread, so mThread is that child thread, and the UI update also happens on the child thread, so no exception is thrown. You can also refer to this article for a very detailed analysis. Similarly, the following code verifies the same point:
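A sketch of that verification (hypothetical code inside an Activity named MainActivity; the field and method names are mine):

```java
private AlertDialog alertDialog;

// The dialog's ViewRootImpl is created on the child thread, so touching the dialog again
// from the main thread later triggers checkThread().
private void showDialogFromChildThread() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            Looper.prepare();
            alertDialog = new AlertDialog.Builder(MainActivity.this)
                    .setMessage("created on a child thread")
                    .create();
            alertDialog.show();   // ViewRootImpl is created here, on the child thread
            alertDialog.hide();   // only hides the view; the Window is not removed
            Looper.loop();
        }
    }).start();
}

// Later, on the main thread:
// alertDialog.show();   // throws "Only the original thread that created a view hierarchy
//                       //         can touch its views."
```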
Call showDialog() from the child thread: first alertDialog.show() is called, then alertDialog.hide(). hide() only hides the Dialog and does nothing else (the Window is not removed). If you then call alertDialog.show() from the main thread, it throws "Only the original thread that created a view hierarchy can touch its views."
So the key to updating the UI from a thread is whether that thread is the one that created the ViewRootImpl, i.e. the thread checkThread() compares against.

How to access the network on the main thread

Add the following code before the network request:
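For example (a sketch; in practice you would normally keep the check enabled and move the work off the main thread):

```java
// Relax StrictMode's thread policy so the main thread may perform network operations.
StrictMode.setThreadPolicy(new StrictMode.ThreadPolicy.Builder()
        .permitNetwork()     // allow network access on the current (main) thread
        .build());
```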
StrictMode was introduced in Android 2.3. It detects two categories of problems: thread policy violations (ThreadPolicy, e.g. disk or network access on the main thread) and VM policy violations (VmPolicy, e.g. leaked objects). If you relax the thread policy's network check, you can perform network operations on the main thread, although this is generally not recommended. You can read more about StrictMode here. Why does the system discourage accessing the UI from child threads? Because Android's UI controls are not thread-safe: concurrent access from multiple threads can leave them in unexpected states. Then why doesn't the system simply add locks around UI access? There are two drawbacks: first, locking would make the logic of UI access significantly more complicated; second, locks block threads and would reduce the efficiency of UI access.
So the simplest and most efficient approach is a single-threaded model for UI operations (see Exploring the Art of Android Development). How does a child thread notify the main thread to update the UI? All roads lead to Handler: the messages are ultimately delivered to the main thread through a Handler, which then touches the UI.
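The usual options look like this (a minimal sketch; textView and activity are assumed placeholders):

```java
// 1. A Handler bound to the main thread's Looper
Handler mainHandler = new Handler(Looper.getMainLooper());
mainHandler.post(() -> textView.setText("updated from a child thread"));

// 2. Activity.runOnUiThread(...) -- internally posts to a main-thread Handler
activity.runOnUiThread(() -> textView.setText("updated via runOnUiThread"));

// 3. View.post(...) -- delivered on the UI thread through the view's attached Handler
textView.post(() -> textView.setText("updated via View.post"));
```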
Why doesn't Looper's infinite loop freeze the application? Does it consume a lot of resources?

From the analysis of the main thread and child threads above, we know that a Looper keeps pulling messages inside its thread. For a child thread's Looper, once its work is done you should quit it manually rather than leave it sleeping and waiting. (Quoting Gityuan:) a thread is just a piece of executable code; when that code finishes, the thread's life cycle ends and the thread exits. The main thread, however, should never run for a while and then exit on its own, so how do we keep it alive? The simple answer is to keep its code running forever, and an infinite loop guarantees it never exits. Binder threads, for example, also use an infinite loop, though in a different way: they loop reading from and writing to the Binder driver. Of course, this is not a naive busy loop; it sleeps when there are no messages. Android is driven by messages, and the user's actions all flow through this Looper loop; when the screen is touched while the loop is sleeping, the main thread is woken up to continue working.

Does the main thread's endless loop burn a lot of CPU? No. This is where the Linux pipe/epoll mechanism comes in. Briefly, when the main thread's MessageQueue has no messages, the loop blocks in the nativePollOnce() call inside queue.next(); the main thread releases the CPU and sleeps until the next message arrives or a transaction occurs, at which point it is woken up by data written to the write end of a pipe. The epoll mechanism used here is an I/O multiplexing mechanism that can monitor multiple file descriptors at once; when a descriptor becomes ready for reading or writing, the corresponding program is notified immediately to perform the read or write. It is still essentially synchronous I/O: the reads and writes themselves block. So the main thread spends most of its time asleep and does not consume much CPU.

When does the main thread's Looper exit? When the app exits: mH (the Handler) in ActivityThread receives the corresponding message and performs the exit.
If you try to quit the main thread's Looper manually, the following exception is thrown.
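Calling Looper.getMainLooper().quit() ends up in MessageQueue.quit(), which rejects it because the main queue was created with quitAllowed = false (abridged from AOSP):

```java
void quit(boolean safe) {
    if (!mQuitAllowed) {
        throw new IllegalStateException("Main thread not allowed to quit.");
    }
    synchronized (this) {
        if (mQuitting) {
            return;
        }
        mQuitting = true;
        ...
    }
}
```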
Why isn't quitting allowed? Because the main thread must not exit: once its Looper quits, the app's message loop is dead and the program has effectively hung; exiting an app should never be done this way.

Handler message processing order

While Looper runs its message loop in loop(), it executes the line msg.target.dispatchMessage(msg), where msg.target is the Handler that sent the message.
Let's take a look at the source code of dispatchMessage
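Abridged from AOSP's Handler:

```java
public void dispatchMessage(Message msg) {
    if (msg.callback != null) {               // 1. the Message's own Runnable callback
        handleCallback(msg);
    } else {
        if (mCallback != null) {              // 2. the Handler's Callback, if one was supplied
            if (mCallback.handleMessage(msg)) {
                return;                       //    returning true intercepts the message
            }
        }
        handleMessage(msg);                   // 3. the Handler's own handleMessage()
    }
}

private static void handleCallback(Message message) {
    message.callback.run();
}
```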
If the Message has a callback (msg.callback, which is actually a Runnable), only that callback is executed and dispatch ends there. A Message carrying a callback is created as follows:
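For example (a minimal sketch; handler and the log text are just for illustration):

```java
// Obtain a Message whose callback is a Runnable...
Message msg = Message.obtain(handler, new Runnable() {
    @Override
    public void run() {
        Log.d("HandlerDemo", "message callback executed");
    }
});
msg.sendToTarget();

// ...or simply post a Runnable: Handler.post() wraps it into msg.callback for you.
handler.post(() -> Log.d("HandlerDemo", "posted runnable executed"));
```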
The handleCallback method calls the run method of Runnable.
If the Message has no callback, the else branch checks whether the Handler's Callback (mCallback) is set. If it is, mCallback.handleMessage(msg) runs first, and if it returns true the dispatch ends there. A Handler constructed with a Callback looks like this:
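A minimal sketch (the what values and log text are illustrative):

```java
// A Handler built with a Callback; returning true intercepts the message so the
// Handler's own handleMessage() is never called for it.
Handler handler = new Handler(Looper.getMainLooper(), new Handler.Callback() {
    @Override
    public boolean handleMessage(Message msg) {
        Log.d("HandlerDemo", "Callback saw message " + msg.what);
        return msg.what == 1;        // intercept only messages whose what == 1
    }
}) {
    @Override
    public void handleMessage(Message msg) {
        Log.d("HandlerDemo", "Handler.handleMessage saw message " + msg.what);
    }
};
```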
Finally, the Handler's own handleMessage() is called; this is the method we usually override, and the message is processed there.

Usage scenarios: Handler.Callback gets to handle messages first. If the Callback handles a message and intercepts it (returns true), the Handler's handleMessage(msg) is never called; if the Callback handles it without intercepting it, the same message is processed by both the Callback and the Handler. We can therefore use a Callback to intercept a Handler's messages. A classic scenario is hooking ActivityThread.mH: ActivityThread has a member variable mH, a Handler, which is extremely important, and almost all plugin frameworks use this technique.

Execution logic of Handler.post(Runnable r): let's look at how the commonly used Handler.post(Runnable r) is executed. Does it create a new thread? No: the Runnable is simply invoked via its run() method; no thread is started at all. The source code is as follows:
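Abridged from AOSP's Handler:

```java
public final boolean post(Runnable r) {
    return sendMessageDelayed(getPostMessage(r), 0);   // no new thread is started
}

private static Message getPostMessage(Runnable r) {
    Message m = Message.obtain();
    m.callback = r;                                    // the Runnable becomes msg.callback
    return m;
}
```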
So the Runnable is wrapped into a Message; in other words, the Runnable becomes the Message's callback and is therefore executed with priority. How does Handler switch threads? The principle is simple: threads in the same process share memory. A child thread enqueues a message via handler.sendXXX or handler.postXXX; the target thread's Looper.loop() then takes the message out of the queue and hands it to handler.dispatchMessage() for dispatch and processing on that thread. How should memory leaks caused by improper use of Handler be handled?
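The usual fix is a static Handler class holding a weak reference to the Activity, plus clearing pending messages in onDestroy(). A sketch (MyActivity is a placeholder name):

```java
// A Handler that does not implicitly hold its outer Activity.
private static class SafeHandler extends Handler {
    private final WeakReference<MyActivity> activityRef;

    SafeHandler(MyActivity activity) {
        super(Looper.getMainLooper());
        activityRef = new WeakReference<>(activity);
    }

    @Override
    public void handleMessage(Message msg) {
        MyActivity activity = activityRef.get();
        if (activity == null || activity.isFinishing()) {
            return;                          // the Activity is gone; drop the message
        }
        // ... update the activity's UI here ...
    }
}

// In the Activity:
// private final SafeHandler handler = new SafeHandler(this);
//
// @Override
// protected void onDestroy() {
//     handler.removeCallbacksAndMessages(null);   // drop any pending messages and callbacks
//     super.onDestroy();
// }
```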
For a detailed analysis of this leak and its solutions, refer to this article. There is another key point: if a delayed message is still pending when the screen is closed, what happens to it? In a test, I sent a message delayed by 10 seconds right after opening a screen and then closed the screen; the message was still delivered to the handleMessage() of the Handler (created as an anonymous inner class) and the log was printed. Because of the reference chain MessageQueue -> Message -> Handler -> Activity, neither the Handler nor the Activity can be garbage collected until the message has been processed.

Creating a Message instance correctly
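For example (handler is assumed to exist):

```java
// Prefer Message.obtain() / Handler.obtainMessage() over new Message():
// they reuse instances from the global pool (sPool) instead of allocating each time.
Message msg = Message.obtain();     // taken from the pool when one is available
msg.what = 1;
msg.obj = "payload";
handler.sendMessage(msg);

// The equivalent convenience form bound to a specific Handler:
handler.obtainMessage(1, "payload").sendToTarget();
```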
All messages are recycled into sPool, following the flyweight design pattern.

In-depth Handler questions

ThreadLocal: ThreadLocal gives each thread its own copy of a variable, so that no two threads touch the same object at the same time, thereby isolating data that would otherwise be shared between threads. If you were asked to design ThreadLocal yourself, with the goal that different threads see different values of a variable V, the most direct approach would be a Map whose key is the thread and whose value is that thread's V, held inside ThreadLocal. Java's actual implementation also uses a Map, called ThreadLocalMap, but it is held by Thread, not by ThreadLocal: the Thread class has a private field threadLocals of type ThreadLocalMap, and the keys of ThreadLocalMap are the ThreadLocal objects. The simplified structure looks like this:
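A simplified sketch of the JDK design (not the real implementation; the real ThreadLocalMap is an open-addressed hash table whose entries weakly reference their ThreadLocal key):

```java
// Simplified: Thread owns the map; ThreadLocal is only the key and the access proxy.
class Thread {
    ThreadLocal.ThreadLocalMap threadLocals;      // each thread carries its own map
}

class ThreadLocal<T> {
    public T get() {
        // look up the value in the map owned by the *current* thread, keyed by this ThreadLocal
        ThreadLocalMap map = Thread.currentThread().threadLocals;
        ThreadLocalMap.Entry e = (map == null) ? null : map.getEntry(this);
        return (e == null) ? null : (T) e.value;
    }

    public void set(T value) {
        ThreadLocalMap map = Thread.currentThread().threadLocals;
        if (map == null) {
            map = Thread.currentThread().threadLocals = new ThreadLocalMap();
        }
        map.set(this, value);
    }

    static class ThreadLocalMap {
        // in the real JDK, Entry extends WeakReference<ThreadLocal<?>>: the key is weak,
        // while the value is strongly referenced by the Entry
        static class Entry extends java.lang.ref.WeakReference<ThreadLocal<?>> {
            Object value;
            Entry(ThreadLocal<?> key, Object value) { super(key); this.value = value; }
        }

        Entry getEntry(ThreadLocal<?> key) { /* hash lookup omitted */ return null; }
        void set(ThreadLocal<?> key, Object value) { /* hash insert omitted */ }
    }
}
```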
In Java's implementation, ThreadLocal is just a proxy/utility class and holds no thread-related data itself; all of that data lives in Thread. From the standpoint of data affinity it is more reasonable for the ThreadLocalMap to belong to Thread, and ThreadLocal.get() simply fetches the ThreadLocalMap owned by the current thread. Another reason is to avoid memory leaks: with the naive design, the Map held by ThreadLocal would keep references to Thread objects, so as long as the ThreadLocal object existed, those Thread objects could never be collected. A ThreadLocal often lives longer than the threads that use it, so that design leaks easily. In Java's implementation, Thread holds the ThreadLocalMap, and the map's references to the ThreadLocal keys are weak, so once a Thread object can be collected, its ThreadLocalMap can be collected with it. The Java implementation looks more complicated, but it is safer.

ThreadLocal and memory leaks: things are still not perfect, though. Using ThreadLocal with a thread pool can leak memory, because pooled threads live very long, often for the life of the program, which means the ThreadLocalMap held by such a Thread is never collected. The Entry in ThreadLocalMap only weakly references the ThreadLocal key, so once the ThreadLocal itself is no longer reachable it can be collected; but the value in the Entry is strongly referenced by the Entry, so even when the value is no longer needed it cannot be collected, and memory leaks. The fix is to release the entry manually with try/finally:
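For example (a sketch; the pool and SimpleDateFormat are just illustrative):

```java
ExecutorService pool = Executors.newFixedThreadPool(4);
ThreadLocal<SimpleDateFormat> dateFormat =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

pool.execute(() -> {
    try {
        System.out.println(dateFormat.get().format(new Date()));
    } finally {
        dateFormat.remove();   // release the entry so the pooled thread does not hold it forever
    }
});
```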
The ThreadLocal content above is mainly referenced from here.

The epoll mechanism

How epoll is used by Handler: when the main thread's MessageQueue has no messages, the loop blocks in nativePollOnce() inside queue.next(), which ultimately blocks in epoll_wait(). The main thread then releases the CPU and sleeps until the next message arrives or a transaction occurs, at which point it is woken up by data written to the write end of a pipe. epoll is an I/O multiplexing mechanism that can monitor multiple file descriptors at once; when a descriptor becomes ready for reading or writing, the corresponding program is notified immediately to perform the read or write. It is still essentially synchronous I/O: the reads and writes themselves block. So the main thread spends most of its time asleep and does not consume much CPU. There is a good in-depth article on I/O multiplexing, a detailed explanation of select, poll and epoll; its closing paragraphs summarize the key differences:
As for why Handler sits on top of epoll, I think it is because epoll is more efficient: with select/poll the kernel only scans all the monitored file descriptors after the process calls the corresponding method, whereas epoll registers file descriptors in advance via epoll_ctl(). Once a descriptor becomes ready, the kernel activates it through a callback-like mechanism, and the process is notified as soon as it calls epoll_wait(). (The traversal of file descriptors is replaced by readiness callbacks; that is the appeal of epoll.)

Handler's synchronization barrier mechanism

What if an urgent message needs to be processed first? This is really a question of architecture: designing for the common case and for the special case. You might think of sendMessageAtFrontOfQueue(), but there is more to it than that: Handler has a synchronization barrier mechanism to let asynchronous messages execute with priority. postSyncBarrier() posts a synchronization barrier and removeSyncBarrier() removes it.

A synchronization barrier can be understood as blocking the execution of synchronous messages. The main thread's Looper keeps calling MessageQueue.next() to take the message at the head of the queue, execute it, and then take the next one. When next() finds that the head of the queue is a synchronization barrier message, it traverses the whole queue looking only for messages with the asynchronous flag set. If it finds an asynchronous message, it takes it out and executes it; otherwise next() blocks and the main thread idles. So as long as a synchronization barrier sits at the head of the queue, every synchronous message behind it is held back until the barrier message is removed from the queue; the main thread will not process the synchronous messages behind the barrier. By default all messages are synchronous; a message only becomes asynchronous if the asynchronous flag is set explicitly. Also, synchronization barrier messages can only be posted internally; the interface is not open to apps.

You will find that all the message-related code in Choreographer explicitly sets the asynchronous flag, so those operations are not affected by synchronization barriers. The likely reason is to guarantee that, when the screen-refresh signal arrives, the app can start traversing and drawing the View tree as soon as possible. Everything in the Choreographer pipeline is an asynchronous message, which keeps Choreographer running smoothly and lets doTraversal execute (doTraversal → performTraversals performs the View measure, layout, and draw passes); any synchronous messages posted during this window must wait until after doTraversal. Without the barrier, if the main thread had many messages queued (sorted by timestamp), the View-tree traversal and drawing work could be forced to wait in line and be delayed.
It could then happen that the computation of the frame's data only starts when the frame is almost over; even if that computation itself takes less than 16.6 ms, the frame is still dropped. Can the synchronization barrier guarantee that the traversal and drawing work starts as soon as the screen-refresh signal arrives? It can only do its best: the barrier is posted to the message queue when scheduleTraversals() is called, i.e. only once some View has requested a refresh, so only synchronous messages posted after that moment are held back. Messages that entered the queue before scheduleTraversals() are still taken out and executed in order first.

Here is a more detailed partial analysis. WindowManager keeps track of every Activity's DecorView and ViewRootImpl. As mentioned earlier, ViewRootImpl is created in WindowManagerGlobal.addView(), which then calls its setView() method with the DecorView as a parameter. So let's see what ViewRootImpl does.
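A heavily abridged sketch of ViewRootImpl.setView(); the real method is several hundred lines long and only the parts relevant here are kept:

```java
public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView) {
    synchronized (this) {
        if (mView == null) {
            mView = view;
            ...
            // Schedule the first layout before adding the window, so a relayout
            // happens before any other event can touch the view hierarchy.
            requestLayout();
            ...
            res = mWindowSession.addToDisplay(...);   // add the window through WMS
            ...
            view.assignParent(this);                  // the DecorView's parent becomes this ViewRootImpl
            ...
        }
    }
}
```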
The assignParent of DecorView is called in the setView() method
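assignParent() is declared in View (abridged from AOSP):

```java
void assignParent(ViewParent parent) {
    if (mParent == null) {
        mParent = parent;
    } else if (parent == null) {
        mParent = null;
    } else {
        throw new RuntimeException("view " + this + " being added, but"
                + " it already has a parent");
    }
}
```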
The parameter is a ViewParent, and ViewRootImpl implements the ViewParent interface, so this is where DecorView and ViewRootImpl are bound together. The root layout of every Activity is the DecorView, and the DecorView's parent is the ViewRootImpl, so when a child View calls invalidate() and similar methods, the request walks up through the parents until it reaches the ViewRootImpl. View refreshing is therefore really controlled by ViewRootImpl: even when a tiny View requests a redraw, the request goes up to ViewRootImpl layer by layer, the View tree is then traversed down to the View that needs redrawing, and its onDraw() is called. The View.invalidate() path for redrawing ultimately ends in ViewRootImpl.scheduleTraversals(), and requestLayout() is also called inside ViewRootImpl.setView().
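Abridged from AOSP's ViewRootImpl (the exact home of postSyncBarrier() has moved between versions):

```java
@Override
public void requestLayout() {
    if (!mHandlingLayoutInLayoutRequest) {
        checkThread();              // verify we are on the thread that created this ViewRootImpl
        mLayoutRequested = true;
        scheduleTraversals();
    }
}

void scheduleTraversals() {
    if (!mTraversalScheduled) {
        mTraversalScheduled = true;
        // post a synchronization barrier so pending synchronous messages cannot delay drawing
        mTraversalBarrier = mHandler.getLooper().getQueue().postSyncBarrier();
        // the traversal itself runs as an asynchronous CALLBACK_TRAVERSAL via Choreographer
        mChoreographer.postCallback(
                Choreographer.CALLBACK_TRAVERSAL, mTraversalRunnable, null);
        ...
    }
}
```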
In the end scheduleTraversals() is called, and it is the key to screen refresh. In fact, once the onCreate–onResume life cycle has completed, the DecorView is bound to a newly created ViewRootImpl object; at the same time a "traverse the View tree" task, i.e. drawing the View tree, is scheduled for execution, and the DecorView's parent is set to the ViewRootImpl. That is why we cannot get a View's width and height in onCreate–onResume: the interface is only drawn after onResume. You can refer to my earlier article for that analysis. For a series of analyses of ViewRootImpl, scheduleTraversals(), and the screen refresh mechanism you can refer to this article; much of the content here, including the synchronization-barrier analysis, is based on it. Choreographer's main job is to coordinate the timing of animation, input, and drawing: it receives timing pulses (such as vertical sync) from the display subsystem and then schedules part of the rendering work for the next frame. The frame rate can be monitored through Choreographer.getInstance().postFrameCallback().
The callback is registered in the Application, for example:
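A sketch of such a monitor (the threshold, tag and class name are illustrative):

```java
public class App extends Application {

    @Override
    public void onCreate() {
        super.onCreate();
        Choreographer.getInstance().postFrameCallback(new Choreographer.FrameCallback() {
            private long lastFrameTimeNanos = 0;

            @Override
            public void doFrame(long frameTimeNanos) {
                if (lastFrameTimeNanos != 0) {
                    long intervalMs = (frameTimeNanos - lastFrameTimeNanos) / 1_000_000;
                    if (intervalMs > 17) {   // longer than one vsync period: frame(s) dropped
                        Log.w("FrameMonitor", "frame interval: " + intervalMs + " ms");
                    }
                }
                lastFrameTimeNanos = frameTimeNanos;
                Choreographer.getInstance().postFrameCallback(this);   // keep monitoring
            }
        });
    }
}
```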
Causes of dropped frames: there are generally two. Either traversing the View tree and computing the screen data takes longer than 16.6 ms, or the main thread has been busy with other time-consuming messages, delaying the View-tree traversal so that it misses the 16.6 ms window before the next frame.

Handler lock related issues

Since messages from multiple Handlers can end up in the same MessageQueue (and each Handler may be sending from a different thread), how is thread safety guaranteed internally? Handler.sendXXX and Handler.postXXX all eventually call MessageQueue's enqueueMessage() method. The source code is as follows:
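Abridged from AOSP's MessageQueue (the wake-up conditions are simplified here):

```java
boolean enqueueMessage(Message msg, long when) {
    ...
    synchronized (this) {              // one lock per MessageQueue guards every insertion
        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            // new head of the queue; wake the loop if it is blocked in next()
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // walk the singly linked list and insert ordered by msg.when
            needWake = false;          // (the real code also wakes for async messages behind a barrier)
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
            }
            msg.next = p;
            prev.next = msg;
        }
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}
```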
It relies on the synchronized keyword to guarantee thread safety. MessageQueue.next() takes the same synchronized lock when messages are removed, and insertion is locked as well. This question is not hard; it just tests whether you have read the source code.

Synchronous execution with Handler

How can we make handler.post() wait until the posted task has executed before continuing? Handler has a method for exactly this, runWithScissors().
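runWithScissors() is a hidden API, so it cannot be called from the SDK directly, but the idea behind it is roughly this (a simplified sketch, not the real implementation, which also supports a timeout):

```java
// "Post and wait": the caller blocks until the task has run on the Handler's thread.
static final class BlockingRunnable implements Runnable {
    private final Runnable task;
    private boolean done;

    BlockingRunnable(Runnable task) {
        this.task = task;
    }

    @Override
    public void run() {
        try {
            task.run();                 // executes on the Handler's thread
        } finally {
            synchronized (this) {
                done = true;
                notifyAll();            // wake up the waiting caller
            }
        }
    }

    boolean postAndWait(Handler handler) {
        if (!handler.post(this)) {
            return false;               // the Looper is quitting
        }
        synchronized (this) {
            while (!done) {
                try {
                    wait();             // block the calling thread until the task has run
                } catch (InterruptedException e) {
                    // ignore and keep waiting, as the hidden implementation does
                }
            }
        }
        return true;
    }
}
```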
Some applications of Handler in the system and in third-party frameworks

HandlerThread: HandlerThread extends Thread and is, as the name suggests, a combination of Handler and Thread. It is well encapsulated and safe to use, and it uses synchronized internally to guarantee thread safety, for example in getLooper():
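Abridged from AOSP's HandlerThread:

```java
@Override
public void run() {
    mTid = Process.myTid();
    Looper.prepare();                   // the Looper can only exist once the thread is running
    synchronized (this) {
        mLooper = Looper.myLooper();
        notifyAll();                    // wake up anyone blocked in getLooper()
    }
    Process.setThreadPriority(mPriority);
    onLooperPrepared();
    Looper.loop();
    mTid = -1;
}

public Looper getLooper() {
    if (!isAlive()) {
        return null;
    }
    synchronized (this) {
        // block until run() has created the Looper
        while (isAlive() && mLooper == null) {
            try {
                wait();
            } catch (InterruptedException e) {
            }
        }
    }
    return mLooper;
}
```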
The Looper can only be created inside the thread's run() method, after the thread has started, and is then assigned to mLooper; the wait() in getLooper() therefore blocks until the Looper has been created. getLooper() is public because its job is to hand a fully created Looper to outside callers.

IntentService: a quick look at the source shows another Handler application. The Handler's handleMessage() eventually calls back into onHandleIntent().
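Abridged from AOSP's IntentService:

```java
// Each Intent is handled serially on the HandlerThread; the service stops itself afterwards.
private final class ServiceHandler extends Handler {
    public ServiceHandler(Looper looper) {
        super(looper);
    }

    @Override
    public void handleMessage(Message msg) {
        onHandleIntent((Intent) msg.obj);   // the callback we override
        stopSelf(msg.arg1);                 // stop once this start id is the most recent one
    }
}

@Override
public void onCreate() {
    super.onCreate();
    HandlerThread thread = new HandlerThread("IntentService[" + mName + "]");
    thread.start();
    mServiceLooper = thread.getLooper();
    mServiceHandler = new ServiceHandler(mServiceLooper);
}
```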
How to build an app that doesn't crash: please refer to my earlier article on that.

Handler in Glide: Glide should be familiar to everyone. We all know that Glide controls life cycles (if this is new to you, read an analysis of Glide; the principle is the same as LiveData's) by adding an invisible Fragment to the Activity or Fragment and then tracking that Fragment's life cycle through the FragmentManager. Below is an excerpt of the Glide code that adds the Fragment:
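Paraphrased and abridged from Glide's RequestManagerRetriever (names and structure vary by Glide version); the step numbers in the comments are the ones referred to in the paragraphs below.

```java
RequestManagerFragment getRequestManagerFragment(final android.app.FragmentManager fm) {
    // 1. is the invisible fragment already attached to the FragmentManager?
    RequestManagerFragment current =
            (RequestManagerFragment) fm.findFragmentByTag(FRAGMENT_TAG);
    if (current == null) {
        // 2. is there one that was committed but has not been attached yet?
        current = pendingRequestManagerFragments.get(fm);
        if (current == null) {
            // 3. create a new invisible fragment
            current = new RequestManagerFragment();
            // 4. remember it in the pending map
            pendingRequestManagerFragments.put(fm, current);
            // 5. commit the add transaction (this does NOT attach the fragment immediately)
            fm.beginTransaction().add(current, FRAGMENT_TAG).commitAllowingStateLoss();
            // 6. post a message that will later remove the entry from the pending map
            handler.obtainMessage(ID_REMOVE_FRAGMENT_WAITER, fm).sendToTarget();
        }
    }
    return current;
}
```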
After reading that code you may wonder: the Fragment has already been added via the FragmentManager, so why does Glide still need an extra Map to cache the pending Fragment?
The answer is simple once you understand Handler: after step 5, the Fragment has not actually been added to the FragmentManager yet; only an event that will add the Fragment has been posted. Let's look at the code:
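Roughly, the support-library FragmentManager schedules pending transactions like this (abridged and paraphrased; details vary across versions):

```java
// The transaction is not executed here; it is merely posted to the host's main-thread Handler.
private void scheduleCommit() {
    synchronized (this) {
        boolean pendingReady = mPendingActions != null && mPendingActions.size() == 1;
        if (pendingReady) {
            mHost.getHandler().removeCallbacks(mExecCommit);
            mHost.getHandler().post(mExecCommit);   // mExecCommit runs execPendingActions()
        }
    }
}
```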
Adding a Fragment eventually reaches FragmentManagerImpl's scheduleCommit() method, and you can see that it just posts an event through a Handler. This explains why the Fragment is not in the FragmentManager immediately after step 5, and why the Map cache is needed to record that a Fragment has already been requested. Step 6 then posts the message that removes the entry from the Map cache, and because the Handler processes messages in order, by the time that message runs the Fragment really has been added.

Summary

This article did not analyze every line of source code in great detail; instead it expanded from the Handler mechanism to its applications throughout the system and in some third-party frameworks. I hope it is helpful to you. If you liked it, please give it a like~