Detailed explanation of Java thread mechanism and thread scheduling principle for Android performance optimization


This article is reprinted from the WeChat public account "Android Development Programming", the author is Android Development Programming. Please contact the Android Development Programming public account for reprinting this article.

Preface

In daily work, improper use of multithreading leads to problems such as corrupted data, poor performance (sometimes worse than running single-threaded), or deadlocks that hang the program, so mastering multithreading is essential;

Threads have many advantages:

1. Improve the utilization efficiency of multiple processors;

2. Simplify business function design;

3. Implement asynchronous processing;

Risks of multithreading:

1. Thread safety of shared data;

2. Activity issues during multi-threaded execution;

3. Performance loss caused by multithreading;

Compared with other topics, multithreading has a real learning threshold and is harder to master;

We understand the advantages of threads and are aware of their risks, but controlling those risks is not easy;

This article explains threads from the basic concepts through to concurrency models;

1. What is a thread?

1. Introduction to threads

  • A thread is the smallest unit that can be executed independently in a process and is also the basic unit for CPU resource allocation;
  • A process is the basic condition for a program to apply for resources from the operating system. A process can contain multiple threads. Threads in the same process can share resources in the process, such as memory space and file handles.
  • The operating system allocates resources to processes, but CPU resources are special in that they are allocated to threads. The CPU resources mentioned here are CPU time slices.
  • The relationship between processes and threads is like the relationship between a restaurant and its employees. The restaurant provides services to customers, and the specific way of providing services is achieved by individual employees.
  • The role of a thread is to perform a specific task, which can be downloading files, loading pictures, drawing interfaces, etc.
  • Next, we will look at the four properties, six methods, and six states of threads;

2. Four attributes of threads

A thread has four attributes: number (id), name, category, and priority. Some of these attributes are inherited. Let's look at what each attribute does and how thread inheritance works.

① Number

The thread number (id) is used to identify different threads, and each thread has a different number;

Note: the id cannot be used as a unique identifier. After a thread with a certain number ends, the number may be reused by a subsequently created thread. The id is a read-only attribute and cannot be modified.

②Name

  • Each thread has its own name. The default value of the name is Thread-thread number, such as Thread-0;
  • In addition to the default value, we can also set a name for the thread to distinguish each thread in our own way;
  • Function: Setting a name for a thread allows us to quickly locate the problem using the name of the thread when a problem occurs in a thread

③Category

  • The thread category (daemon) is divided into daemon threads and user threads. We can set the thread as a daemon thread through setDaemon(true);
  • When the JVM is about to exit, it will consider whether all user threads have been executed, and if so, it will exit;
  • For daemon threads, the JVM does not consider whether it has completed execution when exiting;
  • Function: Daemon threads are usually used to perform unimportant tasks, such as monitoring the operation of other threads. The GC thread is a daemon thread;
  • Note: setDaemon() must be set before the thread is started, otherwise the JVM will throw an IllegalThreadStateException;
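As an illustration of the daemon rules above, here is a minimal sketch (the class name `DaemonDemo` and its method are ours, invented for this example):

```java
public class DaemonDemo {
    public static boolean markAsDaemon() {
        Thread watcher = new Thread(() -> {
            try {
                // Background monitoring task: the JVM will NOT wait for this
                // thread when all user threads have finished.
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException ignored) { }
        });
        watcher.setDaemon(true); // must be called BEFORE start()
        watcher.start();
        // Calling setDaemon(true) here, after start(), would throw
        // IllegalThreadStateException.
        return watcher.isDaemon();
    }

    public static void main(String[] args) {
        System.out.println("daemon? " + markAsDaemon());
    }
}
```

When `main` (a user thread) returns, the JVM exits even though the daemon thread is still sleeping.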

④ Priority

Function: The thread priority is used to indicate which thread the application wants to run first. The thread scheduler will decide which thread to run first based on this value.

⑤Value range

The thread priority in Java ranges from 1 to 10, with a default value of 5. The following three priority constants are defined in Thread;

  • Minimum priority: MIN_PRIORITY = 1;
  • Default priority: NORM_PRIORITY = 5;
  • Highest priority: MAX_PRIORITY = 10;

Note: the priority is only a hint. The thread scheduler may use it merely as a reference value and will not necessarily execute threads in the priority order we set;
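A tiny sketch of the three priority constants and `setPriority()` (the class name `PriorityDemo` is ours; remember the value is only a hint to the scheduler):

```java
public class PriorityDemo {
    public static int hinted() {
        Thread worker = new Thread(() -> { });
        // Only a hint: the scheduler may still run lower-priority threads first.
        worker.setPriority(Thread.MAX_PRIORITY);
        return worker.getPriority();
    }

    public static void main(String[] args) {
        System.out.println(Thread.MIN_PRIORITY);  // 1
        System.out.println(Thread.NORM_PRIORITY); // 5
        System.out.println(hinted());             // 10
    }
}
```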

⑥ Thread starvation

Improper use of priorities can cause some threads to never execute, a condition known as thread starvation;

⑦Inheritance

Thread inheritance means that the category (daemon flag) and priority attributes are inherited: the initial values of these two attributes in a new thread are taken from the thread that creates it.

If daemon thread A with a priority of 5 starts thread B, then thread B is also a daemon thread with a priority of 5;

At this time, we call thread A the parent thread of thread B, and thread B the child thread of thread A;
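The parent/child inheritance described above can be verified directly (the class name `InheritanceDemo` is ours; the child's observed values are captured in volatile fields):

```java
public class InheritanceDemo {
    // Values observed inside the parent thread when it creates the child.
    static volatile boolean childIsDaemon;
    static volatile int childPriority;

    public static String observeChild() throws InterruptedException {
        Thread parent = new Thread(() -> {
            // The child is created inside the parent, so it inherits the
            // parent's daemon flag and priority as its initial values.
            Thread child = new Thread(() -> { });
            childIsDaemon = child.isDaemon();
            childPriority = child.getPriority();
        });
        parent.setDaemon(true);
        parent.setPriority(3);
        parent.start();
        parent.join();
        return childIsDaemon + " " + childPriority;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observeChild()); // true 3
    }
}
```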

3. Six important methods of threads

There are six common methods of threads, which are three non-static methods start(), run(), join() and three static methods currentThread(), yield(), sleep();

Let's take a look at the effects and precautions of these six methods.

①start()

  • Function: The function of the start() method is to start the thread;
  • Note: This method can only be called once. Calling it again will not run the thread a second time; instead it throws an IllegalThreadStateException;

② run()

  • Function: The run() method contains the specific logic of the task. This method is called by the JVM. Generally, developers do not need to call this method directly.
  • Note: If you call run() directly, it executes on the current thread like an ordinary method; if the thread is also started via start(), the task logic ends up running twice.

③ join()

  • Function: The join() method is used to wait for other threads to finish executing; if thread A calls the join() method of thread B, thread A will enter a waiting state until thread B finishes running;
  • Note: The waiting state caused by the join() method can be interrupted, so calling this method requires catching the interrupt exception.

④Thread.currentThread()

  • Function: The currentThread() method is a static method used to obtain the thread that executes the current method;
  • We can call Thread.currentThread() in any method to get the current thread and set its name, priority and other properties;

⑤Thread.yield()

  • Function: The yield() method is a static method used to make the current thread give up its use of the processor, which is equivalent to lowering the thread priority;
  • Calling this method is like saying to the thread scheduler: "If other threads want processor resources, give them, otherwise I will continue to use them";
  • Note: This method does not necessarily put the thread into a suspended state;

⑥ Thread.sleep(ms)

Function: The sleep(ms) method is a static method used to make the current thread sleep (pause) for a specified time.
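A quick sketch of the static methods above (the class name `CurrentThreadDemo` and its helper methods are ours):

```java
public class CurrentThreadDemo {
    public static String nameOfCurrent() {
        // Thread.currentThread() returns the thread executing this method,
        // whichever thread that happens to be.
        return Thread.currentThread().getName();
    }

    public static long sleepAtLeast(long ms) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(ms); // pauses the CURRENT thread for roughly ms milliseconds
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(nameOfCurrent()); // "main" when run directly
        System.out.println(sleepAtLeast(50)); // ~50 or slightly more
        Thread.yield(); // hint: give up the processor if other threads want it
    }
}
```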

4. Six states of threads

①Thread life cycle

The life cycle of a thread can not only be triggered by the developer, but also affected by other threads. The following is a schematic diagram of the transition between various states of a thread;

We can get the state of the thread through Thread.getState(), which returns an enumeration class Thread.State;

A thread has 6 states: new, runnable, blocked, waiting, timed waiting, and terminated. Let's look at the transitions between these 6 states;

New state: When a thread is created but not started, it is in the new (NEW) state;

②Runnable state: When we call the start() method of a thread, the thread enters the runnable state, which is divided into the ready state and the running state;

③ Ready state: A thread in the ready state can be scheduled by the thread scheduler; once scheduled, it changes from the ready state to the running state. A thread in the ready state is also called an active thread. Calling yield() may change a thread from the running state back to the ready state.

④Running state: The running state means that the thread is running, that is, the processor is executing the run() method of the thread;

⑤ Blocked state: A thread enters the blocked state when it initiates a blocking I/O operation, or when it applies for a lock held by another thread, for example on entering a synchronized method or code block;

⑥ Waiting state: After executing certain methods, a thread must wait for another thread to complete a specific operation; the thread then enters the waiting state;

⑦ Entering the waiting state: the following methods put a thread into the waiting state;

  • Object.wait()
  • LockSupport.park()
  • Thread.join()

Back to runnable: the following methods move a thread from the waiting state to the runnable state; this transition is also called wakeup;

  • Object.notify()
  • Object.notifyAll()
  • LockSupport.unpark()

⑧Limited time waiting state

  • The difference between the timed waiting state (TIMED_WAITING) and the waiting state is that the timed waiting state is to wait for a period of time, and when the time is up, it will be converted to the runnable state;
  • The following methods can make the thread enter a time-limited waiting state. The ms, ns, and time parameters in the following methods represent milliseconds, nanoseconds, and absolute time respectively;
  • Thread.sleep(ms);
  • Thread.join(ms);
  • Object.wait(ms);
  • LockSupport.parkNanos(ns);
  • LockSupport.parkUntil(time);

⑨ Terminated state

When the thread's task is completed or an exception occurs during task execution, the thread is in the TERMINATED state.
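The transitions above can be observed directly with Thread.getState(), which returns the Thread.State enum. A small deterministic sketch (the class name `StateDemo` is ours):

```java
public class StateDemo {
    public static Thread.State[] observeStates() throws InterruptedException {
        final Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(); // parks the thread: WAITING until notified
                } catch (InterruptedException ignored) { }
            }
        });
        Thread.State created = t.getState();          // NEW: created, not started
        t.start();
        while (t.getState() != Thread.State.WAITING) {
            Thread.sleep(1);                          // poll until t reaches wait()
        }
        Thread.State waiting = t.getState();          // WAITING
        synchronized (lock) { lock.notify(); }        // wake it up
        t.join();
        Thread.State done = t.getState();             // TERMINATED
        return new Thread.State[] { created, waiting, done };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observeStates()) System.out.println(s);
        // NEW, WAITING, TERMINATED
    }
}
```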

2. The principle of thread scheduling

A brief introduction to the Java memory model, cache, and Java thread scheduling mechanism related to thread scheduling principles;

1. Java's memory model

  • The Java memory model specifies that all variables are stored in the main memory, and each thread has its own working memory;
  • JVM divides the memory into several parts, among which the method area and heap memory area are shared by threads;

2. Cache

① Introduction to Cache

The processing power of modern processors far exceeds the access rate of main memory (DRAM): in the time main memory completes a single read/write operation, the processor can execute hundreds of instructions.

In order to bridge the gap between the processor and the main memory, hardware designers added a cache between the main memory and the processor;

When the processor performs memory read and write operations, it does not deal directly with the main memory, but through the cache;

A cache is equivalent to a very small hash table implemented by hardware. The key of this hash table is the memory address of an object, and the value can be a copy of the memory data or the data to be written to the memory.

② Cache internal structure

From the internal structure point of view, the cache is equivalent to a chained hash table, which contains several buckets, each bucket contains several cache entries;

③Cache entry structure

Cache entries can be further divided into three parts: Tag, Data Block and Flag;

Tag: contains part of the memory address (the high-order bits) corresponding to the data in the cache line;

Data Block: also called the cache line, the smallest unit of data exchanged between the cache and main memory; it can hold data read from memory or data to be written to memory;

Flag: indicates the status of the corresponding cache line;

3. Java thread scheduling mechanism

At any moment, a CPU core can execute only one thread's machine instructions; each thread can run its instructions only after obtaining the right to use the CPU;

That is, at any time, only one thread occupies the CPU and is in a running state;

Multithreaded concurrent operation actually means that multiple threads take turns to obtain the right to use the CPU and perform their respective tasks;

The JVM is responsible for thread scheduling, which allocates CPU usage rights to multiple threads according to a specific mechanism;

Thread scheduling models are divided into two categories: time-sharing scheduling model and preemptive scheduling model;

①Time-sharing scheduling model

The time-sharing scheduling model allows all threads to take turns to obtain the right to use the CPU, and evenly distributes the time slice that each thread occupies the CPU;

② Preemptive Scheduling Model

  • JVM uses a preemptive scheduling model, that is, the thread with the highest priority is allowed to occupy the CPU first. If the threads have the same priority, a thread is randomly selected and allowed to occupy the CPU.
  • That is, if we start multiple threads at the same time, there is no guarantee that they can take turns to get equal time slices;
  • If our program wants to intervene in the thread scheduling process, the easiest way is to set a priority for each thread;

3. Detailed explanation of thread safety issues

The thread safety issue does not mean that threads themselves are unsafe, but that interleaved operations between multiple threads can corrupt shared data;

Next, we will look at the race conditions related to thread safety and the three points to ensure in achieving thread safety: atomicity, visibility, and orderliness;

① Atomicity

  • "Atomic" literally means indivisible. An operation that accesses shared variables is atomic if, from the perspective of any thread other than the one executing it, the operation is indivisible;
  • Indivisible means that to other threads the operation has only two observable states, not started and finished; its intermediate state is never visible;
  • Atomic operations on the same set of shared variables cannot be interleaved. This rules out one thread reading or updating the shared variables while another thread is mid-operation, which would otherwise produce dirty reads and lost updates;
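A classic non-atomic operation is `count++` (a read-modify-write). A minimal sketch of making it atomic with a lock (the class name `AtomicityDemo` is ours):

```java
public class AtomicityDemo {
    private static int safeCount = 0;
    private static final Object lock = new Object();

    public static int incrementSafely(int threads, int perThread) throws InterruptedException {
        safeCount = 0;
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    // Without the lock, count++ is three steps (read, add, write)
                    // and concurrent threads would lose updates.
                    synchronized (lock) { safeCount++; }
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        return safeCount;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(incrementSafely(4, 10_000)); // always 40000
    }
}
```

Removing the `synchronized` block typically yields a total below 40000, which is exactly the lost-update problem described above.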

②Visibility

  • In a multi-threaded environment, after one thread updates a shared variable, threads that subsequently access the variable may not read the updated value immediately, or may never read it. This is another manifestation of the thread safety problem: visibility;
  • Visibility refers to whether an update to a shared variable by one thread is visible to other threads that read the variable;
  • The visibility problem is related to the computer's storage system. Variables in a program may be allocated to registers instead of main memory. Each processor has its own registers, and one processor cannot read the contents of another processor's registers.
  • Even if the shared variables are allocated to the main memory for storage, visibility cannot be guaranteed because the processor does not access the main memory directly, but through the cache;
  • An update to a variable by a thread running on one processor may only have reached the processor's write buffer (store buffer), not yet the cache, let alone main memory;
  • Visibility describes whether an update of a shared variable by one thread is visible to another thread. Ensuring visibility means that a thread can read the new value of the corresponding shared variable;
  • From the perspective of ensuring thread safety, it is not enough to ensure atomicity, but visibility must also be guaranteed. Ensuring both visibility and atomicity can ensure that a thread can correctly see updates made to shared variables by other threads;

③ Orderliness

  • Orderliness concerns whether the memory access operations performed by a thread on one processor appear, to a thread running on another processor, in the order the program specified; reordering can make them appear out of order;
  • The sequence structure is a basic structure in structured programming, which means that we want one operation to be executed before another operation;
  • However, in a multi-core processor environment, the execution order of the code is not guaranteed. The compiler may change the order of two operations, and the processor may not execute instructions in the order of the program code;
  • Reordering by processors and compilers is a code optimization: it improves performance without affecting the correctness of single-threaded programs, but it can break the correctness of multi-threaded programs and cause thread safety issues.
  • In order to improve the execution efficiency of instructions, modern processors often do not execute instructions in the order of the program, but execute whichever instruction is ready first. This is the out-of-order execution of the processor;

4. Implementing thread safety

To achieve thread safety, we must ensure the atomicity, visibility and order mentioned above;

The common way to achieve thread safety is to use locks and atomic types, and locks can be divided into four types: internal locks, explicit locks, read-write locks, and lightweight locks (volatile);

Let's take a look at the usage and characteristics of these four lock and atomic types;

1. Lock

The role of lock is to allow multiple threads to collaborate better and avoid data anomalies caused by the interleaving of operations of multiple threads.

Five characteristics of the lock:

  • Critical Section: The code executed by the thread holding the lock after acquiring the lock and before releasing the lock is called the critical section;
  • Exclusivity: Locks are exclusive, which can ensure that a shared variable can only be accessed by one thread at any time. This ensures that the critical section code can only be executed by one thread at a time. The operations in the critical section are indivisible, which ensures atomicity.
  • Serial: Locking is equivalent to changing the operation of multiple threads on shared variables from concurrent to serial;
  • Three guarantees: Locks can protect shared variables to achieve thread safety. Its functions include ensuring atomicity, visibility, and order;
  • Scheduling strategy: lock scheduling is either fair or unfair, and the corresponding locks are called fair locks and unfair locks. A fair lock checks whether threads are already queued before granting the lock and, if so, gives priority to those ahead in the queue. Fair locks guarantee fairness of lock scheduling at the cost of more context switching and a higher chance of thread suspension and wakeup.
  • The overhead of fair locks is greater than that of unfair locks, so the default scheduling strategy of ReentrantLock is the unfair strategy;
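The fairness choice is made at construction time; a minimal sketch (the class name `FairLockDemo` is ours):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static boolean[] fairness() {
        ReentrantLock unfair = new ReentrantLock();   // default: unfair, lower overhead
        ReentrantLock fair = new ReentrantLock(true); // fair: FIFO handoff, more context switches
        return new boolean[] { unfair.isFair(), fair.isFair() };
    }

    public static void main(String[] args) {
        boolean[] f = fairness();
        System.out.println(f[0]); // false
        System.out.println(f[1]); // true
    }
}
```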

2. Volatile Keyword

The volatile keyword can be used to modify shared variables. The corresponding variables are called volatile variables. Volatile variables have the following characteristics:

  • Volatile literally means "unstable", that is, volatile is used to modify variables that are prone to change. Unstable means that the read and write operations of such variables must be read from the cache or main memory, and will not be allocated to registers;
  • Lower overhead than a lock: volatile costs less than a lock, and reads and writes of volatile variables do not cause context switches, which is why the volatile keyword is also called a lightweight lock;
  • Higher overhead than ordinary variables: reading a volatile variable costs more than reading an ordinary variable, because its value must be read from the cache or main memory each time and cannot be kept in a register;
  • Release/Storage Barrier: For write operations on volatile variables, the JVM inserts a release barrier before the operation and a storage barrier after the operation. The storage barrier has the function of flushing the processor cache, so inserting a storage barrier after a volatile variable write operation can make all operation results before the storage barrier synchronized for other processors.
  • Load/acquire barrier: For read operations on volatile variables, the JVM inserts a load barrier before the operation and an acquire barrier after the operation; the load barrier flushes the processor cache, allowing the processor where the thread is located to synchronize updates made by other processors to the shared variable to the processor's cache;
  • Ensure orderliness: Volatility can prohibit instruction reordering, that is, using volatile can ensure the orderliness of operations;
  • Ensure visibility: The load barrier executed by the read thread and the store barrier executed by the write thread work together to make the write operation of the write thread on the volatile variable visible to the read thread, thus ensuring visibility;
  • Atomicity: volatile guarantees that reads and writes of long/double variables are atomic (for other variable types, individual reads and writes are already atomic). It does not make compound operations atomic: if the value written to a volatile variable is computed from a shared variable, a race condition is still possible, because another thread may update that shared variable between the read and the volatile write;
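A common visibility use of volatile is a stop flag; a minimal sketch (the class name `VolatileDemo` is ours):

```java
public class VolatileDemo {
    // volatile guarantees the writer's update is visible to the reader,
    // but it does NOT make compound operations like count++ atomic.
    private static volatile boolean stopRequested = false;

    public static boolean runAndStop() throws InterruptedException {
        stopRequested = false;
        Thread worker = new Thread(() -> {
            while (!stopRequested) {
                // Busy loop. Without volatile, the JIT could hoist the read
                // out of the loop and the worker might never see the update.
            }
        });
        worker.start();
        stopRequested = true; // the write becomes visible to the worker
        worker.join(1000);    // should terminate almost immediately
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndStop()); // true
    }
}
```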

3. Atomic types

The java.util.concurrent.atomic package (part of JUC) contains a set of atomic classes. Using their methods, thread safety can be guaranteed without locking; internally they achieve this at the hardware level through CAS instructions exposed by the Unsafe class.

This package contains AtomicInteger, AtomicBoolean, AtomicReference, AtomicReferenceFieldUpdater, etc.

Let's first look at an example of using the atomic integer AtomicInteger to increment;

    // Initial value is 1
    AtomicInteger integer = new AtomicInteger(1);
    // Atomically increment
    int result = integer.incrementAndGet();
    // The result is 2
    System.out.println(result);

AtomicReference and AtomicReferenceFieldUpdater let us perform atomic operations on fields of our own classes; both are implemented through Unsafe CAS operations;

Let's look at their usage and differences below;

① Basic usage of AtomicReference

    class AtomicReferenceValueHolder {
        AtomicReference<String> atomicValue = new AtomicReference<>("HelloAtomic");
    }

    public void getAndUpdateFromReference() {
        AtomicReferenceValueHolder holder = new AtomicReferenceValueHolder();
        // Compare and set: if the value is "HelloAtomic", change it to "World"
        holder.atomicValue.compareAndSet("HelloAtomic", "World");
        // World
        System.out.println(holder.atomicValue.get());
        // Update and get the modified value
        String value = holder.atomicValue.updateAndGet(new UnaryOperator<String>() {
            @Override
            public String apply(String s) {
                return "HelloWorld";
            }
        });
        // HelloWorld
        System.out.println(value);
    }

② Basic usage of AtomicReferenceFieldUpdater

AtomicReferenceFieldUpdater differs slightly from AtomicReference in usage: we declare the String field directly and mark it volatile, then pass the holder class and the field's class to newUpdater() to obtain the updater. The usage is a bit like reflection, and AtomicReferenceFieldUpdater is usually kept as a static member of the class;

    public class SimpleValueHolder {
        public static AtomicReferenceFieldUpdater<SimpleValueHolder, String> valueUpdater =
                AtomicReferenceFieldUpdater.newUpdater(
                        SimpleValueHolder.class, String.class, "value");
        volatile String value = "HelloAtomic";
    }

    public void getAndUpdateFromUpdater() {
        SimpleValueHolder holder = new SimpleValueHolder();
        holder.valueUpdater.compareAndSet(holder, "HelloAtomic", "World");
        // World
        System.out.println(holder.valueUpdater.get(holder));
        String value = holder.valueUpdater.updateAndGet(holder, new UnaryOperator<String>() {
            @Override
            public String apply(String s) {
                return "HelloWorld";
            }
        });
        // HelloWorld
        System.out.println(value);
    }

③Difference between AtomicReference and AtomicReferenceFieldUpdater

AtomicReference and AtomicReferenceFieldUpdater offer similar functionality, and AtomicReference is simpler to use than AtomicReferenceFieldUpdater;

However, internally AtomicReference also holds a volatile variable;

So compared with AtomicReferenceFieldUpdater, using AtomicReference requires creating one extra object per atomic field;

For a 32-bit machine, the header of this object occupies 12 bytes, and its members occupy 4 bytes, which is 16 bytes more;

For 64-bit machines, if pointer compression is enabled, this object also occupies 16 bytes;

For 64-bit machines, if pointer compression is not enabled, the object occupies 24 bytes, of which the object header occupies 16 bytes and the member occupies 8 bytes;

This overhead can become significant when creating thousands of objects using AtomicReference;

This is why BufferedInputStream, Kotlin coroutines, and Kotlin's lazy implementation choose AtomicReferenceFieldUpdater as the atomic type;

Because of this overhead, AtomicReference is generally used only when few instances of the atomic type are created (such as in a singleton); otherwise AtomicReferenceFieldUpdater is preferred.

4. Lock usage tips

Using locks brings a certain overhead, and mastering a few techniques can reduce the cost and the potential problems locks cause. Here are some tips for using locks:

  • Long locks are not as good as short locks: lock only the code that needs it;
  • A big lock is not as good as a small lock: split the locked object where possible;
  • A public lock is not as good as a private lock: keep locking logic in private code; letting external callers take the lock can lead to misuse and deadlock;
  • Nested locks are not as good as flat locks: avoid nesting locks when writing code;
  • Separate read and write locks: split read locks from write locks where possible;
  • Coarsen high-frequency locks: merge frequent, short lock sections, since every acquisition has a cost;
  • Eliminate useless locks: avoid locking where possible, or use volatile instead;
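The "long locks are not as good as short locks" tip can be sketched as follows (the class `LockScopeDemo` and its methods are ours): do the slow, thread-local work outside the critical section and lock only the shared-state update.

```java
public class LockScopeDemo {
    private final Object lock = new Object();
    private int shared = 0;

    public int update(int input) {
        int local = expensiveComputation(input); // no lock needed: thread-local work
        synchronized (lock) {                    // short critical section
            shared += local;
            return shared;
        }
    }

    private int expensiveComputation(int input) {
        return input * input; // stand-in for real work
    }

    public static void main(String[] args) {
        LockScopeDemo demo = new LockScopeDemo();
        System.out.println(demo.update(3)); // 9
    }
}
```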

5. Four thread activity issues

1. Deadlock

Deadlock is a common multithreading activity problem. If two or more threads are suspended forever because they are waiting for each other, this is called deadlock.

Next we will look at the four conditions that cause deadlock and three ways to avoid deadlock;

2. Four conditions for deadlock

When multiple threads deadlock, these threads and related shared variables will meet the following four conditions:

  • Resource mutual exclusion: the resources involved must be exclusive, usable by only one thread at a time;
  • Resources cannot be preempted: a resource can only be released voluntarily by the thread holding it; other threads cannot take it away;
  • Occupying and waiting for resources: each thread involved holds at least one resource while applying for resources held by other threads, without releasing what it already holds;
  • Circular waiting for resources: each thread involved waits for a resource held by another thread in the group, and those threads in turn wait on it, forming a cycle;

Whenever a deadlock occurs, all four conditions hold; but even when all four conditions are met, a deadlock does not necessarily occur;

3. Three ways to avoid deadlock

To eliminate the deadlock, just destroy one of the above conditions;

Since the lock is exclusive and cannot be released passively, we can only destroy the third and fourth conditions;

① Coarse locking

  • Coarse locking means using one coarse-grained lock instead of several locks. The lock's scope grows, and threads accessing the shared resources need to apply for only that one lock; since each thread needs a single lock to perform its task, the conditions "occupying and waiting for resources" and "circular waiting for resources" no longer hold.
  • The disadvantage of coarse locking is reduced concurrency and possible waste of resources: only one thread can access the resources at a time, so other threads must put their tasks on hold;

② Lock sorting method

Lock sorting refers to the related threads applying for locks in a globally unified order;

If multiple threads need to apply for locks, we only need to let these threads apply for locks in a globally unified order, so that the condition of "circular waiting for resources" can be destroyed;
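A minimal sketch of lock sorting (the class `LockOrderingDemo` is ours; ordering by identity hash is one common convention, and real code needs a tie-breaker lock for the rare case of equal hashes):

```java
public class LockOrderingDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    // Every caller acquires the two locks in the same global order,
    // so a circular wait cannot form, regardless of argument order.
    private static void withBoth(Object first, Object second, Runnable action) {
        Object lo = System.identityHashCode(first) <= System.identityHashCode(second)
                ? first : second;
        Object hi = (lo == first) ? second : first;
        synchronized (lo) {
            synchronized (hi) {
                action.run();
            }
        }
    }

    public static int runBothDirections() throws InterruptedException {
        final int[] count = {0};
        Thread t1 = new Thread(() -> withBoth(lockA, lockB, () -> count[0]++));
        Thread t2 = new Thread(() -> withBoth(lockB, lockA, () -> count[0]++));
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count[0]; // 2: both threads completed, no deadlock
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBothDirections()); // 2
    }
}
```

If `withBoth` instead locked `first` then `second` directly, the two threads above could deadlock by each holding one lock and waiting for the other.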

③tryLock

The explicit lock method ReentrantLock.tryLock(long timeout, TimeUnit unit) lets us set a timeout when applying for a lock, which breaks the "occupying and waiting for resources" condition;
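A minimal sketch of the timeout pattern (the class `TryLockDemo` is ours):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static boolean acquireWithTimeout(ReentrantLock lock) throws InterruptedException {
        // tryLock(timeout, unit) gives up instead of blocking forever,
        // breaking the "occupying and waiting for resources" condition.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                return true; // got the lock within the timeout
            } finally {
                lock.unlock();
            }
        }
        return false; // timed out: release own resources, back off, retry later
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(acquireWithTimeout(new ReentrantLock())); // true
    }
}
```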

④Open call

An open call is a method that does not hold a lock when calling an external method. An open call can destroy the condition of "occupying and waiting for resources";

6. How do threads collaborate?

There are two common ways of collaboration between threads: waiting and interruption;

When an operation in one thread needs to wait for the operation in another thread to complete, it involves waiting thread cooperation;

There are five commonly used waiting thread collaboration methods: join, wait/notify, await/signal, await/countDown and CyclicBarrier. Let's take a look at the usage and differences of these five thread collaboration methods;

1. join

  • Using the Thread.join() method, we can make a thread wait for another thread to finish executing before continuing;
  • join() implements waiting via wait(): inside join(), the calling thread repeatedly checks whether the target thread (the one whose join() was called) is still alive, and keeps waiting while it is;

Below is a simple usage of join() method;

    public void tryJoin() {
        Thread threadA = new ThreadA();
        Thread threadB = new ThreadB(threadA);
        threadA.start();
        threadB.start();
    }

    public class ThreadA extends Thread {
        @Override
        public void run() {
            System.out.println("Thread A starts executing");
            ThreadUtils.sleep(1000);
            System.out.println("Thread A execution ended");
        }
    }

    public class ThreadB extends Thread {
        private final Thread threadA;

        public ThreadB(Thread thread) {
            threadA = thread;
        }

        @Override
        public void run() {
            try {
                System.out.println("Thread B starts waiting for thread A to finish execution");
                threadA.join();
                System.out.println("Thread B ends waiting and starts doing what it wants to do");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

2. wait/notify

  • The process in which a thread is suspended because the protection conditions required to perform an operation (target action) are not met is called waiting;
  • A thread updates a shared variable, making the protection conditions required by other threads meet, and the process of waking up the suspended thread is called notification;
  • The thread executing the wait() method is called the waiting thread, and the thread executing the notify() method is called the notifying thread;

Below is the sample code for wait/notify usage;

```java
private final Object lock = new Object();
private volatile boolean conditionSatisfied;

public void startWait() throws InterruptedException {
    synchronized (lock) {
        System.out.println("The waiting thread has acquired the lock");
        while (!conditionSatisfied) {
            System.out.println("The protection condition is not met, the waiting thread enters the waiting state");
            lock.wait();
        }
        System.out.println("The waiting thread is awakened and starts executing the target action");
    }
}

public void startNotify() {
    synchronized (lock) {
        System.out.println("The notifying thread has acquired the lock");
        System.out.println("The notifying thread is about to wake up the waiting thread");
        conditionSatisfied = true;
        lock.notify();
    }
}
```

3. Wait/notify principle

  • JVM maintains an entry set (Entry Set) and a wait set (Wait Set) for each object;
  • The entry set is used to store threads that apply for the internal lock of the object, and the wait set is used to store waiting threads on the object;
  • The wait() method will pause the current thread, and when the internal lock is released, the current thread will be stored in the waiting set of the object to which the method belongs;
  • Calling notify() on an object wakes up an arbitrary thread in that object's wait set. The awakened thread remains in the wait set until it re-acquires the corresponding internal lock; only then is it removed from the wait set and does wait() return;
  • Adding the current thread to the wait set, suspending the current thread, releasing the lock, and removing the awakened thread from the object's wait set are all implemented inside the wait() method;
  • The native code of wait() checks whether the current thread holds the internal lock of the current object. If it does not, an IllegalMonitorStateException is thrown. This is why wait() must be called inside a synchronized block;
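The last point can be verified directly: calling wait() on an object whose monitor the current thread does not hold throws IllegalMonitorStateException. A small check (the class and method names are ours):

```java
public class MonitorDemo {
    // Returns true if calling wait() without holding the object's monitor
    // throws IllegalMonitorStateException, as described above.
    public static boolean waitWithoutLockFails() {
        Object obj = new Object();
        try {
            obj.wait(1); // not inside synchronized (obj): illegal
            return false;
        } catch (IllegalMonitorStateException expected) {
            return true;
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(waitWithoutLockFails()); // true
    }
}
```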

4. notify()/notifyAll()

notify() may cause signal loss (a notification that no suitable waiting thread receives), while notifyAll() may wake up waiting threads that do not need to be woken up, but it is guaranteed to be correct;

Therefore, in general, notifyAll() is preferred to ensure correctness;

Generally speaking, notify() is used to implement notification only when both of the following conditions are met:

①Just wake up one thread

When a notification only needs to wake up at most one thread, we can consider using notify() to implement the notification, but satisfying this condition alone is not enough;

When different waiting threads use different protection conditions, an arbitrary thread awakened by notify() may not be the thread we need to wake up, so condition 2 is needed to exclude it;

②The waiting set of the object contains only homogeneous waiting threads

Homogeneous waiting threads refer to threads that use the same protection condition and have consistent logic after the wait() call returns;

The most typical homogeneous threads are different threads created using the same Runnable, or multiple instances of the same Thread subclass;
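As a sketch of why notifyAll() is safe with homogeneous waiters: if several threads wait on the same lock for the same protection condition, one notifyAll() after updating the shared variable releases all of them. The class and method names below are ours, not the article's:

```java
public class NotifyAllDemo {
    private final Object lock = new Object();
    private boolean ready = false; // read and written only under `lock`

    // Starts `waiters` homogeneous waiting threads, wakes them all with one
    // notifyAll(), and returns how many of them terminated afterwards.
    public int wakeAll(int waiters) {
        Thread[] threads = new Thread[waiters];
        for (int i = 0; i < waiters; i++) {
            threads[i] = new Thread(() -> {
                synchronized (lock) {
                    while (!ready) { // guard against premature wakeup
                        try { lock.wait(); } catch (InterruptedException e) { return; }
                    }
                }
            });
            threads[i].start();
        }
        synchronized (lock) {
            ready = true;     // update the shared variable under the lock...
            lock.notifyAll(); // ...then wake every thread in the wait set
        }
        int finished = 0;
        for (Thread t : threads) {
            try { t.join(5000); } catch (InterruptedException ignored) {}
            if (!t.isAlive()) finished++;
        }
        return finished;
    }

    public static void main(String[] args) {
        System.out.println(new NotifyAllDemo().wakeAll(3));
    }
}
```

A single notify() here would wake only one of the three waiters; with homogeneous waiters that can still be correct, but only because each woken thread re-checks the condition in its while loop.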

5. await/signal

wait()/notify() is quite low-level and has two problems: one is premature wakeup (a thread may be woken up before its protection condition actually holds), and the other is that the return of Object.wait(ms) cannot tell us whether the wait timed out or the thread was woken by the notifying thread. The await/signal methods of the Condition interface address both;

await/signal basic usage

```java
private final Lock lock = new ReentrantLock();
private final Condition condition = lock.newCondition();
private volatile boolean conditionSatisfied = false;

private void startWait() {
    lock.lock();
    System.out.println("The waiting thread has acquired the lock");
    try {
        while (!conditionSatisfied) {
            System.out.println("The protection condition is not met, the waiting thread enters the waiting state");
            condition.await();
        }
        System.out.println("The waiting thread is awakened and starts executing the target action");
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        lock.unlock();
        System.out.println("The waiting thread released the lock");
    }
}

public void startNotify() {
    lock.lock();
    System.out.println("The notifying thread has acquired the lock");
    try {
        conditionSatisfied = true;
        System.out.println("The notifying thread is about to wake up the waiting thread");
        condition.signal();
    } finally {
        System.out.println("The notifying thread releases the lock");
        lock.unlock();
    }
}
```
When we execute the above two functions in two threads respectively, we get the following output:

  • The waiting thread has acquired the lock
  • The protection condition is not met, the waiting thread enters the waiting state
  • The notifying thread has acquired the lock
  • The notifying thread is about to wake up the waiting thread
  • The waiting thread is awakened and starts executing the target action

6. awaitUntil() usage

Condition also provides the awaitUntil(Date deadline) method;

If the waiting ends because the deadline passes, awaitUntil() returns false; otherwise it returns true, indicating the waiting thread was woken by a signal. Let's see how this method is used;

```java
private void startTimedWait() throws InterruptedException {
    lock.lock();
    System.out.println("The waiting thread has acquired the lock");
    // Deadline: 3 seconds from now
    Date deadline = new Date(System.currentTimeMillis() + 3 * 1000);
    boolean isWakenUp = true;
    try {
        while (!conditionSatisfied) {
            if (!isWakenUp) {
                System.out.println("Timed out, the waiting task ended");
                return;
            } else {
                System.out.println("The protection condition is not met and the deadline has not passed, the thread enters the waiting state");
                isWakenUp = condition.awaitUntil(deadline);
            }
        }
        System.out.println("The waiting thread is awakened and starts executing the target action");
    } finally {
        lock.unlock();
    }
}

public void startDelayedNotify() {
    threadSleep(4 * 1000); // helper that sleeps the current thread; the notification arrives after the deadline
    startNotify();
}
```
Running startTimedWait() and startDelayedNotify() in two threads produces the following output:

  • The waiting thread has acquired the lock
  • The protection condition is not met and the deadline has not passed, the thread enters the waiting state
  • Timed out, the waiting task ended
  • The notifying thread has acquired the lock
  • The notifying thread is about to wake up the waiting thread
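One consequence of the deadline-based API worth knowing: if the Date passed to awaitUntil() is already in the past, the method returns false immediately instead of blocking. A quick check (the class and method names are ours):

```java
import java.util.Date;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class DeadlineDemo {
    // Returns the result of awaitUntil() for a deadline that has already passed.
    public static boolean awaitPastDeadline() {
        Lock lock = new ReentrantLock();
        Condition cond = lock.newCondition();
        lock.lock();
        try {
            // Deadline one second in the past: no blocking, returns false
            return cond.awaitUntil(new Date(System.currentTimeMillis() - 1000));
        } catch (InterruptedException e) {
            return true;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(awaitPastDeadline()); // false: the deadline elapsed
    }
}
```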

7. await/countDown

join() makes a thread wait for another thread to finish executing, but sometimes we only need to wait for a specific operation to finish, not for the entire thread to end. In that case we can use CountDownLatch;

await/countDown Basic usage

```java
final int prerequisiteOperationCount = 2;
final CountDownLatch latch = new CountDownLatch(prerequisiteOperationCount);

// startWaitThread()/startCountDownThread() are assumed to run
// startWait()/startCountDown() on new threads
public void tryAwaitCountDown() {
    startWaitThread();
    startCountDownThread();
    startCountDownThread();
}

private void startWait() throws InterruptedException {
    System.out.println("The waiting thread enters the waiting state");
    latch.await();
    System.out.println("The waiting thread ends waiting");
}

private void startCountDown() {
    try {
        System.out.println("Perform the prerequisite operation");
    } finally {
        System.out.println("Count value minus 1");
        latch.countDown();
    }
}
```
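The key point, that await() returns as soon as the counted operations complete even though the worker threads are still running, can be sketched as follows (the class and method names are ours):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    // Runs `count` threads that each complete a prerequisite operation
    // (countDown) long before the thread itself ends, then awaits the latch.
    // Returns the latch count after await() returns.
    public static long runPrerequisites(int count) {
        CountDownLatch latch = new CountDownLatch(count);
        for (int i = 0; i < count; i++) {
            new Thread(() -> {
                latch.countDown(); // the prerequisite operation is done...
                // ...but the thread keeps running; await() does not wait for it
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            }).start();
        }
        try {
            latch.await(); // returns as soon as the count hits zero
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return latch.getCount();
    }

    public static void main(String[] args) {
        System.out.println(runPrerequisites(2)); // 0
    }
}
```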

8. CyclicBarrier

Sometimes multiple threads need to wait for one another at some point in the code (a rendezvous point) before any of them can continue executing. In this case, CyclicBarrier can be used;

Basic usage of CyclicBarrier

```java
final int parties = 3;
final Runnable barrierAction = new Runnable() {
    @Override
    public void run() {
        System.out.println("Everyone is here, start climbing the mountain");
    }
};
final CyclicBarrier barrier = new CyclicBarrier(parties, barrierAction);

public void tryCyclicBarrier() {
    firstDayClimb();
    secondDayClimb();
}

private void firstDayClimb() {
    new PartyThread("On the first day of climbing, Lao Li arrives first").start();
    new PartyThread("Lao Wang is here, Xiao Zhang is not here yet").start();
    new PartyThread("Xiao Zhang is here").start();
}

private void secondDayClimb() {
    new PartyThread("On the second day of climbing, Lao Wang arrives first").start();
    new PartyThread("Xiao Zhang is here, Lao Li is not here yet").start();
    new PartyThread("Lao Li is here").start();
}

public class PartyThread extends Thread {
    private final String content;

    public PartyThread(String content) {
        this.content = content;
    }

    @Override
    public void run() {
        System.out.println(content);
        try {
            barrier.await();
        } catch (BrokenBarrierException | InterruptedException e) {
            e.printStackTrace();
        }
    }
}
```
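The "cyclic" in CyclicBarrier means the barrier resets itself after tripping, so the same instance serves both days in the example above. This reuse can be demonstrated deterministically (the class and method names are ours):

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {
    // Uses one barrier for several consecutive "trips": the barrier action
    // runs once per trip, showing that CyclicBarrier is reusable.
    public static int climb(int parties, int trips) {
        AtomicInteger actions = new AtomicInteger();
        CyclicBarrier barrier = new CyclicBarrier(parties, actions::incrementAndGet);
        for (int trip = 0; trip < trips; trip++) {
            Thread[] group = new Thread[parties];
            for (int i = 0; i < parties; i++) {
                group[i] = new Thread(() -> {
                    try {
                        barrier.await(); // block until all parties arrive
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                });
                group[i].start();
            }
            for (Thread t : group) {
                try { t.join(); } catch (InterruptedException ignored) {}
            }
        }
        return actions.get();
    }

    public static void main(String[] args) {
        System.out.println(climb(3, 2)); // 2: the barrier action ran once per trip
    }
}
```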

In Android, there are 7 commonly used asynchronous mechanisms: Thread, HandlerThread, IntentService, AsyncTask, thread pools, RxJava, and Kotlin coroutines;

Summarize:

1. Threading has many advantages:

  • Improve the utilization efficiency of multiprocessors;
  • Simplify business function design;
  • Implement asynchronous processing;

2. Risks of multi-threading:

  • Thread safety of shared data;
  • Liveness issues (deadlock, livelock, starvation) during multi-threaded execution;
  • Performance loss problems caused by multithreading;

3. Next time, we will explain the commonly used asynchronous mechanisms in Android in detail, starting from practical scenarios.
