Android performance optimization - a detailed look at memory management and OOM crashes


Preface

Images are indispensable in app development. If icon and image resources are not handled properly, app performance degrades noticeably and the user experience suffers: the most obvious symptoms are lag, a phone that heats up, and sometimes an OOM crash.

So today we will analyze OOM and summarize memory optimization techniques.

1. What is OOM?

  • OOM is short for "Out Of Memory", i.e. the available memory has been exhausted; it corresponds to java.lang.OutOfMemoryError;
  • The error is thrown when the JVM cannot allocate enough memory for an object and the garbage collector has nothing left to reclaim (note: it is an Error, not an Exception, because the problem is too serious for the application to handle);
  • This typically happens in apps that load many images or very large images. In plain terms, when the app asks for another block of memory to hold an image, the system decides the app has already used up its quota; even if the device still has 1 GB of free RAM, it will not grant the app any more. The system then throws an OutOfMemoryError, and if the program does not catch it, the app crashes;

2. Types of OOM

1. JVM memory model:


According to the JVM specification, the JAVA virtual machine manages the following memory areas at runtime:

  • Program counter: the line number indicator of the bytecode executed by the current thread, thread-private;
  • JAVA virtual machine stack: the memory model of Java method execution. The execution of each Java method corresponds to the push and pop operations of a stack frame.
  • Native method stack: similar to the "JAVA virtual machine stack", but provides a memory environment for the operation of native methods;
  • JAVA heap: where object memory is allocated, the main area for memory garbage collection, shared by all threads. It can be divided into new generation and old generation;
  • Method area: used to store class information, constants, static variables, code compiled by the just-in-time compiler, and other data that has been loaded by the JVM. "Permanent generation" in Hotspot;
  • Runtime constant pool: part of the method area, storing constant information, such as various literals, symbolic references, etc.;
  • Direct memory: It is not part of the JVM runtime data area, but is directly accessible memory. For example, NIO uses this part.

According to the JVM specification, every area except the program counter may throw an OOM error.

2. The most common OOM situations are the following three:

  • java.lang.OutOfMemoryError: Java heap space ------>Java heap memory overflow, this is the most common situation, usually caused by memory leak or improper heap size setting. For memory leak, you need to use memory monitoring software to find the leaking code in the program, and the heap size can be modified by virtual machine parameters such as -Xms, -Xmx;
  • java.lang.OutOfMemoryError: PermGen space ------> Java permanent generation overflow, that is, the method area overflows, which usually occurs when there are a large number of Class or jsp pages, or when using reflection mechanisms such as cglib, because the above situations will generate a large amount of Class information stored in the method area. This situation can be solved by changing the size of the method area, using a form similar to -XX:PermSize=64m -XX:MaxPermSize=256m. In addition, too many constants, especially strings, can also cause method area overflow;
  • java.lang.StackOverflowError ------> Strictly speaking this is not an OOM error, but it is another common kind of Java memory overflow. JVM stack overflow is usually caused by an infinite loop or a very deep recursive call; it can also occur if the stack size is set too small. The stack size can be adjusted with the virtual machine parameter -Xss;

3. Why does OOM occur?

Each app process on Android (each virtual machine instance) has a maximum heap limit. If the memory requested exceeds this limit, the system throws an OOM error;

This has little to do with how much memory the whole device has left. For example, on early Android versions the per-app heap limit could be as small as 16 MB; if an app keeps requesting memory to load images after startup and exceeds that limit, OOM occurs.


Why is there no memory? There are two reasons:

1. Too little allocation: For example, the available memory of the virtual machine itself (usually specified by VM parameters at startup) is too small;

2. The application uses too much memory, or does not release memory after use; the wasted memory shows up as memory leaks or memory overflow;

  • Memory leak: memory that has been allocated is never released, so the virtual machine cannot reuse it. The memory is "leaked": the holder no longer uses it, yet the VM cannot hand it out to anyone else;
  • Memory overflow: The memory requested exceeds the memory size that the JVM can provide, which is called overflow;

In the days before automatic garbage collection, such as in C and C++, we had to be responsible for the memory allocation and release operations ourselves. If we allocated memory and forgot to release it after use, such as new but no delete in C++, then memory leaks may occur. Occasional memory leaks may not cause problems, but a large number of memory leaks may cause memory overflows;

In Java, thanks to automatic garbage collection, we normally do not need to release unused objects ourselves, so in theory there should be no "memory leaks". In practice, careless coding can still create them: for example, if a reference to an object is put into a global Map, the object remains reachable after the method returns, so the garbage collector cannot reclaim it. If this happens often enough (caching mechanisms are a typical culprit), it eventually leads to memory overflow. Java memory leaks therefore differ from a forgotten delete in C++: they are usually caused by logic that keeps references alive longer than intended.

4. How to avoid OOM and optimize memory

1. Reduce the memory usage of objects

The first step to avoid OOM is to minimize the size of memory occupied by newly allocated objects and try to use lighter objects.

1) Use a lighter data structure

We can consider using ArrayMap/SparseArray instead of traditional data structures such as HashMap.

Compared with ArrayMap, a container written specifically for Android, HashMap is in most cases less efficient and takes up more memory on mobile.

A typical HashMap implementation consumes more memory because it allocates an extra entry object for every mapping.

In addition, SparseArray is more efficient because its keys are primitive ints, which avoids the autoboxing of keys (and the unboxing that follows).
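
As an illustration, here is a minimal sketch of using SparseArray and ArrayMap in place of a HashMap keyed by int (the userNames/headers fields are hypothetical):

```java
import android.util.ArrayMap;
import android.util.SparseArray;

public class LightweightMaps {
    // SparseArray<V> maps primitive int keys to objects without boxing the key
    // and without allocating a separate entry object per mapping (unlike HashMap).
    private final SparseArray<String> userNames = new SparseArray<>();

    public void cacheName(int userId, String name) {
        userNames.put(userId, name);   // key stays a primitive int
    }

    public String lookupName(int userId) {
        return userNames.get(userId);  // returns null if absent
    }

    // ArrayMap<K, V> (API 19+) is a Map replacement for small collections
    // (tens to low hundreds of entries) with a smaller memory footprint.
    private final ArrayMap<String, String> headers = new ArrayMap<>();
}
```

Note that SparseArray looks up keys with a binary search, so for very large collections a HashMap may still be the faster choice; the memory win applies mainly to small and medium-sized maps.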

2) Avoid using Enum in Android

An enum typically requires more than twice as much memory as the equivalent static constants, so avoid using enums in Android code.
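
A common lightweight alternative is plain static int constants, optionally annotated with @IntDef (from the androidx.annotation library) to keep compile-time checking similar to an enum. The DownloadState class below is a hypothetical sketch:

```java
import androidx.annotation.IntDef;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class DownloadState {
    public static final int IDLE = 0;
    public static final int RUNNING = 1;
    public static final int DONE = 2;

    // @IntDef gives compile-time checking similar to an enum,
    // while the values stay plain static ints at runtime.
    @Retention(RetentionPolicy.SOURCE)
    @IntDef({IDLE, RUNNING, DONE})
    public @interface State {}

    private @State int current = IDLE;

    public void setState(@State int state) {
        current = state;
    }
}
```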

3) Reduce the memory usage of Bitmap objects

Bitmaps are heavyweight objects that consume memory very quickly, so reducing the memory taken up by every Bitmap you create is a top priority. Generally speaking, there are two measures:

inSampleSize: the sampling (scaling) ratio. Before loading an image into memory, calculate an appropriate inSampleSize so you never decode a larger bitmap than you actually need.

decode format: the decoding format. ARGB_8888, RGB_565, ARGB_4444 and ALPHA_8 differ greatly in bytes per pixel, and therefore in memory usage.
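
The two measures can be combined as in the following sketch, modeled on the standard two-pass decode pattern: decode only the bounds first, compute inSampleSize, then decode the pixels, using RGB_565 when no alpha channel is needed:

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class BitmapLoader {

    // Decode bounds only, compute a power-of-two inSampleSize, then decode the
    // real pixels at the reduced size in RGB_565 (2 bytes per pixel).
    public static Bitmap decodeSampled(Resources res, int resId,
                                       int reqWidth, int reqHeight) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;              // read dimensions only
        BitmapFactory.decodeResource(res, resId, options);

        options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);
        options.inJustDecodeBounds = false;
        options.inPreferredConfig = Bitmap.Config.RGB_565;
        return BitmapFactory.decodeResource(res, resId, options);
    }

    static int calculateInSampleSize(BitmapFactory.Options options,
                                     int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        int halfHeight = options.outHeight / 2;
        int halfWidth = options.outWidth / 2;
        while ((halfHeight / inSampleSize) >= reqHeight
                && (halfWidth / inSampleSize) >= reqWidth) {
            inSampleSize *= 2;                          // keep it a power of two
        }
        return inSampleSize;
    }
}
```

RGB_565 uses 2 bytes per pixel instead of 4 for ARGB_8888, so for opaque images it roughly halves the bitmap's memory footprint.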

4) Use smaller images

When it comes to resource images, we need to pay special attention to whether there is space for compression of this image and whether a smaller image can be used. Trying to use smaller images can not only reduce memory usage, but also avoid a large number of InflationExceptions. If a large image is directly referenced by an XML file, it is very likely that an InflationException will occur when the view is initialized due to insufficient memory. The root cause of this problem is actually OOM.

2. Reuse of memory objects

  • Most object-reuse solutions ultimately come down to object-pool techniques: either explicitly create an object pool in your own code and manage the reuse logic yourself, or rely on reuse facilities that the framework already provides. Either way, the goal is to cut down on repeated object creation and thereby reduce allocation and collection work;
  • Reuse system-provided resources: The Android system itself has many built-in resources, such as strings/colors/images/animations/styles and simple layouts, etc. These resources can be directly referenced in the application. Doing so can not only reduce the application's own load and reduce the size of the APK, but also reduce memory overhead to a certain extent, and improve reusability. However, it is also necessary to pay attention to the differences in Android system versions. For those situations where the performance on different system versions is very different and does not meet the requirements, the application still needs to build them in;
  • Pay attention to reusing the convertView in views such as ListView/GridView that contain many repeated child items (see the sketch after this list);
  • Reuse of Bitmap objects;
  • Avoid creating objects in onDraw and other frequently called methods; allocations there quickly drive up memory usage and easily cause frequent GC or even memory churn;
  • StringBuilder: Sometimes, a large number of string concatenation operations are required in the code. In this case, it is necessary to consider using StringBuilder to replace the frequent "+";
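
As referenced above, a minimal convertView-reuse sketch for a ListView adapter using the ViewHolder pattern (the adapter class and the built-in layout used here are just illustrative):

```java
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.TextView;

public abstract class ReusingAdapter extends BaseAdapter {

    // Only inflate a new row when the ListView has no recycled view to offer;
    // the ViewHolder avoids repeated findViewById calls on every bind.
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        ViewHolder holder;
        if (convertView == null) {
            convertView = LayoutInflater.from(parent.getContext())
                    .inflate(android.R.layout.simple_list_item_1, parent, false);
            holder = new ViewHolder();
            holder.text = (TextView) convertView.findViewById(android.R.id.text1);
            convertView.setTag(holder);
        } else {
            holder = (ViewHolder) convertView.getTag();   // reuse the recycled row
        }
        holder.text.setText(String.valueOf(getItem(position)));
        return convertView;
    }

    private static class ViewHolder {
        TextView text;
    }
}
```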

3. Avoid object memory leaks

Leaked objects are never released in time, so they occupy valuable memory and make it easier to run out of free space later, leading to OOM. Leaks also shrink the usable space in each generation of the heap, which makes GC trigger more often, causes memory churn, and hurts performance.

1) Pay attention to Activity leakage

  • Generally speaking, Activity leakage is the most serious problem among memory leaks. It occupies a lot of memory and has a wide impact. We need to pay special attention to Activity leakage caused by the following two situations:
  • Inner-class references leak the Activity: the most typical case is a Handler. If the Handler has delayed messages, or its message queue is long, those messages keep the Handler, and therefore the Activity, alive. The reference chain is Looper -> MessageQueue -> Message -> Handler -> Activity. To fix it, remove the Handler's messages and runnables when the UI exits, or use a static inner class plus a WeakReference to break the reference from the Handler to the Activity;
  • The Activity Context is passed to other instances, which may cause it to be referenced and leaked;
  • Leaks caused by inner classes are not limited to Activities; they can happen anywhere an inner class appears. Prefer static inner classes where possible, and use the WeakReference mechanism to avoid leaks caused by mutual references (a sketch follows this list);
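
A sketch of the static-inner-class + WeakReference Handler pattern mentioned above (SafeActivity and UiHandler are hypothetical names):

```java
import android.app.Activity;
import android.os.Handler;
import android.os.Looper;
import android.os.Message;
import java.lang.ref.WeakReference;

public class SafeActivity extends Activity {

    // Static inner class: holds no implicit reference to the enclosing Activity.
    private static class UiHandler extends Handler {
        private final WeakReference<SafeActivity> ref;

        UiHandler(SafeActivity activity) {
            super(Looper.getMainLooper());
            ref = new WeakReference<>(activity);
        }

        @Override
        public void handleMessage(Message msg) {
            SafeActivity activity = ref.get();
            if (activity == null || activity.isFinishing()) {
                return;                     // Activity already gone, do nothing
            }
            // ... update the activity's UI here ...
        }
    }

    private final UiHandler handler = new UiHandler(this);

    @Override
    protected void onDestroy() {
        // Drop pending messages and callbacks so the message queue cannot keep
        // delayed work (and anything it references) alive after the UI exits.
        handler.removeCallbacksAndMessages(null);
        super.onDestroy();
    }
}
```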

2) Consider using Application Context instead of Activity Context

For most situations that do not strictly require an Activity Context (a Dialog's Context must be an Activity Context), consider using the Application Context instead, to avoid inadvertently leaking an Activity;

3) Pay attention to the timely recycling of temporary Bitmap objects

  • Although Bitmaps are usually put behind a cache, some of them should be recycled promptly. For example, a large temporary bitmap created only to be transformed into a new bitmap should be recycled as soon as the transformation is done, so that its memory can be freed sooner.
  • Pay special attention to Bitmap.createBitmap(): the bitmap it returns may be the very same object as the source bitmap. Before recycling, check whether the returned bitmap and the source bitmap are different references, and only call recycle() on the source when they are not the same object (see the sketch below).
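
A small sketch of the createBitmap() recycling check described above (the rotation transform is just an example):

```java
import android.graphics.Bitmap;
import android.graphics.Matrix;

public final class BitmapTransform {

    // Bitmap.createBitmap() may return the source bitmap itself (e.g. when the
    // transform is a no-op), so only recycle the source if a new object came back.
    public static Bitmap rotated(Bitmap source, float degrees) {
        Matrix matrix = new Matrix();
        matrix.postRotate(degrees);
        Bitmap result = Bitmap.createBitmap(source, 0, 0,
                source.getWidth(), source.getHeight(), matrix, true);
        if (result != source) {
            source.recycle();   // safe: result does not share the source object
        }
        return result;
    }

    private BitmapTransform() {}
}
```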

4) Note the deregistration of the listener

Many listeners in Android must be registered and unregistered. Make sure to unregister them at the appropriate time; if you add a listener manually, remember to remove it when it is no longer needed.
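
For example, a receiver registered in onStart() should be unregistered in the matching onStop(); the connectivity receiver below is only an illustration:

```java
import android.app.Activity;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.net.ConnectivityManager;

public class NetworkAwareActivity extends Activity {

    private final BroadcastReceiver receiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // react to connectivity changes here
        }
    };

    @Override
    protected void onStart() {
        super.onStart();
        registerReceiver(receiver,
                new IntentFilter(ConnectivityManager.CONNECTIVITY_ACTION));
    }

    @Override
    protected void onStop() {
        // Unregister in the matching lifecycle callback so the receiver
        // (and the Activity it references) cannot leak.
        unregisterReceiver(receiver);
        super.onStop();
    }
}
```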

5) Watch out for object leaks in cache containers

Sometimes we put objects into cache containers to make them reusable, but if those objects are never removed from the container, this too can leak memory. For example, on Android 2.3, adding a drawable to a cache container easily leaks the Activity, because the drawable holds a strong reference back to its View; from Android 4.0 onwards this particular problem no longer exists. To solve it on 2.3, the cached drawables must be specially wrapped so that the references are unbound and the leak avoided.

6) Pay attention to WebView leaks

WebView on Android has major compatibility problems: not only does it behave differently across Android versions, it also differs a lot between the WebViews shipped in different manufacturers' ROMs. More seriously, the standard WebView has a known memory-leak problem. The usual workaround is to run the WebView in a separate process that communicates with the main process via AIDL; that process can then be destroyed at a time chosen by the business logic, so the WebView's memory is released completely.

7) Pay attention to whether the Cursor object is closed in time

We query databases all the time, and it is easy to forget to close a Cursor after using it. Repeated Cursor leaks have a serious negative impact on memory management, so remember to close Cursor objects promptly.
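
A simple way to guarantee the Cursor is closed is try-with-resources (the users table and query are hypothetical):

```java
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;

public final class UserDao {

    // try-with-resources closes the Cursor even if reading it throws,
    // so the memory backing the cursor window is released promptly.
    public static int countUsers(SQLiteDatabase db) {
        try (Cursor cursor = db.rawQuery("SELECT COUNT(*) FROM users", null)) {
            return cursor.moveToFirst() ? cursor.getInt(0) : 0;
        }
    }

    private UserDao() {}
}
```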

4. Optimize memory usage strategy

1) Use large heap with caution

  • As mentioned earlier, Android devices have different sizes of memory space depending on the hardware and software settings, and they set different sizes of Heap limit thresholds for applications. You can get the available Heap size of the application by calling getMemoryClass(). In some special scenarios, you can declare a larger heap space for the application by adding the largeHeap=true attribute under the application tag of the manifest. Then, you can get this larger heap size threshold through getLargeMemoryClass(). However, the original intention of declaring a larger Heap threshold is for a small number of applications that consume a lot of RAM (such as a large picture editing application).
  • Don't just request a large heap size because you need more memory. Use a large heap only when you know where a lot of memory is being used and why it must be kept. So use the large heap property with caution. Using extra memory can affect the overall user experience of the system and make each GC run longer.
  • Extra memory here also hurts performance when switching tasks. Moreover, largeHeap does not guarantee a bigger heap: on devices with strict limits, the large heap is the same size as the normal one. So even after requesting largeHeap, check the limit you actually got via getMemoryClass()/getLargeMemoryClass() (a small check is sketched after this list).
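
A small check of the actual limits, as suggested in the last point (HeapInfo is a hypothetical helper class):

```java
import android.app.ActivityManager;
import android.content.Context;

public class HeapInfo {

    // Reports the per-app heap limits the current device actually grants.
    // Even with android:largeHeap="true" in the manifest, some devices
    // return the same value from both methods.
    public static String describe(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int normalMb = am.getMemoryClass();       // standard heap limit, in MB
        int largeMb = am.getLargeMemoryClass();   // limit when largeHeap is requested
        return "heap=" + normalMb + "MB, largeHeap=" + largeMb + "MB";
    }
}
```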

2) Consider the device memory threshold and other factors to design an appropriate cache size

When designing the Bitmap LRU cache for a ListView or GridView, the following points need to be considered (a sizing sketch follows the list):

  • How much memory is left for applications?
  • How many images will be displayed on the screen at once? How many images need to be cached in advance so that they can be displayed immediately when swiping quickly?
  • What is the screen size and density of the device? An xhdpi device will need a larger cache than an hdpi device to hold the same number of images.
  • What are the sizes and configurations of Bitmaps designed for different pages? How much memory will it take?
  • How often are the page images accessed? Are there some that are accessed more frequently than others? If so, perhaps you want to keep the most frequently accessed ones in memory, or set up multiple LruCache containers for different groups of bitmaps (grouped by access frequency).
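
Putting the first point into code, here is a sketch of an LruCache whose budget is derived from the app's heap limit; the 1/8 fraction is a common starting point, not a rule, and ThumbnailCache is a hypothetical name:

```java
import android.app.ActivityManager;
import android.content.Context;
import android.graphics.Bitmap;
import android.util.LruCache;

public class ThumbnailCache {

    private final LruCache<String, Bitmap> cache;

    // Size the cache relative to this app's heap limit rather than
    // using a fixed number of entries.
    public ThumbnailCache(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int heapKb = am.getMemoryClass() * 1024;
        cache = new LruCache<String, Bitmap>(heapKb / 8) {
            @Override
            protected int sizeOf(String key, Bitmap value) {
                return value.getByteCount() / 1024;   // measure entries in KB
            }
        };
    }

    public void put(String url, Bitmap bitmap) { cache.put(url, bitmap); }

    public Bitmap get(String url) { return cache.get(url); }
}
```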

3) onLowMemory() and onTrimMemory()

Android users switch quickly between applications. To make switching back to a background application fast, each background application keeps occupying a certain amount of memory, and the Android system decides how much background memory to reclaim based on the current memory pressure. An application resumed directly from the suspended (cached) state comes back quickly; one that has been killed and must restart comes back noticeably more slowly.

  • onLowMemory(): The Android system provides callbacks to notify an application about its memory situation. Generally speaking, when all background applications have been killed, the foreground application receives the onLowMemory() callback. At that point it should release its non-essential memory resources as soon as possible so the system can keep running smoothly.
  • onTrimMemory(int): Android system 4.0 and above also provides the onTrimMemory() callback. When the system memory reaches certain conditions, all running applications will receive this callback. At the same time, the following parameters will be passed in this callback, representing different memory usage situations. When receiving the onTrimMemory() callback, you need to make a judgment based on the type of parameter passed and reasonably choose to release some of your own memory usage. On the one hand, it can improve the overall running smoothness of the system, and on the other hand, it can prevent yourself from being judged by the system as an application that needs to be killed first.
  • TRIM_MEMORY_UI_HIDDEN: All UI interfaces of your application are hidden, that is, the user clicks the Home button or the Back button to exit the application, causing the application's UI interface to be completely invisible. At this time, some non-essential resources should be released when they are invisible.

When your app is running in the foreground, onTrimMemory() may be called with one of the following values:

  • TRIM_MEMORY_RUNNING_MODERATE: Your app is running and is not listed as killable. However, the device is running in a low memory state and the system begins to trigger the mechanism of killing processes in the LRU cache.
  • TRIM_MEMORY_RUNNING_LOW: Your app is running and is not listed as killable. However, the device is running in a low memory state and you should release unused resources to improve system performance.
  • TRIM_MEMORY_RUNNING_CRITICAL: Your app is still running, but the system has killed most of the processes in the LRU Cache, so you should release all non-essential resources immediately. If the system cannot reclaim enough RAM, the system will clear all processes in the LRU cache and start killing those processes that were previously considered not to be killed, such as the process that contains a running Service.

When the app process is cached in the background, onTrimMemory() may be called with one of the following values:

  • TRIM_MEMORY_BACKGROUND: The system is running low on memory and your process is in the least likely position to be killed in the LRU cache. Although your application process is not in high risk of being killed, the system may have started killing other processes in the LRU cache. You should release resources that are easily recoverable so that your process can be preserved and can be quickly recovered when the user returns to your application.
  • TRIM_MEMORY_MODERATE: The system is running low on memory and your process is close to the middle of the LRU list. If the system starts to become more memory-constrained, your process may be killed.
  • TRIM_MEMORY_COMPLETE: The system is running low on memory and your process is in the LRU list of processes most likely to be killed. You should release any resources that do not prevent your application from recovering.

Because the onTrimMemory() callback was added in API 14, for older versions, you can use the onLowMemory callback for compatibility. onLowMemory is equivalent to TRIM_MEMORY_COMPLETE.

Please note: When the system starts to remove processes from the LRU cache, although it first performs operations in the order of LRU, it also considers the memory usage of the processes and other factors. The processes with less memory usage are more likely to be kept.
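
Putting the levels above together, a minimal sketch of handling onTrimMemory()/onLowMemory() in a custom Application class (clearViewCaches() and clearAllCaches() are hypothetical hooks standing in for your own cache-release logic):

```java
import android.app.Application;
import android.content.ComponentCallbacks2;

public class MyApplication extends Application {

    @Override
    public void onTrimMemory(int level) {
        super.onTrimMemory(level);
        if (level >= ComponentCallbacks2.TRIM_MEMORY_MODERATE) {
            // Process is cached and fairly likely to be killed:
            // drop everything that can be rebuilt on demand.
            clearAllCaches();
        } else if (level >= ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN) {
            // UI is no longer visible: release view-related resources.
            clearViewCaches();
        }
    }

    @Override
    public void onLowMemory() {
        super.onLowMemory();
        // Pre-API-14 fallback, roughly equivalent to TRIM_MEMORY_COMPLETE.
        clearAllCaches();
    }

    private void clearViewCaches() { /* release bitmaps backing hidden UI */ }

    private void clearAllCaches() { /* release every cache that can be rebuilt */ }
}
```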

4) Resource files need to be stored in a suitable folder

We know that images in hdpi/xhdpi/xxhdpi and other folders with different dpi will be scaled on different devices. For example, if we only put a 100x100 image in the hdpi directory, then according to the conversion relationship, the image will be stretched to 200x200 when referenced by an xxhdpi phone. It should be noted that in this case, the memory usage will increase significantly. For images that you do not want to be stretched, you need to put them in the assets or nodpi directory.

5) Catch large memory allocations where appropriate

In some cases, we need to evaluate the code that may cause OOM in advance, and add a catch mechanism for the code that may cause OOM. We can consider trying a downgraded memory allocation operation in the catch. For example, when decoding a bitmap, if OOM is caught, we can try to double the sampling ratio and try decoding again.
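
A sketch of that degraded-retry idea (SafeDecoder is hypothetical; the retry count and doubling strategy are choices, not requirements):

```java
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class SafeDecoder {

    // If decoding throws OutOfMemoryError, double the sampling ratio
    // and retry a few times before giving up.
    public static Bitmap decodeFile(String path, int initialSampleSize) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inSampleSize = Math.max(1, initialSampleSize);
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                return BitmapFactory.decodeFile(path, options);
            } catch (OutOfMemoryError error) {
                options.inSampleSize *= 2;   // degrade quality instead of crashing
            }
        }
        return null;                         // caller must handle the failure
    }

    private SafeDecoder() {}
}
```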

6) Use static objects with caution

Because the life cycle of static is too long and consistent with the application process, improper use is likely to cause object leakage. Static objects should be used with caution in Android.

7) Pay special attention to unreasonable holdings in singleton objects

Although the singleton pattern is simple and practical and provides a lot of convenience, because the life cycle of the singleton is consistent with the application, improper use can easily lead to leakage of held objects.

8) Use Service resources sparingly

If your application needs a background service, the service should be stopped unless it has been triggered and is actually performing a task; also watch out for leaks caused by forgetting to stop a service after its work is done. When you start a service, the system prefers to keep the service's process alive, which makes that process expensive: the system cannot reclaim the RAM the service occupies for other components, and the service cannot be paged out. This reduces the number of processes the system can keep in its LRU cache, makes switching between applications less efficient, and can even destabilize overall memory usage so that not all running services can be kept. It is recommended to use IntentService, which stops itself as soon as it has finished the task handed to it (a sketch follows). For more information, read "Running in a Background Service".
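
A minimal IntentService sketch as referenced above (UploadService is a hypothetical name):

```java
import android.app.IntentService;
import android.content.Intent;

// IntentService runs its work on a worker thread and stops itself
// automatically once onHandleIntent() returns, so the process does not
// keep a started Service alive longer than necessary.
public class UploadService extends IntentService {

    public UploadService() {
        super("UploadService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // perform the one-off background task here, e.g. upload a file
    }
}
```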

9) Optimize layout hierarchy and reduce memory consumption

The flatter the view layout is, the less memory it takes up and the higher the efficiency. We need to try to ensure that the layout is flat enough. When the system-provided View cannot achieve a flat enough layout, consider using a custom View to achieve the goal.

10) Use "abstract" programming with caution

Developers often treat abstraction as "good programming practice" because it improves the flexibility and maintainability of code. However, abstraction comes with a real memory cost: it adds extra code that has to be executed and mapped into memory. If an abstraction does not bring a significant benefit, avoid it.

11) Serialize data using nano protobufs

Protocol buffers are designed by Google for serializing structured data. They are language-independent, platform-independent, and highly extensible. They are similar to XML, but are lighter, faster, and simpler than XML. If you need to serialize and protocolize your data, it is recommended to use nano protobufs. For more details, please refer to the "Nano version" section of the protobuf readme.

12) Use dependency injection frameworks with caution

Dependency-injection frameworks perform a lot of initialization by scanning your code, which means a large amount of code must be mapped into memory, and those mapped pages stay resident for a long time. Unless it is really necessary, use this technique with caution;

13) Use multi-process with caution

  • Running some components of an application in separate processes gives the application more total memory to work with, but the technique must be used with caution; most applications should not adopt multiple processes rashly. On the one hand, multi-process code is more complicated; on the other hand, used improperly it can significantly increase memory usage rather than reduce it. Consider it only if your application really needs to run a resident, non-lightweight background task;
  • A typical example is to create a music player that can play in the background for a long time. If the entire application runs in one process, when the background is playing, the UI resources in the foreground cannot be released. Such an application can be divided into two processes: one for operating the UI and the other for the background service.

14) Use ProGuard to remove unnecessary code

ProGuard can compress, optimize and obfuscate code by removing unnecessary code, renaming classes, fields and methods, etc. Using ProGuard can make your code more compact, which can reduce the memory space required for mapping code.

15) Use third-party libraries with caution

Much open-source library code was not written for mobile environments and may not be suitable for mobile devices. Even libraries designed for Android need special caution, especially when you do not know exactly what an imported library does. For example, one library may use nano protobufs while another uses micro protobufs, leaving two protobuf implementations in your application; similar conflicts can occur around logging, image loading, caching and so on. Also, do not import an entire library just for one or two functions: if no existing library closely matches your needs, consider implementing the feature yourself instead of pulling in a large, all-in-one solution.

Summary

Memory optimization does not mean that the less memory the program occupies, the better. If you frequently trigger GC operations in order to maintain a lower memory usage, it will lead to a decline in the overall application performance to a certain extent. Here we need to make comprehensive considerations and make certain trade-offs.

Android memory optimization involves a lot of knowledge: the details of memory management, the working principle of garbage collection, how to find memory leaks, etc. OOM is a prominent point in memory optimization. Minimizing the probability of OOM is of great significance to memory optimization.
