Random Access Memory (RAM) is a valuable resource in any software development environment, and this is especially true on mobile operating systems, where physical memory is usually limited. Although Android's Dalvik virtual machine performs routine garbage collection, that does not mean you can ignore when and where your app allocates and releases memory. For the GC to reclaim memory from your app in a timely manner, you need to avoid memory leaks (usually caused by holding object references in global members) and release references at the appropriate time (in the lifecycle callbacks described below). For most apps, Dalvik's GC automatically reclaims objects once they are no longer reachable from any active thread.

This article explains how Android manages app processes and memory allocation, and how you can proactively reduce memory usage while developing an Android application. For more information about Java's resource management mechanisms, refer to other books or online materials. If you are looking for an article on how to analyze your memory usage, see Investigating Your RAM Usage.

How Android manages memory

Android does not provide swap space, but it does use paging and memory-mapping (mmapping) to manage memory. This means that any memory you modify (whether by allocating new objects or touching the contents of mmapped pages) stays resident in RAM and cannot be paged out. Therefore, the only way to fully release memory from your app is to release object references you may be holding; once an object is no longer referenced by anything else, the GC can reclaim it. There is one exception: files that are mmapped in without modification, such as code, can be paged out of RAM if the system wants to use that memory elsewhere.

1) Sharing memory

Android shares RAM across processes in the following ways: Each app process is forked from a process called Zygote. The Zygote process starts when the system boots and loads common framework code and resources.
To start a new app process, the system forks the Zygote process and then loads and runs the app's code in the new process. This allows most of the RAM pages allocated for framework code and resources to be shared across all app processes.

Most static data is mmapped into a process. This not only allows the same data to be shared between processes, but also allows it to be paged out when needed. Examples of such static data include Dalvik code, app resources, and native code. For information on how to inspect the shared memory your app uses, see Investigating Your RAM Usage.

2) Allocating and reclaiming app memory

Here are some facts about how Android allocates and reclaims memory: The Dalvik heap of each process is constrained to a single virtual memory range. This defines the logical heap size, which can grow as needed, up to an upper limit defined by the system.

3) Restricting app memory

To maintain a multitasking environment, Android sets a hard heap size limit for each app. The exact limit varies between devices depending on how much RAM the device has. If your app has reached the limit and tries to allocate more memory, it receives an OutOfMemoryError.

In some cases, you may want to query the current device's heap limit and use it to decide, for example, how large a cache to keep. You can query it with getMemoryClass(), which returns an integer indicating your app's heap limit in megabytes.

4) Switching apps

Android does not swap memory when the user switches between apps. Instead, Android keeps processes that do not host a foreground component in an LRU cache. For example, when the user first starts an app, the system creates a process for it; when the user leaves the app, the process is not destroyed immediately. The system keeps the process cached, so if the user returns to the app later, the process is reused, making switching between apps fast.
If your app has a cached process, that process retains memory the app does not currently need, and this constrains the system's overall performance. As the system runs low on memory, it kills processes in the LRU cache based on LRU order and other factors. To keep your process cached as long as possible, follow the sections below to learn when to release your references. For more about how processes are handled when not in the foreground, see Processes and Threads.

How your app should manage memory

You should consider RAM limitations at every stage of development, including the design phase before you start writing code. There are many ways to design and implement a feature, with different efficiencies, even when they are just combinations and variations of the same techniques. To make your app as efficient as possible, follow the technical points below as you design and implement your code.

1) Use services sparingly

If your app needs a service to perform work in the background, do not keep the service running unless it is actively performing a task, and be careful of the leak caused by never stopping a service after its work completes. When you start a service, the system prefers to keep that service's process alive, which makes the process expensive: the RAM occupied by a service cannot be used by other components or paged out. This reduces the number of processes the system can keep in its LRU cache, making app switching less efficient, and it can even destabilize system memory usage when the system can no longer host all of the currently running services. The best way to limit your service's lifespan is to use an IntentService, which stops itself as soon as it finishes handling the intent that started it.
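A minimal sketch of this pattern (the class name DownloadService and the work performed are hypothetical; IntentService runs onHandleIntent() on a worker thread and stops itself once all queued intents are handled):

```java
import android.app.IntentService;
import android.content.Intent;

// Sketch of a short-lived background worker. IntentService queues incoming
// intents, handles them one at a time on a worker thread, and stops itself
// automatically when the queue is empty, so its process can be cached.
public class DownloadService extends IntentService {
    public DownloadService() {
        super("DownloadService"); // name for the worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Perform the one-off task here. When this method returns and no
        // more intents are queued, the service stops itself.
    }
}
```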
For more information, read Running in a Background Service. Keeping a service around when it is no longer needed is one of the worst memory-management mistakes an Android app can make. So don't be greedy about keeping a service alive: not only does it make your app more likely to perform poorly when RAM runs short, it also lets users notice the resident background behavior, and they may uninstall the app.

2) Release memory when your UI becomes hidden

When the user switches to another app and your app's UI is no longer visible, you should release the memory resources used only by your UI. Releasing UI resources at this point significantly increases the system's capacity for cached processes, which has a direct impact on the user experience.

To be notified when the user leaves your UI, implement the onTrimMemory() callback in your Activity classes and listen for the TRIM_MEMORY_UI_HIDDEN level, which means all of your UI is now hidden and you should free resources that only your UI uses.

Note: your app receives the onTrimMemory() callback with TRIM_MEMORY_UI_HIDDEN only when all UI components become hidden. This is different from onStop(), which runs whenever an activity instance becomes hidden; for example, when the user moves from one activity in your app to another, the previous activity's onStop() runs. So implement onStop() to release activity-level resources, such as closing network connections and unregistering broadcast receivers, but do not release your UI resources until you receive the onTrimMemory(TRIM_MEMORY_UI_HIDDEN) callback. This ensures that if the user navigates back from another activity, your UI resources are still available and the activity can resume quickly.
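The UI_HIDDEN handling above, together with the device-wide levels discussed next, can be sketched as a pure decision function. This is an illustrative sketch: TrimPolicy and the action names are hypothetical, but the numeric constants mirror the documented android.content.ComponentCallbacks2 values.

```java
// Illustrative sketch: map an onTrimMemory() level to a coarse action.
// TrimPolicy and the action names are hypothetical; the constants mirror
// the documented android.content.ComponentCallbacks2 values.
class TrimPolicy {
    static final int TRIM_MEMORY_RUNNING_MODERATE = 5;  // running, memory getting low
    static final int TRIM_MEMORY_RUNNING_LOW = 10;
    static final int TRIM_MEMORY_RUNNING_CRITICAL = 15;
    static final int TRIM_MEMORY_UI_HIDDEN = 20;        // all UI hidden
    static final int TRIM_MEMORY_BACKGROUND = 40;       // cached, low kill risk
    static final int TRIM_MEMORY_MODERATE = 60;
    static final int TRIM_MEMORY_COMPLETE = 80;         // cached, next to be killed

    static String actionFor(int level) {
        if (level >= TRIM_MEMORY_COMPLETE) return "release-everything";
        if (level >= TRIM_MEMORY_BACKGROUND) return "release-easily-rebuilt-caches";
        if (level >= TRIM_MEMORY_UI_HIDDEN) return "release-ui-resources";
        if (level >= TRIM_MEMORY_RUNNING_MODERATE) return "trim-caches-while-running";
        return "no-op";
    }
}
```

In a real app this logic would live inside the Activity's onTrimMemory(int) override, with each branch actually releasing the corresponding caches.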
3) Release memory as memory becomes tight

At any stage of your app's lifecycle, the onTrimMemory() callback can also tell you when memory on the device as a whole is getting tight. You should decide which resources to release based on the level passed to onTrimMemory().

While your app is running, you may receive TRIM_MEMORY_RUNNING_MODERATE: your app is running and is not considered killable, but the device is running low on memory and the system is starting to kill processes in the LRU cache. (The related levels TRIM_MEMORY_RUNNING_LOW and TRIM_MEMORY_RUNNING_CRITICAL indicate progressively tighter conditions.)

When your app's process is cached, you may instead receive a level such as TRIM_MEMORY_BACKGROUND: the system is running low on memory and your process is among the cached processes least likely to be killed. Although your process is not yet at high risk, the system has already begun killing other processes in the LRU cache. You should release resources that are easy to rebuild, so that your process stays cached and resumes quickly when the user returns to your app. (TRIM_MEMORY_MODERATE and TRIM_MEMORY_COMPLETE indicate that your process is progressively closer to being killed.)

Because onTrimMemory() was added in API level 14, on older versions you can use the onLowMemory() callback for compatibility; it is roughly equivalent to TRIM_MEMORY_COMPLETE.

Note: when the system starts killing processes in the LRU cache, although it works primarily in LRU order, it also considers each process's memory usage, so processes that consume less memory are more likely to be kept.

4) Check how much memory you should use

As mentioned earlier, each Android device has a different amount of RAM, so different devices give apps different heap limits. You can get your app's available heap size by calling getMemoryClass(); if your app tries to allocate more than that, an OutOfMemoryError occurs.

In some special cases, you can request a larger heap by adding the android:largeHeap="true" attribute to the <application> tag in your manifest.
If you do this, you can query the larger heap limit with getLargeMemoryClass(). However, the large heap is intended for a small set of apps that can justify consuming a lot of RAM (such as a large photo-editing app). Don't request a large heap simply because you are using a lot of memory; use one only when you know exactly where all the memory is going and why it must be retained, and use it as sparingly as possible. Using extra memory hurts the overall user experience of the system and makes each GC run take longer, and system performance suffers when switching tasks. In addition, requesting a large heap does not guarantee you get one: on some tightly constrained devices, the large heap is the same size as the regular heap. So even after requesting a large heap, you should still check the actual limit with getMemoryClass().

5) Avoid wasting memory with bitmaps

When you load a bitmap, keep it in RAM only at the resolution the current screen needs, scaling it down if the original is larger. Keep in mind that memory use grows with the square of a bitmap's scale, because both the X and Y dimensions grow: doubling both sides quadruples the pixel count.

Note: in Android 2.3.x (API level 10) and below, a bitmap's pixel data is stored in native memory, which makes it hard to debug. Starting with Android 3.0 (API level 11), pixel data is allocated in your app's Dalvik heap, which improves GC behavior and debuggability. So if your app's bitmap usage causes memory problems on an old device, switch to a device running 3.0 or above to debug it.

6) Use optimized data containers

Use the optimized container classes in the Android framework, such as SparseArray, SparseBooleanArray, and LongSparseArray.
The generic HashMap implementation is memory-inefficient because it needs a separate entry object for every mapping. SparseArray is also more efficient because it avoids autoboxing keys (and therefore the box-then-unbox round trip).

7) Be aware of memory overhead

Understand the costs and overhead of the language and libraries you use, and keep this in mind when designing your app, from start to finish. Things that look innocuous on the surface can carry a lot of overhead. For example, enums often require more than twice as much memory as static constants; you should strictly avoid enums on Android.

8) Be careful with code abstractions

Developers often use abstractions as a "good programming practice" because they improve the flexibility and maintainability of code. However, abstractions come at a cost: generally they require a fair amount of additional code to be executed, and that code must be mapped into RAM. So if an abstraction does not bring a significant benefit, avoid it.

9) Use nano protobufs for serialized data

Protocol buffers are a language-neutral, platform-neutral, extensible mechanism designed by Google for serializing structured data; they are similar to XML, but smaller, faster, and simpler. If you use protobufs for your data, you should always use nano protobufs in client code. Regular protobufs generate extremely verbose code, which causes many problems for an app: increased RAM use, a significantly larger APK, slower execution, and quickly hitting DEX symbol limits. For details, see the "Nano version" section of the protobuf readme.

10) Avoid dependency injection frameworks

Using a dependency injection framework such as Guice or RoboGuice can be appealing because it simplifies your code. Note: RoboGuice 2, for instance, restyles your code through dependency injection to make the Android development experience better.
Do you often forget to check getIntent().getExtras() for null? RoboGuice 2 can help. Find it tedious to cast the return value of findViewById() to a TextView? RoboGuice 2 can help. RoboGuice takes this guesswork out of Android development by handling the details of injecting your Views, Resources, System Services, and other objects. However, these frameworks perform a lot of initialization by scanning your code, which requires mapping significant amounts of your code into RAM, and those mapped pages tend to stay resident for a long time.

11) Be careful about using external libraries

Much open-source library code is not written for mobile environments and can be inefficient on a mobile device. When you decide to use a third-party library, be prepared for the tedious work of porting and maintaining it for mobile. Even libraries designed for Android can be risky, because each library does things differently: for example, one library may use nano protobufs while another uses micro protobufs, leaving you with two protobuf implementations in your app. The same kind of conflict can arise around logging, image loading, caching, and so on. Also, don't fall into the trap of importing an entire library for one or two features; if no library closely fits your needs, consider implementing the functionality yourself rather than pulling in a large, all-in-one solution.

12) Optimize overall performance

There are many official articles on optimizing overall app performance: see Best Practices for Performance. This article is one of them; some of the others explain how to optimize your app's CPU usage, and some explain how to optimize its memory usage. You should also read about optimizing your UI to improve your layouts, and pay attention to the suggestions made by the lint tool.
13) Use ProGuard to strip unneeded code

ProGuard shrinks, optimizes, and obfuscates your code by removing unused code and renaming classes, fields, and methods. Using ProGuard makes your code more compact, which means less RAM is needed to map it.

14) Use zipalign on the final APK

After your APK is generated by the build system, you must align it with zipalign. Skipping this step can make your APK require more RAM, because resources such as images can no longer be mmapped. Note: Google Play does not accept APKs that are not zipaligned.

15) Analyze your RAM usage

Once you have a relatively stable build, analyze your app's memory usage across all stages of its lifecycle and optimize accordingly. For details, see Investigating Your RAM Usage.

16) Use multiple processes

If it suits your app, an advanced technique for managing memory is splitting your app's components across multiple processes. This technique must be used with caution, and most apps should not use it: used incorrectly, it can significantly increase memory usage rather than reduce it. It is mainly useful for apps that need to run significant work in the background as well as in the foreground. A typical example is a music player that plays for a long time in the background: if the whole app runs in one process, the foreground UI resources cannot be released while background playback continues. Such an app can be split into two processes, one for the UI and one for the background playback service. You can make a component run in its own process by declaring the android:process attribute in the manifest file.
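A minimal sketch of such a manifest, under the assumption of a hypothetical music-player app (the component names and the ":playback" process suffix are illustrative):

```xml
<application android:label="MusicPlayer">
    <!-- The UI runs in the app's default process. -->
    <activity android:name=".PlayerActivity" />
    <!-- The playback service runs in a separate private process (the
         name starts with ':'), so the UI process can be cached or
         killed independently while music keeps playing. -->
    <service android:name=".PlaybackService"
             android:process=":playback" />
</application>
```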