Author | Kenny is a senior front-end development engineer at Ctrip. He joined Ctrip in 2021 and works on mini-program/H5 R&D.

1. Background

As the project continues to iterate and grow, the drawbacks of the Taro3-based runtime have become increasingly prominent, especially on complex list pages, where they seriously hurt the user experience. This article focuses on the performance optimization of complex lists: we establish detection metrics, locate the performance bottlenecks, and present the technical solutions that survived our experiments (preloading, caching, flattening the component hierarchy, optimizing data structures, and so on), in the hope of offering some useful ideas.

2. Current status and problem analysis

Taking a multi-functional hotel list as an example (see the figure below), we set a detection standard, using the number of setData calls and the response time of each setData as indicators. The detection results are as follows:
For historical reasons, the code of this page was converted from WeChat native code to Taro 1 and then iterated up to Taro 3, so the project carries problems that would likely have been avoided in native mini-program development. Based on the indicator values measured repeatedly above and on visual experience, the following problems exist:

2.1 The first entry into the list page loads too slowly, with a long white screen
2.2 Updating the page's filter items is laggy, and the pull-down animation stutters
2.3 Updating the infinite list is laggy, and the screen goes white when scrolling too fast
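As context for the numbers above, indicators like these (setData call count and response time) can be collected by wrapping a page's setData. A rough sketch, where the instrumentSetData helper and the fake page are hypothetical illustrations, not project code:

```javascript
// Hypothetical helper: wrap a mini-program page's setData to count calls
// and accumulate how long each call takes (the callback fires after render).
function instrumentSetData(page, stats) {
  const originalSetData = page.setData.bind(page);
  page.setData = function (data, callback) {
    const start = Date.now();
    stats.calls += 1;
    originalSetData(data, function () {
      stats.totalMs += Date.now() - start;
      if (callback) callback();
    });
  };
}

// Usage with a fake page whose setData "renders" synchronously:
const stats = { calls: 0, totalMs: 0 };
const fakePage = {
  data: {},
  setData(data, cb) {
    Object.assign(this.data, data);
    if (cb) cb();
  },
};
instrumentSetData(fakePage, stats);
fakePage.setData({ a: 1 });
fakePage.setData({ b: 2 });
console.log(stats.calls); // → 2
```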
3. Optimization attempts

3.1 Preloading at jump time

By observing the mini program's requests, we found that two of the list-page requests take a long time. The Taro3 upgrade notes mention preloading: in a mini program there is a noticeable delay (about 300ms, and longer if a sub-package must first be downloaded) between calling a route-jump API such as Taro.navigateTo and the target page's onLoad firing. Some network requests can therefore be fired at the moment the jump is initiated. So we use Taro.preload to kick off the complex list's request before the jump: on page A, call Taro.preload before Taro.navigateTo; on page B, read the preloaded data back in onLoad via Taro.getCurrentInstance().preloadData.

After repeated tests with the same detection method, preload let us obtain the hotel list data 300~400ms earlier. The left side is the old list without preload, the right side the preloaded list, and the preloaded list is visibly faster. In practice, however, preload has drawbacks: on the receiving page, if the interface is complex, it intrudes into the business-flow code to some extent. In essence it only moves the network request earlier, so we can instead add a cache strategy to the network-request layer to achieve the same effect at a much lower integration cost.

3.2 Using setData reasonably

setData is the most frequently used API in mini-program development and also the one most likely to cause performance issues. The process of setData can be roughly divided into several stages:
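According to the WeChat documentation, the stages are roughly: serializing the data in the logic layer, transferring the serialized string to the view layer, and parsing plus re-rendering in the view layer. As a small illustration of the transfer stage (all data here is fabricated), the serialized payload of a full list dwarfs that of a targeted key-path patch:

```javascript
// Illustrative: the cost of the transfer stage tracks the size of the
// serialized payload, so re-sending a whole list costs far more than
// sending a small key-path patch.
const fullList = {
  hotels: Array.from({ length: 500 }, (_, i) => ({ id: i, name: "hotel-" + i })),
};
const smallPatch = { "hotels[0].name": "renamed" }; // key-path setData syntax

const fullSize = JSON.stringify(fullList).length;
const patchSize = JSON.stringify(smallPatch).length;
console.log(fullSize > 100 * patchSize); // → true
```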
The time consumed by data transmission is positively correlated with the amount of data. When the old list page loaded for the first time, it requested 4 interfaces in total and called setData 6 times in a short period, two of them with large payloads. The optimization we tried was to keep the two large-payload calls separate and merge the remaining calls, which were all scattered state and data updates, into a single one.
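The merging step can be sketched as a tiny batching helper (hypothetical code, not a Taro API): scattered patches queued within one tick are flushed as a single setData.

```javascript
// Hypothetical batching helper: queue small patches and flush them
// in a single setData call on the next microtask.
function createBatcher(page) {
  let pending = null;
  return function batchedSetData(patch) {
    if (pending) {
      Object.assign(pending, patch); // merge into the queued patch
      return;
    }
    pending = { ...patch };
    Promise.resolve().then(() => {
      page.setData(pending); // one setData instead of many
      pending = null;
    });
  };
}

// Usage: three scattered updates end up as one setData call.
const calls = [];
const fakePage = { setData: (d) => calls.push(d) };
const setData = createBatcher(fakePage);
setData({ loading: false });
setData({ cityName: "Shanghai" });
setData({ total: 128 });
Promise.resolve().then(() => {
  console.log(calls.length); // → 1
});
```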
After completing this step, the average time drops by about 200ms. The gain is modest because the number of nodes on the page has not changed, and most of setData's cost still lies in rendering.

3.3 Optimizing the number of nodes on the page

According to the official WeChat documentation, an oversized node tree increases memory usage and lengthens style recalculation. It recommends fewer than 1,000 nodes per page, a node tree no deeper than 30 levels, and no more than 60 children per node. In the WeChat developer tools we found a large number of nodes in two modules of this page: the filter-item module and the long-list module. Because the filter module has many functions and a complex structure, we render it selectively: while the user is browsing the list, the filter panel produces no concrete nodes; only when the user taps to expand the filter are its nodes rendered, which noticeably improves the feel of the list. As for the overall layout, we consciously avoid overly deep nesting, for example by using RichText and replacing some of the images.

3.4 Optimizing the filter items

3.4.1 Changing the animation approach

While refactoring the filter items, we found that the animation did not look good on some models. For example, opening a filter tab should play a pull-down effect, but the early implementation had two problems:
The old filter-item animation was a fadeIn implemented with keyframes and attached to the outermost layer, but no matter what, the frame in which the animation appears flashes. Analysis showed the flash comes from the freeze caused by the keyframe animation, which looked roughly like:

    .filter-wrap {
      animation: fadeIn 200ms ease-in both;
    }
    @keyframes fadeIn {
      from { opacity: 0; }
      to   { opacity: 1; }
    }

So we changed the implementation and animate with transition instead, along these lines:

    .filter-wrap {
      opacity: 0;
      transition: opacity 200ms ease-in;
    }
    .filter-wrap.show {
      opacity: 1;
    }

3.4.2 Keeping state simple

When operating the filter items, every operation had to loop through the nested filter data structure by unique ID to find the corresponding item, change that item's state, and then set the whole structure back again. The official documentation notes that setState should avoid processing too much data, as that hurts page-update performance. Our approach to this problem was:
The original filter data is a deeply nested structure: groups containing items, each item carrying its own selected state. The flattened filter data structure instead keeps only a simple map from each selected item's unique ID to its state.
Flattening is a one-pass transformation over the nested groups. After the change, we only need to maintain a very simple object, adding or deleting its properties. Performance improves slightly, and the code is simpler and neater. There are many places like this in business code where a data-structure conversion improves efficiency. For the filter items, comparing the average detection data shows a reduction of 200ms~300ms:
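A minimal sketch of such a flattening (the field names here are illustrative, not the project's actual structure):

```javascript
// Illustrative nested filter structure: groups of selectable items.
const filterGroups = [
  { id: "price", items: [{ id: "p1", selected: false }, { id: "p2", selected: true }] },
  { id: "star", items: [{ id: "s5", selected: true }] },
];

// Flatten: index only the *selected* items by their unique id, so that
// toggling one filter is a single property add/delete instead of a
// deep walk plus re-setting the whole tree.
const flattenFilters = (groups) => {
  const selected = {};
  groups.forEach((group) => {
    group.items.forEach((item) => {
      if (item.selected) selected[item.id] = true;
    });
  });
  return selected;
};

const selectedMap = flattenFilters(filterGroups);
console.log(selectedMap); // → { p2: true, s5: true }

// Toggling is now O(1):
delete selectedMap.p2; // unselect
selectedMap.p1 = true; // select
```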
3.5 Optimizing long lists

Early on, the hotel list page introduced a virtual list that renders only a fixed number of hotels at a time. The core idea is to render only the data visible on screen: listen to the scroll event, recompute which items need to be rendered, and leave an empty placeholder element in place of the items that do not.
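The window computation at the core of such a virtual list can be sketched as follows (assuming a fixed item height, which is a simplification; real lists may measure item heights):

```javascript
// Compute which slice of the list to render and the placeholder heights,
// given a fixed item height and the current scroll position.
function computeWindow({ scrollTop, viewportHeight, itemHeight, total, overscan = 2 }) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const last = Math.min(total, Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan);
  return {
    start: first,
    end: last,                              // render items[first, last)
    topPad: first * itemHeight,             // empty placeholder above
    bottomPad: (total - last) * itemHeight, // empty placeholder below
  };
}

// 1000 items of 100px each, 600px viewport, scrolled to item 50:
const w = computeWindow({ scrollTop: 5000, viewportHeight: 600, itemHeight: 100, total: 1000 });
console.log(w.start, w.end); // → 48 58
```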
The data shows that pulling to update the list takes about 1900ms on average:
1) Preloading the next page of data

The first solution is to request the next page's data in advance and keep it in a memory variable; when the user scrolls, the list takes the data straight from memory and applies it with setData.
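That prefetch-into-memory step can be sketched as follows (fetchPage, createPager, and the fake page are stand-ins for the real list request and page, not project code):

```javascript
// Sketch: keep the next page of list data in a memory variable so that
// "load more" only needs a setData, and kick off the following prefetch
// in the background right afterwards.
function createPager(fetchPage) {
  let nextPageNo = 1;
  let prefetched = fetchPage(nextPageNo); // prefetch before it is needed

  return async function loadMore(page) {
    const items = await prefetched;       // usually already resolved
    nextPageNo += 1;
    prefetched = fetchPage(nextPageNo);   // start prefetching the next one
    page.setData({ hotels: page.data.hotels.concat(items) });
  };
}

// Usage with a fake request and a fake page:
const fetchPage = (n) => Promise.resolve([`hotel-${n}`]);
const fakePage = {
  data: { hotels: [] },
  setData(patch) { Object.assign(this.data, patch); },
};
const loadMore = createPager(fetchPage);
loadMore(fakePage)
  .then(() => loadMore(fakePage))
  .then(() => {
    console.log(fakePage.data.hotels); // → [ 'hotel-1', 'hotel-2' ]
  });
```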
2) CustomWrapper

To improve performance, the official docs recommend CustomWrapper, which isolates the wrapped components from the page: when a wrapped component re-renders, the whole page is not updated, and page.setData becomes component.setData. Custom components are implemented on top of Shadow DOM, which encapsulates the component's DOM and CSS so that its internals are separated from the main page's DOM. The #shadow-root in the figure is the shadow root; it renders separately from the main document, and shadow roots can nest to form a node tree (Shadow Tree). In the compiled output, the wrapped subtree sits under a <custom-wrapper is="custom-wrapper"> node.

Because the wrapped components are isolated, updates to their internal data do not affect the rest of the page. Looking at low-performance clients, the effect is obvious: tapping at the same moment, the pop-up on the right appears 200ms~300ms faster on average (measured on the same model in the same environment), and the lower-end the device, the more obvious the gap. (The right side uses CustomWrapper.)

3) Using the mini program's native components

We also implemented the list item as a native mini-program component. Native components bypass the Taro3 runtime: when the user operates the page, a Taro3 component must diff the previous and next data, generate the node data for the new virtual DOM, and then call the mini program's API to manipulate nodes; a native component skips this whole pipeline and updates the underlying mini-program data directly, saving that time. The effect after the change:
It can be seen that native components deliver a big improvement, shortening the average list-update time by about 1s. However, going native also has disadvantages, mainly in two respects, which we weigh below.
Comparing the three solutions, each improves performance more than the last. Since the original purpose of using Taro is cross-end development, going fully native would defeat that goal. We are therefore exploring a compile-time plugin that generates the corresponding native mini-program component code, to get the best of both.

3.6 React.memo

When a complex page has many child components, rendering the parent causes all children to re-render as well. React.memo can prevent unnecessary re-renders with a shallow comparison:

    const MyComponent = React.memo(function MyComponent(props) {
      /* render using props */
    });

React.memo is a higher-order component. It is very similar to React.PureComponent, but it works on function components rather than class components. If your function component renders the same result given the same props, wrapping it in a React.memo call lets React skip rendering the component and reuse the result of the most recent render. By default it only shallowly compares complex objects in the props; to control the comparison, pass a custom comparison function as the second argument:

    function MyComponent(props) {
      /* render using props */
    }
    function areEqual(prevProps, nextProps) {
      /* return true if passing nextProps to render would produce
         the same result as passing prevProps */
    }
    export default React.memo(MyComponent, areEqual);

4. Conclusion

We spent a long time optimizing the performance of complex lists and tried every optimization point we could, from preloading the list page, to changing the filter items' data structure and animation implementation, to improving the long-list experience and combining with native components, all of which improved page update and rendering efficiency. We are still watching closely and continuing to explore. Below is a comparison of the final results (the right side is after optimization):