Taro Performance Optimization: Complex Lists

Author | Kenny, a senior front-end development engineer at Ctrip. He joined Ctrip in 2021 and works on mini-program and H5 R&D.

1. Background

As the project keeps iterating and growing, the drawbacks of the Taro 3 runtime have become increasingly prominent, especially on complex list pages, where they seriously hurt the user experience. This article focuses on performance optimization of complex lists: we establish measurement metrics, analyze the performance bottlenecks, and share the techniques that survived our experiments, including preloading, caching, flattening the component hierarchy, and optimizing data structures, in the hope of offering some useful ideas.

2. Current status and analysis of the problem

We take a multi-functional hotel list as an example (see the figure below) and use the number of setData calls and the setData response time as metrics. The results are as follows:

| Index | setData calls | Rendering time (ms) |
|---|---|---|
| Entering the list page for the first time | 7 | 2404 |
| Pull-down update of the long list | 3 | 1903 |
| Updating filter items in a multi-screen list | 2 | 1758 |
| Updating list items in a multi-screen list | 2 | 748 |

For historical reasons, the code of this page was converted from WeChat native code to Taro 1 and then iterated up to Taro 3, so the project contains problems that are easy to overlook in code written in the native mini-program style. Based on the repeatedly measured metrics above and the visual experience, the following problems exist:

2.1 First entry into the list page loads too slowly, with a long white screen

  • The list page's API request takes too long;
  • The initial list data is too large and setData is called too many times;
  • There are too many page nodes, so rendering takes a long time.

2.2 Updating the page's filter items is janky, and the pull-down animation stutters

  • The filter items contain too many nodes, and each update pushes a large amount of data through setData;
  • Updating the filter item components causes the whole page to update as well.

2.3 Updating the infinite list is janky, and the screen goes white when scrolling too fast

  • The next page is requested too late;
  • Each setData carries a large amount of data and responds slowly;
  • When scrolling too fast, there is no transition between the white screen and the finished rendering, so the experience is poor.

3. Try the optimization solution

3.1 Preloading at navigation time

By observing the requests of the mini program, we can find that among the list page requests, two requests take a long time.

Taro 3's upgrade notes mention preloading. In a mini program there is a certain delay (about 300ms, longer if a sub-package must first be downloaded) between calling a navigation API such as Taro.navigateTo and the target page's onLoad firing. Some network requests can therefore be issued at the moment the navigation starts. So we use Taro.preload to fire the complex list's request before the jump:

// Page A
const query = new Query({
  // ...
})

Taro.preload({
  RequestPromise: requestPromiseA({ data: query }),
})

// Page B
componentDidMount() {
  // The request was fired during navigation; since preload stores a
  // promise, page B has to resolve it:
  Taro.getCurrentInstance().preloadData?.RequestPromise?.then(res => {
    this.setState(this.processResData(res.data))
  })
}

After repeated tests with the same measurement method, preloading lets us obtain the hotel list data 300~400ms earlier.

The left side is the old list without preload, and the right side is the preloaded list. It can be clearly seen that the preloaded list is faster.

However, in actual use we found that preload has some drawbacks. On the receiving page, if the interface is relatively complex, it intrudes into the business-flow code to a certain extent. In essence, preloading just moves the network request earlier, so the same effect can be achieved by adding a cache layer to the network request module, which greatly reduces the integration cost.
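One way to sketch such a cache layer (a hypothetical utility, not our production code; the key and TTL are illustrative): requests are keyed, and every caller within the TTL shares the same in-flight promise, so it no longer matters whether the navigation handler or the target page fires first.

```javascript
// Hypothetical promise cache: callers with the same key inside the TTL
// share one request instead of firing duplicates.
const requestCache = new Map()

function cachedRequest(key, fetcher, ttlMs = 3000) {
  const hit = requestCache.get(key)
  if (hit && Date.now() - hit.time < ttlMs) {
    return hit.promise // reuse the in-flight (or recently resolved) promise
  }
  const promise = fetcher()
  requestCache.set(key, { promise, time: Date.now() })
  return promise
}
```

The navigation handler can call `cachedRequest` before the jump and the target page can call it again with the same key in componentDidMount; whichever runs second simply receives the cached promise.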

3.2 Reasonable use of setData

setData is the most frequently used API in mini-program development and is also the one that is most likely to cause performance issues. The process of setData can be roughly divided into several stages:

  • Traversal and update of the virtual DOM tree in the logic layer, triggering component lifecycle and observer, etc.;
  • Transfer data from the logic layer to the view layer;
  • The update of the view layer virtual DOM tree and the real DOM elements triggers the page rendering update.

The time spent transferring data is positively correlated with the amount of data. When the old list page loaded for the first time, four interfaces were requested in total, and setData was called many times within a short window, twice with large payloads. Our optimization was to keep those two large calls separate and merge the remaining small, scattered state updates into a single call.
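The merging can be sketched as a tiny batching helper (a hypothetical utility, not a Taro or WeChat API): patches accumulate synchronously and are flushed with one setData on the next microtask.

```javascript
// Hypothetical batching helper: collect scattered patches and flush
// them with a single setData instead of several small ones.
let pending = {}
let flushScheduled = false

function batchedSetData(page, patch) {
  Object.assign(pending, patch)
  if (flushScheduled) return
  flushScheduled = true
  Promise.resolve().then(() => {
    const data = pending
    pending = {}
    flushScheduled = false
    page.setData(data) // one transfer to the view layer
  })
}
```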

| Index | setData calls | setData time (ms) | Time reduction |
|---|---|---|---|
| Entering the list page for the first time | 3 | 2182 | 9.23% |

After completing this step, the average time drops by about 200ms. The effect is small because the number of page nodes has not changed, and most of setData's cost still lies in rendering.

3.3 Optimize the number of nodes on the page

According to the WeChat official documentation, a node tree that is too large increases memory usage and lengthens style reflow. It recommends fewer than 1,000 nodes per page, a tree depth under 30 levels, and no more than 60 child nodes per node.

In the WeChat developer tools we found two modules with very large node counts on this page: the filter item module and the long list module. Because the filter module has many functions and a complex structure, we render it selectively: while the user is browsing the list, the filter items generate no concrete nodes; only when the user taps to expand the filters are the nodes rendered, which relieves the list page to a certain extent. For the overall layout, we consciously avoid overly deep nesting, for example by using RichText and replacing some of the images.
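The idea of selective rendering can be sketched framework-agnostically (the node shapes below are illustrative): the heavy filter panel contributes zero nodes until the user expands it.

```javascript
// Build the node list for the filter bar; the heavy panel and its
// children are only materialized when `expanded` is true.
function renderFilterBar(expanded, filters) {
  const nodes = [{ type: 'tabs' }]
  if (expanded) {
    nodes.push({
      type: 'panel',
      children: filters.map(f => ({ type: 'item', id: f.id })),
    })
  }
  return nodes
}
```

In Taro/React this corresponds to a conditional render such as `{expanded && <FilterPanel />}`, so the collapsed state mounts no filter nodes at all.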

3.4 Optimize filter items

3.4.1 Change animation mode

During the process of refactoring the filter items, we found that the animation effect of the mini program was not ideal on some models. For example, when opening the filter item tab, we need to implement a pull-down effect. In the early implementation, there were two problems:

  • The animation flashes and then reappears;
  • When the filter page has too many nodes, the tap response is too slow and the experience is poor.

The old filter animation used a fadeIn keyframe animation attached to the outermost layer, but no matter what, the frame where the animation appeared would flash. Analysis showed the freeze was caused by the keyframe animation:

.filter-wrap {
  animation: .3s ease-in fadeIn;
}

@keyframes fadeIn {
  0% {
    transform: translateY(-100%);
  }
  100% {
    transform: translateY(0);
  }
}

So we changed the implementation and used a transition instead:

.filter-wrap {
  transform: translateY(-100%);
  transition: none;
  &.active {
    transform: translateY(0);
    transition: transform .3s ease-in;
  }
}

3.4.2 Maintaining a simple state

When operating the filter items, each operation had to loop through the filter data structure, find the corresponding item by its unique ID, change that item's state, and then set the entire structure again. The official documentation notes that setState should avoid processing too much data, as it hurts the page's update performance.

The approach to this problem is:

  • Flatten complex objects in advance, as shown below:

{
  "a": {
    "subs": [{
      "a1": {
        "subs": [{
          "id": 1
        }]
      }
    }]
  },
  "b": {
    "subs": [{
      "id": 2
    }]
  },
  // ...
}

The flattened filter item data structure:

{
  "1": {
    "id": 1,
    "name": "Han Ting",
    "includes": [],
    "excludes": [],
    // ...
  },
  "2": {
    // ...
  },
  // ...
}
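A flattening pass that turns the first structure into the second could be sketched as follows (a minimal sketch: only the `subs` and `id` fields come from the example above; the traversal details are our own assumption):

```javascript
// Recursively walk the nested filter tree and index every node that
// carries an `id` into a flat map keyed by that id.
function flattenFilters(tree) {
  const flat = {}
  const visit = node => {
    if (node == null || typeof node !== 'object') return
    if (Array.isArray(node)) {
      node.forEach(visit)
      return
    }
    if (node.id !== undefined) flat[node.id] = node
    // Descend into `subs` arrays and nested keyed children alike
    Object.keys(node).forEach(key => {
      if (key !== 'id') visit(node[key])
    })
  }
  visit(tree)
  return flat
}
```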

  • Without changing the original data, a dynamic selection list is maintained using the flattened data structure:

const flattenFilters = data => {
  // ...
  return {
    [id]: {
      id: 2,
      name: "All Seasons",
      includes: [],
      excludes: [],
      // ...
    },
    // ...
  }
}

const filters = [], filtersSelected = {}
const flatFilters = flattenFilters(filters)

const onClickFilterItem = item => {
  // Every operation first looks up the flattened item
  const flatItem = flatFilters[item.id]

  if (filtersSelected[flatItem.id]) {
    // Already selected: uncheck it
    delete filtersSelected[flatItem.id]
  } else {
    // Not selected: select it
    filtersSelected[flatItem.id] = flatItem
    // Uncheck mutually exclusive items
    const idsSelected = Object.keys(filtersSelected)
    const idsIntersection = intersection(idsSelected, flatItem.selfExcludes) // e.g. lodash's intersection
    if (idsIntersection.length) {
      idsIntersection.forEach(id => {
        delete filtersSelected[id]
      })
    }

    // Other logic (quick filters, keywords, etc.)
  }

  this.setState({ filtersSelected })
}

The above is a simple implementation. Compared with before, we only need to maintain a very simple object and add or delete its properties. Performance improves slightly, and the code is simpler and neater. There are many places in business code where efficiency can be improved by this kind of data structure conversion.

For the filter items, comparing the averaged measurements, the time drops by roughly 200ms~300ms, which is a worthwhile improvement:

| Index | setData time before (ms) | setData time after (ms) | Time reduction |
|---|---|---|---|
| Expanding filter items in the long list | 1023 | 967 | 5.47% |
| Tapping a filter option in the long list | 1758 | 1443 | 17.92% |

3.5 Optimizing long lists

Early on, the hotel list page introduced a virtual list that renders only a fixed number of hotels at a time. The core idea is to render only the data visible on screen: listen to the scroll event, recalculate which items need rendering, and leave an empty placeholder element for those that do not.
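The "only render what is on screen" calculation can be sketched as follows, assuming fixed-height items (the fixed item height and the overscan buffer are our simplifying assumptions, not the page's actual implementation):

```javascript
// Given the scroll offset, compute which items to render and how tall
// the top/bottom placeholder elements must be.
function visibleRange(scrollTop, viewportHeight, itemHeight, total, overscan = 2) {
  const start = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan)
  const end = Math.min(total, Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan)
  return {
    start,
    end, // render items[start..end)
    padTop: start * itemHeight, // height of the empty placeholder above
    padBottom: (total - end) * itemHeight, // and below
  }
}
```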

  • There is a slight lag when loading the next page:

The data shows that a pull-down update of the list takes about 1900ms on average:

| Index | setData calls | setData time (ms) |
|---|---|---|
| Pull-down list update | 3 | 1903 |

The solution is to request the next page's data in advance and keep it in a memory variable; when the user scrolls, read it from the variable and apply it with a single setData.
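A minimal sketch of that "next page in memory" idea (`fetchPage` is a placeholder for the real list request, not an actual API):

```javascript
// While the user reads page N, page N+1 is fetched and parked in a
// memory variable; the scroll handler then pays only for a lookup
// plus a single setData.
let nextPageCache = null

function prefetchNextPage(pageNo, fetchPage) {
  fetchPage(pageNo + 1).then(items => {
    nextPageCache = items
  })
}

function takeNextPage() {
  const items = nextPageCache
  nextPageCache = null // consume once; the next prefetch refills it
  return items
}
```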

  • Scrolling too fast produces a white screen (the faster the scroll, the longer the white screen lasts, as shown in the left picture below). The virtual list works by using empty Views as placeholders; when scrolling back quickly, complex nodes render slowly, especially hotel items with pictures, and a white screen results. We tried three solutions:

1) Use a dynamic skeleton screen in place of the original placeholder View (as shown in the right picture below):

2) CustomWrapper

To improve performance, Taro officially recommends CustomWrapper, which isolates the wrapped components from the page: when the component renders, the whole page is not updated, and page.setData becomes component.setData.

Custom components are implemented with Shadow DOM, which encapsulates the component's DOM and CSS and separates it from the main page's DOM. The #shadow-root below is the shadow root, rendered separately from the main document; shadow roots can be nested to form a shadow tree:

<custom-wrapper is="custom-wrapper">
  #shadow-root
  <view class="list"></view>
</custom-wrapper>

The wrapped components are isolated, so internal data updates no longer affect the entire page. Looking at low-end clients, the effect is obvious: for the same tap, the pop-up on the right appears 200ms~300ms faster on average (measured on the same model in the same environment), and the lower-end the device, the more obvious the gain.

(The right side is under CustomWrapper)

3) Use the Mini Program’s native components

Implement the list item with native mini-program components, which bypass the Taro 3 runtime. That is, when the user operates the page, a Taro 3 component must diff the old and new data, generate the node data required by the new virtual DOM, and then call the mini program's API to operate on the nodes. A native component skips this whole series of steps and updates the underlying mini-program data directly, so some time is saved. The effect after implementation:

| Index | setData calls (old) | setData calls (new) | setData time (old, ms) | setData time (new, ms) | Time reduction |
|---|---|---|---|---|---|
| Pull-down list update | 3 | 1 | 1903 | 836 | 56.07% |

As you can see, native components bring a big improvement: the average list update time is shortened by about 1s. However, going native also has drawbacks, mainly in two aspects:

  • All styles in the component must be written according to the mini program's specification and kept isolated from Taro's styles;
  • Taro's APIs, such as createSelectorQuery, cannot be used inside native components.

Comparing the three solutions, the performance gain increases in that order. The original purpose of using Taro is cross-platform support, which pure native code forfeits. We are therefore exploring a compile-time plugin that generates the corresponding native mini-program component code, aiming for the best of both.

3.6 React.memo

When a complex page has many subcomponents, rendering the parent component causes all of its subcomponents to re-render as well. React.memo performs a shallow comparison to prevent unnecessary re-rendering:

const MyComponent = React.memo(function MyComponent(props) {
  /* render using props */
})

React.memo is a higher-order component. It is very similar to React.PureComponent, but it works with function components rather than class components.

If your function component renders the same result given the same props, you can improve the performance of the component by wrapping it in a React.memo call, which means that in this case, React will skip rendering the component and reuse the result of the most recent render.

By default, it only does a shallow comparison of complex objects. If you want to control the comparison process, please pass a custom comparison function as the second parameter to achieve it.

function MyComponent(props) {
  /* render using props */
}

function areEqual(prevProps, nextProps) {
  /*
    Return true if rendering with nextProps produces the same result
    as rendering with prevProps; otherwise return false.
  */
}

export default React.memo(MyComponent, areEqual);
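The default shallow comparison behaves roughly like the following sketch (simplified for illustration; not React's actual source):

```javascript
// Shallow equality over props: same keys, and each value identical
// by Object.is. Nested objects compare by reference only.
function shallowEqual(prevProps, nextProps) {
  if (Object.is(prevProps, nextProps)) return true
  const prevKeys = Object.keys(prevProps)
  const nextKeys = Object.keys(nextProps)
  if (prevKeys.length !== nextKeys.length) return false
  return prevKeys.every(key => Object.is(prevProps[key], nextProps[key]))
}
```

This is also why passing a freshly created object or inline arrow function as a prop defeats React.memo: the reference changes on every render of the parent.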

4. Conclusion

We spent a long time optimizing the performance of complex lists and tried every optimization point we could: preloading the list page, changing the filter item data structure and animation implementation, optimizing the long-list experience, and combining with native components, all of which improved page update and rendering efficiency. We are continuing to watch and explore.

The following is a comparison of the final results (the right side is after optimization):

