This article analyzes the various performance issues in iOS interface construction in detail, along with the corresponding solutions. It also provides an open-source microblog (Weibo) list implementation and shows, through actual code, how to build smooth interactions.

Index

1. Demo project
2. How the screen displays images
3. Causes of and solutions for lag
4. AsyncDisplayKit
5. Weibo demo performance optimization tips
6. How to evaluate the smoothness of an interface
Demo project

Before we start the technical discussion, you can download the demo I wrote and try it on a real device: https://github.com/ibireme/YYKit. The demo contains a Weibo feed list, a publishing view, and a Twitter feed list. To be fair, I copied all the interfaces and interactions from the official apps, and the data was also captured from the official apps. You can also capture data yourself and swap it into the demo for easy comparison. Although the official apps have more numerous and more complex features, that should not make much difference to interaction performance. The demo runs on iOS 6 and later, so you can try it on older devices. In my tests, even on an iPhone 4S or iPad 3 the demo lists maintain 50-60 FPS during fast scrolling, while the list views of apps such as Weibo and Moments stutter badly. The Weibo demo is about 4,000 lines of code and the Twitter demo only about 2,000; the only third-party library used is YYKit, so the number of files is small and the code is easy to browse. Okay, on to the main text.

How the screen displays images

Let's start with the old CRT monitor. The CRT's electron gun scans from top to bottom, line by line; when the scan is finished the monitor has presented one frame, and the electron gun returns to its initial position for the next scan. To synchronize the monitor's display process with the system's video controller, the monitor (or other hardware) uses a hardware clock to generate a series of timing signals. When the electron gun switches to a new line and prepares to scan, the monitor emits a horizontal synchronization signal (HSync); when a frame has been drawn and the electron gun has returned to its original position, just before the next frame is drawn, the monitor emits a vertical synchronization signal (VSync). The monitor usually refreshes at a fixed rate, and that refresh rate is the frequency of the VSync signal. Although most devices today use LCD screens, the principle remains the same.

In general, the CPU, GPU, and display in a computer system cooperate as follows: the CPU computes the display content and submits it to the GPU; after the GPU finishes rendering, it puts the result into a frame buffer; the video controller then reads the frame buffer line by line according to the VSync signal and passes the data, possibly after digital-to-analog conversion, to the display. In the simplest case there is only one frame buffer, and reading from and refreshing that single buffer becomes a significant efficiency problem. To solve it, display systems usually introduce two buffers, that is, double buffering: the GPU pre-renders one frame into a buffer for the video controller to read, and when the next frame is rendered, the GPU simply points the video controller at the second buffer. This greatly improves efficiency, but it introduces a new problem.
If the GPU submits a new frame to the frame buffer and swaps the two buffers while the video controller has not yet finished reading, that is, while only part of the previous frame has been displayed, the video controller will display the lower half of the new frame on the screen, causing screen tearing, as shown in the figure below.

To solve this problem, GPUs usually have a mechanism called vertical synchronization (V-Sync). When vertical synchronization is enabled, the GPU waits for a VSync signal from the monitor before rendering a new frame and updating the buffer. This eliminates tearing and improves smoothness, but it requires more computing resources and can introduce some latency.

So what do mainstream mobile devices do? From information available online, iOS devices always use double buffering with vertical synchronization enabled. Android did not introduce this mechanism until version 4.1; it currently uses triple buffering plus vertical synchronization.

Causes of and solutions for lag

After a VSync signal arrives, the system graphics service notifies the app through mechanisms such as CADisplayLink, and the app's main thread starts computing the display content on the CPU: view creation, layout calculation, image decoding, text drawing, and so on. The CPU then submits the computed content to the GPU, which transforms, composites, and renders it. The GPU submits the rendering result to the frame buffer and waits for the next VSync signal before it is displayed on screen. Because of vertical synchronization, if the CPU or GPU does not finish its work within one VSync interval, that frame is discarded and has to wait for the next opportunity, while the display keeps showing the previous content. This is what causes interface lag.

As the figure above shows, whether the CPU or the GPU is the bottleneck, the result is a dropped frame. During development, therefore, CPU and GPU pressure need to be measured and optimized separately.

Causes of CPU resource consumption, and solutions

Object creation

Creating objects allocates memory, adjusts properties, and may even read files, all of which consumes CPU. Prefer lightweight objects over heavy ones: for example, CALayer is much lighter than UIView, so for content that does not need to respond to touch events, displaying it with CALayer is more appropriate. If an object does not involve UI operations, try to create it on a background thread; unfortunately, controls that contain a CALayer can only be created and operated on the main thread. Creating view objects through Storyboard consumes far more resources than creating them directly in code, so in performance-sensitive interfaces Storyboard is not a good choice. Try to postpone object creation and spread it across multiple tasks; this is harder to implement and does not bring many benefits, but it is worth trying if you can manage it. If an object can be reused, and the cost of reuse is lower than releasing it and creating a new one, then such objects should be kept in a cache pool and reused as much as possible.
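As a rough illustration of that last point, a reuse pool can be as small as the following sketch. The class and method names are assumptions of mine, not from YYKit or UIKit:

```objc
#import <Foundation/Foundation.h>

// A minimal reuse pool: hand back a cached instance when one is available,
// otherwise create a new one. Only worthwhile for objects that are expensive to build.
@interface SimpleReusePool : NSObject
- (id)dequeueOrCreateWithBlock:(id (^)(void))creationBlock;
- (void)returnObject:(id)object;
@end

@implementation SimpleReusePool {
    NSMutableArray *_pool;
}
- (instancetype)init {
    if (self = [super init]) _pool = [NSMutableArray new];
    return self;
}
- (id)dequeueOrCreateWithBlock:(id (^)(void))creationBlock {
    id object = _pool.lastObject;
    if (object) {
        [_pool removeLastObject];   // reuse an existing instance
        return object;
    }
    return creationBlock();         // fall back to creating a new one
}
- (void)returnObject:(id)object {
    if (object) [_pool addObject:object];  // keep it for later reuse
}
@end
```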
Object adjustment

Adjusting objects is another frequent source of CPU consumption. CALayer deserves special mention here: CALayer has no internal properties in the usual sense. When a property method is called, it adds the method to the object at runtime through resolveInstanceMethod and stores the property value in an internal dictionary; it also notifies its delegate, creates animations, and so on, which is quite expensive. UIView's display-related properties (such as frame/bounds/transform) are actually mapped to CALayer properties, so adjusting these properties on a UIView costs far more than adjusting ordinary properties. Accordingly, try to reduce unnecessary property changes in your application. When the view hierarchy is adjusted, many method calls and notifications pass between UIView and CALayer, so when optimizing performance, try to avoid adjusting the view hierarchy or adding and removing views.

Object destruction

Although destroying an object consumes few resources, the accumulated cost should not be ignored. It is usually when a container class holds a large number of objects that the cost of destruction becomes obvious. Again, if objects can be released on a background thread, move them there. A small tip: capture the objects in a block, dispatch the block to a background queue, and send them a trivial message there to avoid compiler warnings; the objects will then be destroyed on the background thread.
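That tip roughly corresponds to a pattern like the one below (a sketch; self.array stands in for any property holding a large container):

```objc
// Move the release of a large container to a background queue. The block captures the
// object, and sending it a harmless message suppresses the "unused variable" warning.
// The last strong reference disappears on the background queue, so the contained
// objects are destroyed there instead of on the main thread.
NSArray *temp = self.array;    // self.array is an illustrative property holding many objects
self.array = nil;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    [temp class];
});
```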
Layout calculation

View layout calculation is the most common source of CPU consumption in an app. If view layouts can be computed in advance on a background thread and cached, this basically stops being a performance problem. Whatever technology you use for layout, it eventually comes down to adjusting UIView.frame/bounds/center and similar properties. As mentioned above, adjusting these properties is very expensive, so try to compute the layout in advance and, when needed, adjust the corresponding properties in a single pass, rather than computing and adjusting them repeatedly.

Autolayout

Autolayout is a technology promoted by Apple itself, and in most cases it improves development efficiency. However, Autolayout often causes serious performance problems for complex views: as the number of views grows, the CPU consumption caused by Autolayout grows exponentially. For concrete numbers, see this article: http://pilky.me/36/. If you do not want to adjust frame and similar properties by hand, you can use helper methods instead (such as the common left/right/top/bottom/width/height shortcut properties), or use frameworks such as ComponentKit and AsyncDisplayKit.

Text calculation

If an interface contains a lot of text (Weibo or WeChat Moments feeds, for example), computing text widths and heights takes a large share of resources and is unavoidable. If you have no special requirements for text display, you can follow the internal approach of UILabel: use -[NSAttributedString boundingRectWithSize:options:context:] to compute the text size and -[NSAttributedString drawWithRect:options:context:] to draw it. Although these two methods perform well, they should still be moved to a background thread to avoid blocking the main thread. If you draw text with CoreText, you can first create a CoreText layout object and compute the size from it yourself; the CoreText object can also be kept around for later drawing.

Text rendering

Every text control visible on screen, including UIWebView, is laid out and drawn into bitmaps by CoreText at the lowest level. Common text controls (UILabel, UITextView, and so on) do their layout and drawing on the main thread, so displaying a large amount of text puts heavy pressure on the CPU. There is only one solution: build a custom text control and draw text asynchronously with TextKit or the lower-level CoreText. Although this is troublesome to implement, the benefits are large: once a CoreText layout object has been created, the text's width and height can be read from it directly, avoiding repeated computation (once when sizing the UILabel and again inside UILabel when drawing); CoreText objects also use little memory and can be cached for repeated rendering.

Image decoding

When you create an image with UIImage or CGImageSource methods, the image data is not decoded immediately. Only when the image is set to UIImageView or CALayer.contents, and just before the CALayer is submitted to the GPU, is the data in the CGImage decoded; this happens on the main thread and is unavoidable. To bypass this mechanism, a common approach is to first draw the image into a CGBitmapContext on a background thread and then create the image directly from the bitmap.
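A minimal sketch of that force-decoding step might look like the following (simplified; real libraries handle color management, byte order, and animated images more carefully):

```objc
#import <UIKit/UIKit.h>

// Force-decode an image by drawing it into a CGBitmapContext, typically on a
// background queue, so the decode does not happen later on the main thread.
static UIImage *ForceDecodedImage(UIImage *image) {
    CGImageRef cgImage = image.CGImage;
    if (!cgImage) return image;
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if (width == 0 || height == 0) return image;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (!context) return image;
    // Drawing into the bitmap context is what actually triggers decoding.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGImageRef decoded = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *result = [UIImage imageWithCGImage:decoded
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(decoded);
    return result;
}
```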
Common third-party image libraries all provide this function today.

Image drawing

Image drawing usually refers to the process of drawing into a canvas with methods whose names start with CG, then creating an image from the canvas and displaying it. The most common place this happens is in [UIView drawRect:]. Since CoreGraphics methods are generally thread-safe, image drawing can easily be moved to a background thread. A simple asynchronous drawing flow looks like this (the real situation is considerably more complicated, but the principle is the same):
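A sketch of that flow, assuming it lives in a UIView subclass, might look like this (drawing body omitted; cancellation and error handling left out):

```objc
// Simplified asynchronous drawing: capture what is needed on the main thread, render
// a bitmap on a background queue, then assign it to layer.contents back on the main
// thread. A real implementation also needs cancellation checks when cells are reused.
- (void)displayAsynchronously {
    CGSize size = self.bounds.size;               // read UIKit state on the main thread
    BOOL opaque = self.layer.opaque;
    CGFloat scale = [UIScreen mainScreen].scale;
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
        CGContextRef context = UIGraphicsGetCurrentContext();
        // ... draw text / graphics into context with Core Graphics or CoreText ...
        (void)context;
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            self.layer.contents = (__bridge id)image.CGImage;   // commit the bitmap
        });
    });
}
```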
Causes of GPU resource consumption, and solutions

Compared with the CPU, the GPU does relatively simple work: it receives submitted textures and vertex descriptions (triangles), applies transforms, blends and renders them, and outputs to the screen. What you see on screen is mainly textures (images) and shapes (vector graphics simulated with triangles).

Texture rendering

All bitmaps, including images, text, and other rasterized content, must eventually be submitted from memory to video memory and bound as GPU textures. Both the submission to video memory and the GPU's manipulation and rendering of textures consume significant GPU resources. When a large number of images are displayed in a short time (for example, when a TableView containing many images is scrolled quickly), CPU usage stays low while GPU usage is very high, and the interface still drops frames. The only way to avoid this is to minimize the number of images displayed in a short period, combining multiple images into one where possible. When an image is larger than the GPU's maximum texture size, the CPU has to preprocess it first, which costs extra CPU and GPU resources. The texture size limit on the iPhone 4S and later is 4096x4096; for details see iosres.com. Try not to let images or views exceed this size.

View blending

When multiple views (or CALayers) overlap, the GPU has to blend them together. If the view hierarchy is too complex, the blending consumes a lot of GPU resources. To reduce this cost, an app should keep the number of views and layers as small as possible, and set the opaque property on opaque views to avoid useless alpha-channel compositing. Alternatively, as above, multiple views can be pre-rendered into a single image for display.

Graphics generation

CALayer borders, rounded corners, shadows, and masks, as well as CAShapeLayer vector graphics, usually trigger offscreen rendering, which typically happens on the GPU. When a list view full of rounded-corner CALayers is scrolled quickly, you can observe that the GPU is saturated while the CPU is mostly idle; the interface still scrolls, but the average frame rate drops to a very low level. To avoid this, you can try turning on CALayer.shouldRasterize, but that shifts the offscreen rendering work to the CPU. For cases where only rounded corners are needed, you can also cover the original view with a pre-drawn rounded-corner image to simulate the same visual effect. The most thorough solution is to draw the content to be displayed into an image on a background thread and avoid rounded corners, shadows, masks, and similar properties altogether.
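For the rounded-corner case specifically, pre-drawing can be as small as this sketch (a hypothetical helper, not taken from the demo code), run once on a background thread and cached:

```objc
#import <UIKit/UIKit.h>

// Clip an image to a circle by drawing it once, instead of using layer.cornerRadius +
// masksToBounds, which would trigger offscreen rendering on every frame.
static UIImage *CircularImage(UIImage *image, CGSize size) {
    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    CGRect rect = (CGRect){CGPointZero, size};
    [[UIBezierPath bezierPathWithOvalInRect:rect] addClip];   // circular clipping path
    [image drawInRect:rect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```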
AsyncDisplayKit

AsyncDisplayKit (ASDK) is an open-source library from Facebook for keeping iOS interfaces smooth. I learned a lot from it, so I will spend considerable time introducing and analyzing it below.

The origin of ASDK

The author of ASDK is Scott Goodson (LinkedIn). He used to work at Apple, where he was responsible for some of the built-in iOS applications, such as Stocks, Calculator, Maps, Clock, Settings, and Safari, and he also participated in the development of UIKit. After joining Facebook he was responsible for Paper, and he created and open-sourced AsyncDisplayKit. He has since worked on iOS development and user experience at Pinterest and Instagram. ASDK was open-sourced in June 2014, version 1.0 was released that October, and version 2.0 is about to be released.

ASDK documentation

If you want to understand the principles and details of ASDK, it is best to start with the following talks:

2014.10.15 NSLondon - Scott Goodson - Behind AsyncDisplayKit
2015.03.02 MCE 2015 - Scott Goodson - Effortless Responsiveness with AsyncDisplayKit
2015.10.25 AsyncDisplayKit 2.0: Intelligent User Interfaces - NSSpain 2015

The first two talks cover similar ground: the basic principles of ASDK and related projects such as POP. You can also check the ASDK-related discussions in GitHub Issues; some of the important ones are: About Runloop Dispatch; the difference between ComponentKit and ASDK; why Storyboard and Autolayout are not supported; how to evaluate interface smoothness. After that, you can also visit the Google Group to read and join further discussion.

Basic principles of ASDK

ASDK divides the tasks that block the main thread into the three categories shown above. Text and layout calculation, rendering, decoding, and drawing can all be performed asynchronously in various ways, but UIKit- and Core Animation-related operations must happen on the main thread. ASDK's goal is to move as much work as possible off the main thread and to optimize whatever cannot be moved.

This is the usual relationship between UIView and CALayer: the view holds a layer for display, and most display properties of the view are actually mapped from the layer; the layer's delegate is the view, which can therefore be notified when properties change or animations occur. UIView and CALayer are not thread-safe and can only be created, accessed, and destroyed on the main thread. ASDK therefore created the ASDisplayNode class, which wraps the common view properties (frame/bounds/alpha/transform/backgroundColor/superNode/subNodes, and so on) and establishes an ASNode -> UIView relationship analogous to UIView -> CALayer. When a node does not need to respond to touch events, ASDisplayNode can be made layer-backed, so the node takes over the role the UIView would have played and saves even more resources. Unlike UIView and CALayer, ASDisplayNode is thread-safe and can be created and modified on background threads. When a node is first created, it does not create a UIView or CALayer internally; only when the view or layer property is first accessed on the main thread does it generate the corresponding object. When its properties (such as frame/transform) change, it does not synchronize them to the view or layer it holds immediately; instead, the changed values are stored in internal intermediate variables and later applied to the internal view or layer in one pass, through a certain mechanism, when needed.
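That "save the change to an intermediate variable, apply it later" idea can be pictured with the following conceptual sketch. This is purely my illustration with made-up names, not ASDK's actual implementation, which is far more involved:

```objc
#import <QuartzCore/QuartzCore.h>

// Conceptual sketch: a "node" buffers property changes made from any thread and
// applies them to its backing layer later, on the main thread, in a single pass.
@interface PendingStateNode : NSObject
@property (nonatomic, assign) CGRect frame;                   // safe to set from any thread
- (void)applyPendingStateToLayer:(CALayer *)layer;            // main thread only
@end

@implementation PendingStateNode {
    NSLock *_lock;
    CGRect _pendingFrame;
    BOOL _frameChanged;
}
- (instancetype)init {
    if (self = [super init]) _lock = [NSLock new];
    return self;
}
- (void)setFrame:(CGRect)frame {
    [_lock lock];
    _pendingFrame = frame;       // only record the change; do not touch UIKit/CA here
    _frameChanged = YES;
    [_lock unlock];
}
- (CGRect)frame {
    [_lock lock];
    CGRect frame = _pendingFrame;
    [_lock unlock];
    return frame;
}
- (void)applyPendingStateToLayer:(CALayer *)layer {
    [_lock lock];
    BOOL changed = _frameChanged;
    CGRect frame = _pendingFrame;
    _frameChanged = NO;
    [_lock unlock];
    if (changed) layer.frame = frame;   // applied once, on the main thread
}
@end
```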
By simulating and encapsulating UIView/CALayer in this way, developers can replace UIView with ASNode in their code, which greatly reduces development and learning costs while gaining the many performance optimizations built into ASDK. For convenience, ASDK wraps a large number of commonly used controls as ASNode subclasses: Button, Control, Cell, Image, ImageView, Text, TableView, CollectionView, and so on. With these controls, developers can avoid using the corresponding UIKit controls directly and get more complete performance improvements.

ASDK layer pre-composition

Sometimes a layer contains many sublayers that do not need to respond to touch events and do not need animation or position adjustment. ASDK implements a technique called pre-composing that renders these sublayers into a single image. During development, ASNode already replaces UIView and CALayer; by using the various Node controls directly and setting them to be layer-backed, ASNode can, through pre-composition, even avoid creating internal UIViews and CALayers. Drawing what would be a large layer tree into a single image with a single drawing pass improves performance substantially: the CPU avoids the cost of creating UIKit objects, the GPU avoids the cost of compositing and rendering multiple textures, and fewer bitmaps also mean less memory usage.

ASDK asynchronous concurrent operations
Since the iPhone 4S, iOS devices have all had at least dual-core CPUs, and current iPads have moved to 3-core CPUs. Making full use of multiple cores and executing tasks concurrently helps a great deal in keeping the interface smooth. ASDK breaks layout calculation, text typesetting, and image/text/graphics rendering into small tasks and uses GCD to execute them asynchronously and concurrently. If developers use the ASNode-based controls, these concurrent operations happen in the background automatically, without extra configuration.

Runloop task distribution

Runloop work distribution is one of ASDK's core techniques, but it is not covered in much detail in ASDK's introductory videos and documentation, so I will analyze it a bit more here. If you are not familiar with Runloop, you can read my earlier article, Deep Understanding of RunLoop, which also mentions ASDK.

The iOS display system is driven by VSync signals, which are generated by the hardware clock and emitted 60 times per second (the value depends on the device hardware; on a real iPhone it is typically 59.97). After the iOS graphics service receives a VSync signal, it notifies the app via IPC. When the app's Runloop starts, it registers a CFRunLoopSource to receive these clock-signal notifications delivered through a mach_port, and the Source callback then drives the animation and display of the entire app.

Core Animation registers an Observer on the RunLoop that listens for the BeforeWaiting and Exit events. This Observer has a priority of 2,000,000, lower than other common Observers. When a touch event arrives, the RunLoop wakes up and the app's code performs operations such as creating and adjusting the view hierarchy, setting UIView frames, changing CALayer opacity, and adding animations to views; these operations are captured by CALayer and submitted as an intermediate state via CATransaction (the CATransaction documentation mentions this briefly, but not completely). When all of these operations are finished and the RunLoop is about to go to sleep (or exit), the Observers interested in those events are notified. In its callback, the Observer registered by Core Animation merges all the intermediate states and submits them to the GPU for display; if animations are involved, Core Animation triggers the related process repeatedly through mechanisms such as DisplayLink.

ASDK imitates this Core Animation mechanism: all modifications to and submissions of ASNodes ultimately require some tasks to be executed on the main thread. When such tasks appear, ASNode wraps them with ASAsyncTransaction(Group) and submits them to a global container. ASDK also registers an Observer on the RunLoop, listening to the same events as Core Animation but with a lower priority. Before the RunLoop goes to sleep, after Core Animation has handled its events, ASDK executes all the tasks submitted during that loop iteration. For the specific code, see the file ASAsyncTransactionGroup. Through this mechanism, ASDK synchronizes asynchronous, concurrent work back to the main thread at an appropriate moment while achieving good performance.
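A stripped-down illustration of this observer mechanism is sketched below. This is my simplification of the behavior described above, not ASAsyncTransactionGroup's actual code; the helper names are made up, and the order value is merely chosen to be larger than Core Animation's 2,000,000:

```objc
#import <UIKit/UIKit.h>

static NSMutableArray *PendingMainThreadTasks(void) {
    static NSMutableArray *tasks;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ tasks = [NSMutableArray new]; });
    return tasks;
}

// Enqueue main-thread UI work; it will be flushed at the end of the current runloop
// iteration, after Core Animation has committed its own transactions.
static void EnqueueMainThreadTask(dispatch_block_t task) {
    [PendingMainThreadTasks() addObject:[task copy]];
}

static void SetupRunloopObserver(void) {
    CFRunLoopObserverRef observer = CFRunLoopObserverCreateWithHandler(
        kCFAllocatorDefault,
        kCFRunLoopBeforeWaiting | kCFRunLoopExit,
        true,        // repeats on every loop iteration
        0xFFFFFF,    // order: larger than Core Animation's 2,000,000, so it runs after CA
        ^(CFRunLoopObserverRef obs, CFRunLoopActivity activity) {
            NSMutableArray *tasks = PendingMainThreadTasks();
            if (tasks.count == 0) return;
            NSArray *snapshot = [tasks copy];
            [tasks removeAllObjects];
            for (dispatch_block_t task in snapshot) task();   // apply buffered UI changes
        });
    CFRunLoopAddObserver(CFRunLoopGetMain(), observer, kCFRunLoopCommonModes);
    CFRelease(observer);
}
```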
Other features

ASDK also includes many advanced features, such as preloading for scrolling lists and the new layout engine added in version 2.0. ASDK is a huge library, and I do not recommend converting your entire app to be ASDK-driven; it is enough to use ASDK in the places where interaction performance needs the most improvement.

Weibo demo performance optimization tips

To demonstrate what YYKit can do, I implemented the Weibo and Twitter demos and put a lot of performance optimization work into them. Below are some of the techniques used.

Pre-layout

After receiving the JSON data from the API, I compute the data each Cell needs on a background thread and wrap it into a layout object, CellLayout. CellLayout contains the CoreText layout results for all text, the height of every control inside the Cell, and the overall Cell height. Each CellLayout takes little memory, so once generated they can all be cached in memory for later use. This way, the TableView does not do any extra computation when it asks for each cell's height, and when the CellLayout is assigned to the Cell, no layout calculation is needed inside the Cell either. For a typical TableView, computing layout results in advance on a background thread is a very important optimization. To get the best performance you may have to sacrifice some development speed: avoid technologies such as Autolayout and use fewer text controls such as UILabel. If your performance requirements are not that strict, you can instead try the TableView's estimated-height feature and cache each Cell's height. The Baidu Zhidao team has an open-source project that makes this easy: FDTemplateLayoutCell.

Pre-rendering

Weibo changed avatars to circles in one of its redesigns, so I followed suit. When an avatar is downloaded, I pre-render it as a circle on a background thread and store the result separately in an ImageCache. For a TableView, offscreen rendering of Cell content brings considerable GPU cost. In the Twitter demo I used the layer corner-radius property liberally to save effort; if you scroll that list quickly on a low-end device (such as an iPad 3), you can feel that although the list is not exactly stuck, the overall average frame rate has dropped. Checking with Instruments shows the GPU running at full capacity while the CPU is relatively idle. To avoid offscreen rendering, you should avoid layer borders, corner radii, shadows, masks, and similar features, and pre-draw the corresponding content on a background thread instead.

Asynchronous drawing

I only used asynchronous drawing on the controls that display text, but the effect is very good. Following the principles of ASDK, I implemented a simple asynchronous drawing control and extracted the code here: YYAsyncLayer. YYAsyncLayer is a CALayer subclass. When it needs to display content (for example, when [layer setNeedsDisplay] is called), it asks its delegate, the UIView, for an asynchronous drawing task. During asynchronous drawing, the layer passes in a block of the form BOOL(^isCancelled)(), and the drawing code can call this block at any time to check whether the task has been cancelled. When the TableView scrolls quickly, a large number of asynchronous drawing tasks are submitted to background threads; sometimes, when scrolling is fast enough, a task is cancelled before it finishes.
If drawing continues at that point, a lot of CPU time is wasted, and the thread may even be blocked, delaying subsequent drawing tasks. My approach is to detect cancellation as quickly and as early as possible: before drawing each line of text I call isCancelled(), so that a cancelled task can exit promptly without affecting later work.

Some third-party Weibo clients (such as VVebo and Mokey) use a technique that skips cell drawing entirely during fast scrolling; see this project for an implementation: VVeboTableViewDemo. The idea is that when the finger is released during a scroll, the position where the scroll will stop is computed immediately, and only the few cells near that position are pre-drawn, while the cells currently flying past are ignored. This technique is quite clever and greatly improves scrolling performance; its only drawback is that a lot of blank content appears during fast scrolling. If you do not want to implement the more troublesome asynchronous drawing but still want smooth scrolling, it is a good option.

Global concurrency control

When I used concurrent queues to execute large numbers of drawing tasks, I occasionally ran into this problem: when many tasks are submitted to a background queue, some of them get locked for some reason (here it was a lock inside CGFont), causing threads to sleep or block; the concurrent queue then creates new threads to execute the remaining tasks. When this happens frequently, or when an app uses many concurrent queues for many tasks, dozens of threads can be running, being created, and being destroyed at the same time. The CPU implements thread concurrency with time slicing; although concurrent queues can control thread priority, when a large number of threads are created, run, and destroyed simultaneously, these operations still eat into the CPU time available to the main thread. ASDK has a feed-list demo, SocialAppLayout, and when the list has many cells and is scrolled very fast, the interface still stutters slightly; I cautiously guess that this may be related to this problem. The issue is unavoidable with concurrent queues, but a single serial queue cannot make full use of a multi-core CPU. I wrote a simple utility, YYDispatchQueuePool, which for each priority creates as many serial queues as there are CPU cores; each time a queue is requested from the pool, one of them is returned in round-robin fashion. I put all of the app's asynchronous operations (image decoding, object release, asynchronous drawing, and so on) into these global serial queues according to their priority, which avoids the performance problems caused by too many threads.

More efficient asynchronous image loading

SDWebImage still has some performance issues in this demo, and some parts could not meet my needs, so I implemented a higher-performance image loading library. When displaying a simple single image, using UIView.layer.contents directly is enough; there is no need to pay the extra cost of a UIImageView. For this reason I added methods such as setImageWithURL to CALayer. I also routed operations such as image decoding through YYDispatchQueuePool to keep the app's total thread count under control.
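A much-simplified pool along these lines might look like the sketch below; the class name and API are illustrative, not YYDispatchQueuePool's real interface:

```objc
#import <Foundation/Foundation.h>
#import <libkern/OSAtomic.h>

// One serial queue per CPU core, handed out round-robin, so the number of busy
// background threads stays bounded no matter how many tasks are submitted.
@interface SimpleQueuePool : NSObject
- (dispatch_queue_t)queue;
@end

@implementation SimpleQueuePool {
    NSArray<dispatch_queue_t> *_queues;
    volatile int32_t _counter;
}
- (instancetype)init {
    if (self = [super init]) {
        NSUInteger count = [NSProcessInfo processInfo].activeProcessorCount;
        NSMutableArray *queues = [NSMutableArray arrayWithCapacity:count];
        for (NSUInteger i = 0; i < count; i++) {
            [queues addObject:dispatch_queue_create("com.example.render", DISPATCH_QUEUE_SERIAL)];
        }
        _queues = [queues copy];
    }
    return self;
}
- (dispatch_queue_t)queue {
    NSUInteger index = (uint32_t)OSAtomicIncrement32(&_counter) % _queues.count;  // round-robin
    return _queues[index];
}
@end
```

Work is then submitted with dispatch_async([pool queue], ^{ ... }) instead of to a global concurrent queue, so decoding, drawing, and release tasks share a fixed, small set of threads.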
Other possible improvements

After the optimizations above, the Weibo demo is already very smooth, but in my view there are still a few further optimizations that I have not implemented due to time and energy constraints. Briefly:

The list contains many visual elements that do not need to respond to touch events; these could be pre-drawn into a single image using ASDK's layer pre-composition technique.

The number of layers in each Cell could be reduced further by replacing UIViews with CALayers.

Currently every Cell is of the same type but displays different content: some have images, some have cards. Splitting Cells by type and further removing unnecessary view objects and operations inside each Cell should help somewhat.

The tasks that must run on the main thread could be split into small enough pieces and scheduled through the Runloop: in each loop iteration, estimate when the next VSync will arrive, and defer any unfinished work until the next opportunity. This is only an idea of mine, and it may not be feasible or effective.

How to evaluate the smoothness of an interface

Finally, remember that "premature optimization is the root of all evil". When the requirements are not settled and the performance problems are not obvious, there is no need to attempt optimization; just implement the functionality correctly first. When you do optimize, it is best to follow the cycle of modify code -> Profile -> modify code, and to address the most worthwhile optimizations first.

If you need a concrete FPS indicator, you can try KMCGeigerCounter. It detects CPU-side stalls with a built-in CADisplayLink and GPU-side stalls with a 1x1 SKView. The project has two small issues: although the SKView can detect GPU stalls, introducing the SKView itself adds a little extra CPU/GPU load; and it has some compatibility problems on iOS 9 that require minor adjustments. I also wrote a simple FPS indicator, FPSLabel: only a few dozen lines of code, using just a CADisplayLink to observe CPU-side stalls. It is not as thorough as the tool above, but it is good enough for everyday use.

Finally, you can use the GPU Driver instrument preset in Instruments to watch CPU and GPU resource consumption in real time. This preset shows almost all display-related data, such as texture count, CA commit frequency, and GPU usage; it is the best tool for locating interface lag problems.
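To illustrate how such a CADisplayLink-based indicator works, here is a minimal sketch with made-up names; a real implementation should also use a weak proxy, since CADisplayLink retains its target:

```objc
#import <UIKit/UIKit.h>

// Count how many CADisplayLink callbacks fire per second; on the main runloop this
// approximates the frame rate the main thread is actually achieving. The retain cycle
// between CADisplayLink and its target is ignored here for brevity.
@interface SimpleFPSCounter : NSObject
@end

@implementation SimpleFPSCounter {
    CADisplayLink *_link;
    NSUInteger _count;
    NSTimeInterval _lastTime;
}
- (instancetype)init {
    if (self = [super init]) {
        _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
        [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
    }
    return self;
}
- (void)tick:(CADisplayLink *)link {
    if (_lastTime == 0) { _lastTime = link.timestamp; return; }
    _count++;
    NSTimeInterval delta = link.timestamp - _lastTime;
    if (delta < 1) return;                 // report roughly once per second
    double fps = _count / delta;
    _count = 0;
    _lastTime = link.timestamp;
    NSLog(@"%.1f FPS", fps);               // a real indicator would update a label instead
}
- (void)dealloc {
    [_link invalidate];
}
@end
```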