In mobile development we often deal with multimedia data. Parsing this data consumes a lot of resources and is a common performance bottleneck. This article focuses on one type of multimedia data, images: it introduces common image formats, how images are transmitted, stored, and displayed on mobile platforms, and a technique for optimizing image display performance, namely forced decoding on a background thread.

How are images stored and represented in the computer world?

Images, like all other resources, are essentially binary data, 0s and 1s in memory. A computer needs to render this raw content into images that the human eye can observe; conversely, an image needs to be saved in memory or transmitted over the network in an appropriate encoded form. As an example of what an image looks like raw on disk, the first bytes of any PNG file read 89 50 4E 47 0D 0A 1A 0A in hexadecimal, the PNG format signature. This way of encoding an image into binary according to certain rules is the image's format.

Common image formats

There are many image formats. In addition to the well-known JPG, PNG, and GIF, there are dozens of others, such as WebP, BMP, TIFF, and CDR, used in different scenarios or on different platforms. These formats fall into two main categories:

Lossy compression: The human eye is more sensitive to luminance than to color. Exploiting this, lossy formats merge the color information in the image while retaining the brightness information, reducing storage size while affecting the perceived image as little as possible. As the name suggests, a lossily compressed image permanently loses some detail. The most typical lossy format is JPG.

Lossless compression: Unlike lossy compression, lossless compression loses no image detail. It reduces image size by indexing the distinct colors in the image and building an index table, thereby eliminating duplicate color data. Common lossless formats are PNG and GIF.

In addition to the formats above, two others deserve a brief introduction, WebP and bitmap:

WebP: JPG, the mainstream web image standard, can be traced back to the early 1990s and is quite old. Google therefore launched the WebP standard to replace the outdated JPG, aiming to speed up the loading of web images and improve compression quality. WebP supports both lossy and lossless compression and achieves a high compression ratio: lossless WebP is about 45% smaller than the equivalent PNG, and at the same quality WebP can also save roughly half the traffic of JPG. WebP also supports animated images, making it something of an all-rounder among image compression formats. Its disadvantage is that browser and mobile support is not yet complete: we need to bring in Google's libwebp framework, and encoding and decoding consume comparatively more resources.

bitmap: A bitmap, also called a bitmap file, is an *uncompressed* image format, so it is very large. "Uncompressed" means that the raw information of every pixel is laid out in sequence in memory. For a typical 1920 × 1080 bitmap in which each pixel's color is represented by four RGBA bytes, the size is 1920 × 1080 × 4 bytes = 8,294,400 bytes, roughly 7.9 MB. Because a bitmap simply stores the image's pixel information in sequence, it can be rendered directly to the UI without decoding.
In fact, images in other formats generally need to be decoded into a bitmap first before being rendered to the interface.

How to determine the format of an image?

In some scenarios we need to determine the format of image data manually in order to process it differently. Generally speaking, once we have the raw binary data, we can classify it by the signature bytes that each compression format places at the start of the file. Popular image frameworks all ship a variant of this check; a sketch is shown below.
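The following is a minimal Objective-C sketch of that signature check. It mirrors the approach popular image frameworks take, but the names ImageFormat and imageFormatForData here are illustrative rather than any framework's actual API:

```objc
#import <Foundation/Foundation.h>

// Illustrative enum; not a framework type.
typedef NS_ENUM(NSInteger, ImageFormat) {
    ImageFormatUnknown,
    ImageFormatJPEG,
    ImageFormatPNG,
    ImageFormatGIF,
    ImageFormatWebP,
};

// Classify raw image data by the signature bytes at the start of the file.
static ImageFormat imageFormatForData(NSData *data) {
    if (data.length < 12) {
        return ImageFormatUnknown;
    }
    const uint8_t *bytes = data.bytes;

    // JPEG starts with FF D8 FF.
    if (bytes[0] == 0xFF && bytes[1] == 0xD8 && bytes[2] == 0xFF) {
        return ImageFormatJPEG;
    }
    // PNG starts with 89 50 4E 47 0D 0A 1A 0A.
    if (bytes[0] == 0x89 && bytes[1] == 0x50 && bytes[2] == 0x4E && bytes[3] == 0x47) {
        return ImageFormatPNG;
    }
    // GIF starts with "GIF" (47 49 46).
    if (bytes[0] == 'G' && bytes[1] == 'I' && bytes[2] == 'F') {
        return ImageFormatGIF;
    }
    // WebP is a RIFF container: "RIFF" at offset 0 and "WEBP" at offset 8.
    if (bytes[0] == 'R' && bytes[1] == 'I' && bytes[2] == 'F' && bytes[3] == 'F' &&
        bytes[8] == 'W' && bytes[9] == 'E' && bytes[10] == 'B' && bytes[11] == 'P') {
        return ImageFormatWebP;
    }
    return ImageFormatUnknown;
}
```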
UIImageView performance bottleneck

As mentioned above, most image formats need to be decoded into bitmaps before they can be rendered on the UI, and UIImageView displays images through the same kind of process. A picture goes through roughly the following steps from sitting in the file system to being displayed in a UIImageView:

1. The compressed image data is loaded from disk into memory, still undecoded.
2. A UIImage object is created that holds this undecoded data.
3. When the image is assigned to a UIImageView and needs to be rendered, it is decoded into a bitmap on the main thread.
4. The decoded bitmap is submitted to the rendering pipeline and drawn on screen.
Due to UIKit's encapsulation, these details are not directly exposed to developers. In fact, when we call [UIImage imageNamed:@"xxx"], the UIImage stores the undecoded image data; when we then call [UIImageView setImage:image], the image is decoded on the main thread and displayed on the UI, at which point the UIImage stores the decoded bitmap data. Image decompression is a CPU-intensive task. If we have a large number of images to display in a list, it will greatly slow down the system's response speed and reduce the frame rate. This is the performance bottleneck of UIImageView.

Solving the performance bottleneck: forced decoding

If the data stored in the UIImage is already decoded, display becomes much faster, so the optimization idea is: force-decode the image's raw data on a background thread, then hand the decoded image back to the main thread for further use, thereby improving the main thread's responsiveness. The tools we need are the CGBitmapContextCreate function and the related drawing functions of the Core Graphics framework. The overall steps are:

1. Obtain the CGImageRef behind the UIImage, along with the original image's color space.
2. Compute the bytes per row of the decoded bitmap and create a blank bitmap drawing context with CGBitmapContextCreate.
3. Draw the undecoded image into that context with CGContextDrawImage; this is where the decode actually happens.
4. Create a new, decoded CGImage from the context and wrap it in a UIImage for the UI layer.
The following is the core code of the SDWebImage implementation; the numbers in its comments correspond to the analysis that follows.
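(What follows is a simplified sketch of SDWebImage's classic +decodedImageWithImage: routine; animated-image handling is reduced to an early return, and some error checks in the real source are trimmed. The +colorSpaceForImageRef: helper is the UIImage category method discussed in point 2 below.)

```objc
// Simplified sketch of SDWebImage's forced decode.
+ (UIImage *)decodedImageWithImage:(UIImage *)image {
    if (image.images) {
        return image; // do not decode animated images
    }

    CGImageRef imageRef = image.CGImage;                                      // (1)
    CGColorSpaceRef colorspaceRef = [UIImage colorSpaceForImageRef:imageRef]; // (2)

    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    size_t bytesPerRow = 4 * width;                                           // (3)

    // 8 bits per component, 4 bytes per pixel, RGB with the last byte skipped.
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8,
                                                 bytesPerRow, colorspaceRef,
                                                 kCGBitmapByteOrderDefault |
                                                 kCGImageAlphaNoneSkipLast);  // (4)
    if (context == NULL) {
        return image;
    }

    // Drawing into the bitmap context performs the actual (implicit) decode.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);   // (5)
    CGImageRef decodedRef = CGBitmapContextCreateImage(context);              // (6)
    UIImage *decoded = [UIImage imageWithCGImage:decodedRef
                                           scale:image.scale
                                     orientation:image.imageOrientation];     // (7)
    CGContextRelease(context);
    CGImageRelease(decodedRef);
    return decoded;
}
```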
Analysis of the above code:

1. Get a CGImageRef reference from the UIImage object. These two types represent images at different levels in Apple's stack: UIImage belongs to UIKit and is the UI-level abstraction used for displaying images, while CGImageRef is a C structure pointer from Core Graphics (Quartz 2D), used to create pixel bitmaps; it allows editing an image by manipulating the stored pixel bits. The two can easily be converted to each other, as the snippet below shows.
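For instance (a minimal sketch; the asset name "photo" is hypothetical, and both APIs are standard UIKit):

```objc
UIImage *uiImage = [UIImage imageNamed:@"photo"];

// UIImage -> CGImageRef: UIImage exposes its backing Quartz image directly.
CGImageRef cgImage = uiImage.CGImage;

// CGImageRef -> UIImage: wrap a Quartz image back into a UIKit object.
UIImage *rewrapped = [UIImage imageWithCGImage:cgImage];
```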
2. Call the +colorSpaceForImageRef: method of UIImage to obtain the color space parameters of the original image. What is a color space? It is the rule for interpreting a given color value. For example, the pixel data (FF0000FF) is interpreted as red in an RGBA color space but as blue in a BGRA color space. We extract this parameter so that the image's color space stays consistent before and after decoding. Core Graphics supports several color space models, including RGB, CMYK, grayscale, Lab, indexed, and pattern color spaces.

3. Calculate the number of bytes required for each row of the decoded image. This is the product of two factors: the number of pixels per row (width) and the number of bytes required to store one pixel (4). The 4 here is determined by the image's pixel format, that is, by how each pixel's components are combined. Apple documents the pixel combinations supported on its platforms in terms of bpp (the number of bits needed per pixel) and bpc (the number of bits needed per color component). The decoded image here uses the kCGImageAlphaNoneSkipLast RGB combination by default, with no alpha channel: each pixel occupies 32 bits (4 bytes), the first three bytes carry the red, green, and blue channels, and the last byte is skipped and never interpreted.

4. The most critical call: CGBitmapContextCreate() generates a blank bitmap drawing context. We pass in the parameters above to specify the image's size, color space, pixel layout, and other properties.

5. CGContextDrawImage() writes the content of the undecoded imageRef into the context we created. This step completes the implicit decoding work.

6. Create a new imageRef from the context; this is the decoded image.

7. Generate a UIImage object from the imageRef for use by the UI layer, specifying the image's scale and orientation parameters. Scale is the ratio between the image's pixel dimensions and the point dimensions at which it is drawn. Why does this parameter exist? To keep installation packages small, Apple allows developers to provide the same image at several resolutions, the familiar @2x and @3x variants, and devices with different screen densities fetch the matching resource. To keep drawing code uniform, each of these images carries its own scale attribute: a @2x image has a scale of 2, and although it is drawn at the same width and height in points as the 1x image, its actual pixel width is width × scale. Orientation is easier to understand: it is the image's rotation attribute, telling the device which direction to treat as the default orientation when rendering.

Through the above steps, we have force-decoded the image on a background thread and handed it back to the main thread for use, greatly improving image rendering efficiency. This is also the best practice of mainstream apps and of a large number of third-party libraries.
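Here is a minimal usage sketch of that pattern, assuming decodedImageWithImage: is exposed as the UIImage category method sketched earlier; path and imageView are placeholders for the caller's own variables:

```objc
// Decode on a background queue, then hand the bitmap-backed image to the UI.
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    // imageWithContentsOfFile: returns a UIImage holding still-encoded data.
    UIImage *rawImage = [UIImage imageWithContentsOfFile:path];
    UIImage *decodedImage = [UIImage decodedImageWithImage:rawImage];

    dispatch_async(dispatch_get_main_queue(), ^{
        // No decode happens at display time: the image is already a bitmap.
        imageView.image = decodedImage;
    });
});
```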
Summary

To summarize this article:

1. An image is binary data encoded according to the rules of its format; formats divide into lossy (such as JPG) and lossless (such as PNG and GIF), with WebP supporting both and bitmap storing raw, undecoded pixels.
2. Most image formats must be decoded into a bitmap before they can be rendered; UIImageView performs this decoding on the main thread, which is its performance bottleneck.
3. Force-decoding images on a background thread via Core Graphics, then handing the bitmap-backed UIImage back to the main thread, removes this bottleneck.

As with UIImageView, UIKit hides a lot of technical detail. This lowers the learning threshold for developers, but on the other hand it also limits our exploration of some underlying technologies. The forced-decoding technique described in this article is really a "side effect" of the CGBitmapContextCreate method and is a relatively hacky approach; this reflects a limitation of the iOS platform: Apple is quite closed. Users are in fact very sensitive to software performance (frame rate, response speed, crash rate, and so on). As developers, we must constantly explore the principles behind performance bottlenecks and try to solve them. Performance optimization for mobile development is never-ending.