Summary of iOS Graphics Programming

iOS provides three main APIs for graphics programming: UIKit, Core Graphics (Quartz 2D), and OpenGL ES together with GLKit.

The drawing operations in all of these APIs are performed in a graphics context. A graphics context contains the drawing parameters and all device-specific information required for drawing. The main kinds are the screen graphics context, the offscreen bitmap context, and the PDF graphics context, which are used to draw graphics and images onto the screen, into a bitmap, or into a PDF file respectively. Drawing in the screen graphics context is limited to drawing inside an instance of the UIView class or its subclasses, and the result is displayed directly on the screen. Drawing in an offscreen bitmap or PDF graphics context is not displayed directly on the screen.

1. UIKit API

UIKit is a set of Objective-C APIs that wrap line drawing, Quartz images, and color operations in Objective-C classes, and that provide 2D drawing, image processing, and user-interface-level animation.

UIKit includes classes such as UIBezierPath (drawing lines, arcs, ellipses, and other shapes), UIImage (displaying images), UIColor (color operations), UIFont and UIScreen (font and screen information), as well as functions for drawing into and operating on bitmap and PDF graphics contexts. It also provides support for standard views and for printing.

In UIKit, the UIView class automatically creates a graphics context (corresponding to the CGContextRef type of the Core Graphics layer) as the current drawing context when drawing begins. Within your drawing code, you can call the UIGraphicsGetCurrentContext function to obtain this current graphics context.
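As a minimal sketch (a hypothetical UIView subclass, for illustration only), drawing with the current context obtained from UIGraphicsGetCurrentContext might look like this in Swift:

    import UIKit

    class CircleView: UIView {
        override func draw(_ rect: CGRect) {
            // UIKit has already made a graphics context current for this view.
            guard let context = UIGraphicsGetCurrentContext() else { return }

            // Fill the background using Core Graphics calls on the current context.
            context.setFillColor(UIColor.white.cgColor)
            context.fill(rect)

            // Draw a circle using the UIKit-level UIBezierPath API.
            let circle = UIBezierPath(ovalIn: rect.insetBy(dx: 10, dy: 10))
            UIColor.systemBlue.setFill()
            circle.fill()
        }
    }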

2. Core Graphics and Quartz 2D API

Core Graphics is a C-based API that supports drawing vector graphics, lines, shapes, patterns, paths, gradients, bitmap images, and PDF content.

Quartz 2D is a 2D rendering engine in Core Graphics. Quartz is resource and device independent, providing path drawing, anti-aliased rendering, gradient fill patterns, images, transparent drawing and transparent layers, masking and shadows, color management, coordinate conversion, fonts, offscreen rendering, PDF document creation, display and analysis, etc.

Quartz 2D can be used with all graphics and animation technologies such as Core Animation, OpenGL ES, and UIKit.

Quartz uses the painter's model for drawing.

The graphics context used in Quartz is likewise represented by the CGContext (CGContextRef) type.

In Quartz, you can use a graphics environment as a drawing target. When using Quartz for drawing, all device-specific features are included in the specific type of graphics environment you use, so by providing different graphics environments to the same image operation function, you can draw the same image on different devices, thus achieving device independence of image drawing.

Quartz provides the following graphics environments for applications:

1) Bitmap graphics environment, used to create a bitmap.

Use the function CGBitmapContextCreate to create it (see the sketch after this list).

2) PDF graphics environment, used to create a PDF file.

The Quartz 2D API provides two functions to create a PDF graphics environment:

CGPDFContextCreateWithURL creates a PDF graphics context with a Core Foundation URL that is the location of the PDF output.

CGPDFContextCreate creates a PDF graphics context when you want to send the PDF output to a data consumer.

3) Window graphics environment, used to draw on a window.

4) A layer context (CGLayer) is an offscreen drawing target associated with another graphics context. The purpose of using a layer context is to optimize the performance of drawing a layer to the graphics context that created it. A layer context can provide better offscreen drawing performance than a bitmap graphics context.
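As a rough sketch of creating the offscreen bitmap context from item 1) above (the size, color space, and pixel format below are assumptions chosen for illustration):

    import CoreGraphics

    let width = 256, height = 256
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue

    if let context = CGContext(data: nil,
                               width: width,
                               height: height,
                               bitsPerComponent: 8,
                               bytesPerRow: width * 4,
                               space: colorSpace,
                               bitmapInfo: bitmapInfo) {
        // Draw into the offscreen context; nothing appears on screen.
        context.setFillColor(red: 1, green: 0, blue: 0, alpha: 1)
        context.fill(CGRect(x: 0, y: 0, width: width, height: height))

        // Extract the result as a CGImage for later display or saving.
        let image = context.makeImage()
        _ = image
    }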

The main classes provided by Quartz include:

CGContext: represents a graphics context;

CGPath: creates paths using vector graphics, which can be filled and stroked;

CGImage: used to represent bitmaps;

CGLayer: used to represent a drawing layer that can be used for repeated drawing and offscreen drawing;

CGPattern: used to represent patterns for repeated drawing;

CGShading and CGGradient: used to draw gradients;

CGColor and CGColorSpace: used for color and color space management;

CGFont: used to draw text;

CGPDFContentStream, CGPDFScanner, CGPDFPage, CGPDFObject, CGPDFStream, CGPDFString, etc.: used to create, parse, and display PDF files.

3. OpenGL ES and GLKit

OpenGL ES is an open-standard, C-based graphics library for embedded systems, used for visualizing 2D and 3D data. OpenGL is designed to translate a set of graphics function calls into commands for the underlying graphics hardware (GPU), which executes those commands to perform complex graphics operations and computations, allowing applications to use the GPU's 2D and 3D drawing capabilities at high performance and high frame rates.

The OpenGL ES specification itself does not define drawing surfaces or drawing windows. To use it on iOS, the application must create an OpenGL ES rendering context, create and configure a framebuffer to store the results of drawing commands, and create and configure one or more render targets.

On iOS, the EAGLContext class provided by EAGL implements the rendering context and maintains the hardware state used by OpenGL ES. EAGL is an Objective-C API that provides the interface for integrating OpenGL ES with Core Animation and UIKit.

Before calling any OpenGL ES functions, you must first initialize an EAGLContext object.

Each thread of an iOS application has a current context. When you call OpenGL ES functions, the state in this context is used or changed.

The EAGLContext class method setCurrentContext: sets the current context of the current thread, and the class method currentContext returns it. Before switching between two contexts on the same thread, the glFlush function must be called to ensure that previously submitted commands are delivered to the graphics hardware.
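A minimal sketch of this setup in Swift (the API version chosen here is an assumption; note that OpenGL ES is deprecated on recent iOS versions):

    import OpenGLES

    if let context = EAGLContext(api: .openGLES3) ?? EAGLContext(api: .openGLES2) {
        // Make this context current for the calling thread before issuing GL calls.
        EAGLContext.setCurrent(context)

        // ... issue OpenGL ES commands here ...

        // Flush pending commands before switching to another context on this thread.
        glFlush()
    }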

You can use OpenGL ES in different ways to render OpenGL ES content to different targets: GLKit and CAEAGLLayer.

To create a full-screen view or integrate OpenGL ES content with a UIKit view, you can use GLKit. When using GLKit, the GLKView class provided by GLKit implements the rendering target and creates and maintains a framebuffer.

To render OpenGL ES content as part of a Core Animation layer, you can use CAEAGLLayer as the rendering target, create a framebuffer, and implement and control the entire drawing process yourself.

GLKit is a set of Objective-C classes that provide an object-oriented interface for using OpenGL ES to simplify the development of OpenGL ES applications. GLKit supports four key areas of 3D application development:

1) The GLKView and GLKViewController classes provide a standard OpenGL ES view and an associated rendering loop. GLKView can be used as the rendering target for OpenGL ES content, and GLKViewController provides control and animation for the rendered content. The view manages and maintains a framebuffer; the application only needs to draw into that framebuffer.

2) GLKTextureLoader provides a way to load texture images automatically from any image format supported by iOS into OpenGL ES textures, performing the appropriate conversions, and supports both synchronous and asynchronous loading.

3) A math library, providing vector, matrix, and quaternion implementations and matrix stack operations corresponding to OpenGL ES 1.1 functionality.

4) The effect classes provide implementations of standard common shading effects. You can configure an effect and the associated vertex data, and the class creates and loads the appropriate shaders. GLKit includes three configurable shading effect classes: GLKBaseEffect implements the key lighting and material modes of the OpenGL ES 1.1 specification, GLKSkyboxEffect provides an implementation of a skybox effect, and GLKReflectionMapEffect extends GLKBaseEffect with reflection-mapping support.

Drawing process using GLKView and OpenGL ES:

1) Create a GLKView object

GLKView objects can be created and configured programmatically or using Interface Builder.

When creating the view programmatically, first create a context and then call the initWithFrame:context: method.

When using the Interface Builder approach, after loading a GLKView from the storyboard, create a context and set it as the view’s context property.

Using GLKit on iOS requires creating an OpenGL ES context of version 2.0 or later.

A GLKit view automatically creates and configures all of its OpenGL ES framebuffer objects and renderbuffers; you control the properties of these objects by modifying the view's drawable properties.

2) Draw OpenGL content (issue drawing commands)

Drawing OpenGL ES content with a GLKit view involves three sub-steps: preparing the OpenGL ES infrastructure, issuing drawing commands, and presenting the rendered content to Core Animation. GLKit itself implements the first and third steps; you only need to implement the second, issuing the appropriate OpenGL ES drawing commands in the view's drawRect: method or in the view delegate's glkView:drawInRect: method.

The GLKViewController class maintains an animation rendering loop (with two phases, update and display), which is used to implement continuous animation of complex scenes.

The frame rate of this rendering loop is reported by the GLKViewController's framesPerSecond property and is requested by setting its preferredFramesPerSecond property, as shown in the sketch below.
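A condensed sketch of this process, assuming a GLKViewController subclass whose view is a GLKView (the class name and clear color are illustrative only):

    import GLKit

    class SimpleGLViewController: GLKViewController {
        override func viewDidLoad() {
            super.viewDidLoad()
            // Step 1: create the OpenGL ES context and attach it to the GLKView.
            let context = EAGLContext(api: .openGLES2)!
            (view as! GLKView).context = context
            EAGLContext.setCurrent(context)
            // Request the rendering loop rate.
            preferredFramesPerSecond = 60
        }

        // Step 2: issue OpenGL ES drawing commands; GLKit handles setup and presentation.
        override func glkView(_ view: GLKView, drawIn rect: CGRect) {
            glClearColor(0.0, 0.0, 0.2, 1.0)
            glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        }
    }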

4. Other graphics programming related APIs

1) Core Animation

Core Animation is a set of Objective-C APIs that implement a high-performance compositing engine and provide an easy-to-use programming interface for adding smooth motion and dynamic feedback to the user interface.

Core Animation is the basis on which UIKit implements animations and transformations, and it is also responsible for compositing views. Using Core Animation, you can implement custom animations with fine-grained control, and create complex layered 2D views that support animations and transformations.

Core Animation is not itself a drawing system; it is an infrastructure for compositing and manipulating an application's display content in hardware. At the heart of this infrastructure is the layer object, which is used to manage and manipulate display content. In iOS, each view corresponds to a Core Animation layer object, and, like views, layers are organized into a layer tree. A layer captures the view's content as a bitmap that can be manipulated easily by the graphics hardware. In most applications layers are used as a way of managing views, but you can also create independent layers in the layer tree to display content that views do not support.

OpenGL ES content can also be integrated with Core Animation content.

To implement animation with Core Animation, you modify a layer's property values to trigger the execution of an action object; different action objects implement different animations.

Core Animation provides the following set of classes that applications can use to provide support for different types of animations:

CAAnimation is an abstract base class. CAAnimation uses the CAMediaTiming and CAAction protocols to provide timing (such as duration, speed, and repeat count) and action behaviors (start, stop, and so on) for animations.

CAPropertyAnimation is an abstract subclass of CAAnimation that provides support for animation of layer properties specified by a key path;

CABasicAnimation is a concrete subclass of CAPropertyAnimation that provides simple interpolation of a layer property between two values (see the sketch after this list).

CAKeyframeAnimation is also a specific subclass of CAPropertyAnimation, providing key frame animation support.

CATransition is a specific subclass of CAAnimation that provides effects that affect the entire layer content.

CAAnimationGroup is also a subclass of CAAnimation, allowing animation objects to be grouped together and run simultaneously.
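A minimal sketch of using CABasicAnimation to animate a layer property (the layer, values, and key are hypothetical):

    import QuartzCore

    func fadeOut(_ layer: CALayer) {
        // Animate the layer's opacity from fully opaque to transparent.
        let fade = CABasicAnimation(keyPath: "opacity")
        fade.fromValue = 1.0
        fade.toValue = 0.0
        fade.duration = 0.5
        layer.add(fade, forKey: "fadeOut")
    }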

2) Image I/O

Image I/O provides an interface for reading and writing image file data in most formats. It mainly involves two opaque types: the image source, CGImageSourceRef, and the image destination, CGImageDestinationRef.
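A rough sketch of reading the first image from a file with Image I/O (the file path is hypothetical):

    import Foundation
    import ImageIO

    let url = URL(fileURLWithPath: "/tmp/example.png") as CFURL
    if let source = CGImageSourceCreateWithURL(url, nil),
       let image = CGImageSourceCreateImageAtIndex(source, 0, nil) {
        print("Loaded image: \(image.width) x \(image.height)")
    }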

3) Sprite Kit

Sprite Kit is built on OpenGL ES. Sprite Kit uses graphics hardware to efficiently render animation frames, so you can animate and render any 2D texture images or game sprites at a high frame rate. The content you render includes sprites, text, CGPath shapes, videos, and more.

In Sprite Kit, animation and presentation are performed by an SKView view object. The game content is organized into scenes represented by SKScene objects. A scene contains sprites and other content to be presented. A scene also implements the logic and content processing associated with each frame.

An SKView presents only one scene at a time. While a scene is presented, its associated animations and per-frame logic run automatically. When switching scenes, the SKTransition class is used to animate between the two scenes.
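A minimal sketch of presenting a scene in an SKView (the scene class, label, and transition are illustrative assumptions):

    import SpriteKit

    class HelloScene: SKScene {
        override func didMove(to view: SKView) {
            backgroundColor = .black
            let label = SKLabelNode(text: "Hello, Sprite Kit")
            label.position = CGPoint(x: frame.midX, y: frame.midY)
            addChild(label)
        }
    }

    func presentHelloScene(in skView: SKView) {
        let scene = HelloScene(size: skView.bounds.size)
        // Transition into the new scene with a one-second cross-fade.
        skView.presentScene(scene, transition: SKTransition.crossFade(withDuration: 1.0))
    }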

4) SceneKit

SceneKit is an Objective-C framework built on 3D graphics technology that includes a high-performance rendering engine and a high-level, descriptive API. You can use this framework to create simple games and rich, interactive user interfaces. To use SceneKit, you only need to describe the content of your scene (geometry, materials, lights, cameras, and so on) with the descriptive API, along with the actions or animations you want to perform on that content.

SceneKit's content is organized into a tree structure of nodes, called a scene graph. A scene consists of a root node that defines the scene's coordinate space, and other nodes that define the scene's visual content. SceneKit displays the scene on a view, processes the scene graph, and performs animations before presenting each frame on the GPU.

The main classes included in SceneKit are listed below, followed by a short usage sketch:

SCNView & SCNSceneRenderer: SCNView is a view that displays or presents SceneKit content. SCNSceneRenderer is a protocol that defines some important methods for views.

SCNScene: Represents a scene, which is a container for all SceneKit content. The scene can be loaded from a file created using a 3D authoring tool or created programmatically. The scene needs to be displayed on a view.

SCNNode: The basic building block of a scene, representing a node in the scene graph tree. The scene graph tree defines the logical structure between nodes on the scene and provides the visual content of the scene by attaching geometries, lights, and cameras to a node.

SCNGeometry, SCNLight, SCNCamera: the classes for geometry, lights, and cameras. SCNGeometry provides shapes, text, or custom vertex data for the scene, SCNLight provides lighting for the scene, and SCNCamera provides the point of view from which the scene is shown.

SCNMaterial: Defines surface appearance properties for SCNGeometry objects, specifying how the object surface is shaded or textured and how it reacts to light.
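A rough sketch of assembling a scene from these classes and displaying it in an SCNView (the geometry, positions, and colors are arbitrary illustration values):

    import SceneKit
    import UIKit

    func showScene(in scnView: SCNView) {
        let scene = SCNScene()

        // Attach a camera to a node and add it to the scene graph.
        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(x: 0, y: 0, z: 15)
        scene.rootNode.addChildNode(cameraNode)

        // Attach geometry (a box) with a simple material.
        let box = SCNBox(width: 4, height: 4, length: 4, chamferRadius: 0.2)
        box.firstMaterial?.diffuse.contents = UIColor.orange
        scene.rootNode.addChildNode(SCNNode(geometry: box))

        // Attach a light.
        let lightNode = SCNNode()
        lightNode.light = SCNLight()
        lightNode.position = SCNVector3(x: 0, y: 10, z: 10)
        scene.rootNode.addChildNode(lightNode)

        // Display the scene in the view.
        scnView.scene = scene
    }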

Animation of SceneKit content:

SceneKit animations are based on the Core Animation framework and can be created implicitly or explicitly.

Implicit animation is achieved by changing the animatable properties of a node: SceneKit automatically combines all changes to the scene, including node property changes, made during a single pass of the run loop into an atomic operation called a transaction, represented by the SCNTransaction class; when the transaction's animation duration is set to a nonzero value, all changes to animatable node properties are animated automatically.

As shown in the following code snippet:

    func fallAndFade(sender: AnyObject) {
        SCNTransaction.animationDuration = 1.0
        textNode.position = SCNVector3(x: 0.0, y: -10.0, z: 0.0)
        textNode.opacity = 0.0
    }

When creating an animation explicitly, you choose a CAAnimation subclass to create a specific type of animation, use key-value coding to specify the animated property and set the animation parameters, and then attach the created animation to one or more elements of the scene. You can use different Core Animation classes to combine or sequence several animations, or to create animations that interpolate property values between several keyframe values.

The following code snippet is an example of explicitly creating an animation:

    let animation = CABasicAnimation(keyPath: "geometry.extrusionDepth")
    animation.fromValue = 0.0
    animation.toValue = 100.0
    animation.duration = 1.0
    animation.autoreverses = true
    animation.repeatCount = Float.infinity
    textNode.addAnimation(animation, forKey: "extrude")

SceneKit also supports loading CAAnimation animation objects from a scene file using the SCNSceneSource class and then attaching them to SCNNode objects.
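A rough sketch of that pattern, assuming a scene file named "scene.dae" bundled with the app that contains an animation with the identifier "bounce" (both names are hypothetical):

    import SceneKit

    func loadBounceAnimation(onto targetNode: SCNNode) {
        if let url = Bundle.main.url(forResource: "scene", withExtension: "dae"),
           let source = SCNSceneSource(url: url, options: nil),
           let animation = source.entryWithIdentifier("bounce", withClass: CAAnimation.self) {
            // Attach the animation loaded from the scene file to the node.
            targetNode.addAnimation(animation, forKey: "bounce")
        }
    }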

5) Metal

The Metal framework is a low-level API, similar in role to OpenGL ES, that supports GPU-accelerated advanced 3D graphics rendering and data-parallel computation. Metal interacts with the 3D graphics hardware and provides a fine-grained, low-level, modern API for organizing, processing, and submitting graphics and compute commands and for managing the associated resources and data. The goal of Metal is to minimize the CPU overhead of executing GPU tasks, eliminate performance bottlenecks when the GPU performs graphics and data-parallel compute operations, and allow commands to be created and submitted to the GPU efficiently from multiple threads in parallel.

Metal also provides a shading language for writing the graphics shader and compute functions used by Metal applications. Code written in the Metal shading language can be compiled along with the application code at build time and then loaded onto the GPU for execution at runtime; compiling Metal shading language code at runtime is also supported.

The Metal architecture includes the following important classes or protocols:

1. MTLDevice protocol and objects

A MTLDevice represents a GPU device that executes commands. The MTLDevice protocol defines relevant interfaces for it, including interfaces for querying device capability attributes and creating other device-specific objects, such as creating command queues, allocating buffers from memory, and creating textures.

The application obtains the system's default MTLDevice object by calling the MTLCreateSystemDefaultDevice function.
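A minimal sketch of obtaining the device in Swift:

    import Metal

    // Obtain the default GPU device; this returns nil on devices without Metal support.
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("Metal is not supported on this device")
    }
    print("Using GPU: \(device.name)")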

2. Commands and command encoders

In the Metal framework, 3D graphics rendering commands, calculation commands, and blitting commands must be encoded in the corresponding format before being submitted to a specific device GPU for execution so that they can be recognized and executed by the GPU.

The Metal framework provides an encoder protocol for each command:

MTLRenderCommandEncoder protocol: Provides an interface for encoding 3D graphics rendering commands to be executed during a single rendering cycle. MTLRenderCommandEncoder objects are used to represent the rendering state and drawing commands of a graphics rendering process.

MTLComputeCommandEncoder protocol: provides an interface for encoding data parallel computing tasks.

MTLBlitCommandEncoder protocol: Provides an interface for encoding simple copy operations between buffers and textures.

Only one command encoder can be active at a time to add commands to a command buffer, i.e. each command encoder must be completed before another command encoder using the same command buffer is created.

To support multiple tasks executing in parallel, Metal provides the MTLParallelRenderCommandEncoder protocol, which allows multiple MTLRenderCommandEncoder objects running on different threads to encode rendering commands into the same command buffer at the same time. Each thread has its own command encoder object, which is accessed only by that thread.

The MTLParallelRenderCommandEncoder object allows the command encoding of a rendering loop to be decomposed into multiple command encoders for encoding, using multi-threaded parallel processing to improve processing efficiency.

A command encoder object ends by calling its endEncoding method.

Command encoder object creation:

Command encoder objects are created by an MTLCommandBuffer object. The MTLCommandBuffer protocol defines the following methods to create command encoder objects of the corresponding types (a short sketch follows this list):

The renderCommandEncoderWithDescriptor: method creates an MTLRenderCommandEncoder object for performing graphics rendering tasks. Its MTLRenderPassDescriptor parameter describes the target of the encoded rendering commands: a collection of attachments that can include up to four color attachments, one depth attachment, and one stencil attachment. The render target is specified through the attachment properties of the MTLRenderPassDescriptor object.

The computeCommandEncoder method creates a MTLComputeCommandEncoder object for a data-parallel computation task.

The blitCommandEncoder method creates a MTLBlitCommandEncoder object for memory blit operations, texture fill operations, and mipmap generation.

The parallelRenderCommandEncoderWithDescriptor: method creates a MTLParallelRenderCommandEncoder object. The rendering target is specified by the MTLRenderPassDescriptor parameter.
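A rough sketch of creating a render command encoder for one pass, assuming a command buffer and a drawable obtained from a CAMetalLayer already exist (the Swift method makeRenderCommandEncoder corresponds to renderCommandEncoderWithDescriptor: above):

    import Metal
    import QuartzCore

    func encodeClearPass(commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
        // Describe the render target: one color attachment cleared to a solid color.
        let passDescriptor = MTLRenderPassDescriptor()
        passDescriptor.colorAttachments[0].texture = drawable.texture
        passDescriptor.colorAttachments[0].loadAction = .clear
        passDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0.3, alpha: 1)
        passDescriptor.colorAttachments[0].storeAction = .store

        // Create the encoder, encode any drawing commands, then end encoding.
        if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor) {
            encoder.endEncoding()
        }
    }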

3. Command buffer MTLCommandBuffer object and protocol

Commands encoded by a command encoder are added to an MTLCommandBuffer object, called the command buffer, which is then committed to the GPU so that the commands it contains can be executed.

The MTLCommandBuffer protocol defines an interface for CommandBuffer objects and provides methods for creating command encoders, submitting CommandBuffers to a command queue, and checking status.

A CommandBuffer object contains the encoded commands intended to be executed on a specific device (GPU). Once all encoding is complete, the CommandBuffer itself must be submitted to a command queue and the command buffer is marked as ready so that it can be executed by the GPU.

In standard applications, usually the rendering commands for one rendering frame are encoded into one command buffer using one thread.

Creation of MTLCommandBuffer object and corresponding methods:

A MTLCommandBuffer object is created by the commandBuffer method or commandBufferWithUnretainedReferences method of MTLCommandQueue.

A MTLCommandBuffer object can only be submitted to the MTLCommandQueue object that created it.

An MTLCommandBuffer object also implements the following methods defined by the protocol (a usage sketch follows the list):

The enqueue method is used to reserve a position in the command queue for the command buffer.

The commit method commits the MTLCommandBuffer object for execution.

The addScheduledHandler: method is used to register a code execution block for a command buffer object to be called when the command buffer is scheduled. Multiple scheduled execution blocks can be registered for a command buffer object.

The waitUntilScheduled method waits until a command buffer is scheduled and all scheduled execution blocks registered for that command buffer have completed execution.

The addCompletedHandler: method registers a code execution block for a command buffer object that is called after the device has completed executing the command buffer. You can also register multiple completion execution code blocks for a command buffer object.

The waitUntilCompleted method waits for the commands in the command buffer to be executed by the device and for all completion execution blocks registered for the command buffer to be executed.

The presentDrawable: method is used to present the contents of a display resource (CAMetalDrawable object) when the command buffer object is scheduled.
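A rough sketch of one frame's command buffer lifecycle (the queue and drawable are assumed to exist; the Swift names makeCommandBuffer, addCompletedHandler, present, and commit correspond to the Objective-C methods described above):

    import Metal
    import QuartzCore

    func submitFrame(queue: MTLCommandQueue, drawable: CAMetalDrawable) {
        guard let commandBuffer = queue.makeCommandBuffer() else { return }

        // ... create command encoders and encode commands here ...

        // Register a completion handler, called after the GPU finishes execution.
        commandBuffer.addCompletedHandler { _ in
            print("GPU finished executing this command buffer")
        }

        // Present the drawable's contents when the command buffer is scheduled.
        commandBuffer.present(drawable)

        // Commit the command buffer to its queue for execution.
        commandBuffer.commit()
    }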

4. MTLCommandQueue protocol and command queue object

An MTLCommandQueue object is a queue that holds command buffers. The command queue organizes the execution order of the command buffer objects it contains and controls when the commands in those command buffers are executed.

The MTLCommandQueue protocol defines an interface for the command queue. The main interface includes the creation of command buffer objects.

Creation of MTLCommandQueue object:

Use the newCommandQueue method or newCommandQueueWithMaxCommandBufferCount: method of the MTLDevice object to create a command queue object.
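A minimal sketch of creating a command queue (makeCommandQueue is the Swift name of newCommandQueue):

    import Metal

    let device = MTLCreateSystemDefaultDevice()!
    let commandQueue = device.makeCommandQueue()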

To summarize the relationship between these objects: for a render command encoder, you must set the rendering-related state and set up and create the related Metal resource objects, such as the buffers and textures used for rendering.

The states specified for the render command encoder include a render pipeline state (Render Pipeline State), a depth and stencil state (Depth Stencil State), and a sampler state (Sampler State).

A Blit command encoder is associated with a buffer and a texture, and is used to perform a Blit operation between the two.

There are three types of MTLResource Metal resource objects that a command encoder can allocate when specifying graphics or compute functionality:

MTLBuffer represents unformatted memory that can contain any type of data. MTLBuffer objects are commonly used for vertex data, shader constants, and compute state data.

MTLTexture represents image data with a specific texture type and pixel format. Texture objects can be used as sources for vertex, fragment, or compute functions, or as the output target for graphics rendering specified in a render pass descriptor.

A MTLSamplerState object is used when a graphics or compute function performs texture sampling operations on a MTLTexture to define addressing, filtering, and other properties.

The graphics render command encoder MTLRenderCommandEncoder uses its setVertex* and setFragment* method groups to assign one or more resources as arguments to the corresponding shader functions.
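A rough sketch of creating a vertex buffer and assigning it to a render command encoder (the vertex layout and buffer index are illustrative assumptions; setVertexBuffer is the Swift spelling of the setVertex* methods mentioned above):

    import Metal

    let vertices: [Float] = [
         0.0,  0.5, 0.0, 1.0,   // x, y, z, w
        -0.5, -0.5, 0.0, 1.0,
         0.5, -0.5, 0.0, 1.0,
    ]

    func bindVertexData(device: MTLDevice, encoder: MTLRenderCommandEncoder) {
        // MTLBuffer: unformatted memory holding the vertex data.
        let vertexBuffer = device.makeBuffer(bytes: vertices,
                                             length: vertices.count * MemoryLayout<Float>.size,
                                             options: [])
        // Bind the buffer as argument 0 of the vertex shader function.
        encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    }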

5. CAMetalLayer object and CAMetalDrawable protocol

Core Animation defines the CAMetalLayer class and the CAMetalDrawable protocol to provide a layer-backed view for presenting Metal content. The CAMetalLayer object contains information about the location, size, and visual properties (background color, borders, and shadows) of the content to be presented, as well as the resources used to present Metal content. The CAMetalDrawable protocol is an extension of MTLDrawable that exposes a texture conforming to the MTLTexture protocol, so that the drawable can be used as the target of rendering commands.

To render Metal content into a CAMetalLayer object, you should obtain a CAMetalDrawable object for each rendering pass, get the MTLTexture object it contains, and set that texture on a color attachment of the render pass descriptor (MTLRenderPassDescriptor) to specify it as the target of the graphics rendering commands.

A CAMetalDrawable object is obtained by calling the nextDrawable method of the CAMetalLayer object.

After obtaining a drawable resource as the target of the graphics commands, you can follow these steps to complete the drawing (a condensed sketch follows the list).

1) First create a MTLCommandQueue object, and then use it to create a MTLCommandBuffer object;

2) Create a MTLRenderPassDescriptor object and specify the set of attached points that are used as the target of the encoded rendering commands in the graphics buffer; then use this MTLRenderPassDescriptor object to create a MTLRenderCommandEncoder object;

3) Create the corresponding Metal resource object to store the resource data used for drawing, such as vertex coordinates and vertex color data; and call the setVertex*:offset:atIndex: and setFragment*:offset:atIndex: methods of MTLRenderCommandEncoder to specify the resources used for the rendering encoder;

4) Create a MTLRenderPipelineDescriptor object and set its vertexFunction and fragmentFunction properties, using the corresponding MTLFunction shader function objects obtained from the compiled Metal shading language code.

5) Use the newRenderPipelineStateWithDescriptor:error: method of MTLDevice or a similar method to create a MTLRenderPipelineState object from the MTLRenderPipelineDescriptor; then call the setRenderPipelineState: method of MTLRenderCommandEncoder to set the render pipeline state for the render command encoder object MTLRenderCommandEncoder;

6) Call the drawPrimitives:vertexStart:vertexCount: method of MTLRenderCommandEncoder to perform graphics rendering, then call the endEncoding method of MTLRenderCommandEncoder to end the encoding of this rendering process, and finally call the commit method of MTLCommandBuffer to execute the entire drawing command on the GPU.
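A condensed sketch of these steps in Swift (the shader function names, vertex buffer, and CAMetalLayer are assumptions for illustration; Swift names such as makeCommandQueue, makeRenderCommandEncoder, and makeRenderPipelineState correspond to the Objective-C methods named above):

    import Metal
    import QuartzCore

    func drawFrame(device: MTLDevice, layer: CAMetalLayer, vertexBuffer: MTLBuffer) {
        guard let drawable = layer.nextDrawable(),
              let library = device.makeDefaultLibrary(),
              let vertexFunction = library.makeFunction(name: "vertex_main"),     // hypothetical shader name
              let fragmentFunction = library.makeFunction(name: "fragment_main")  // hypothetical shader name
        else { return }

        // Step 1: command queue and command buffer.
        guard let queue = device.makeCommandQueue(),
              let commandBuffer = queue.makeCommandBuffer() else { return }

        // Step 2: render pass descriptor targeting the drawable's texture, then the encoder.
        let passDescriptor = MTLRenderPassDescriptor()
        passDescriptor.colorAttachments[0].texture = drawable.texture
        passDescriptor.colorAttachments[0].loadAction = .clear
        passDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 1)
        passDescriptor.colorAttachments[0].storeAction = .store
        guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor) else { return }

        // Step 3: bind resource data (here, a previously created vertex buffer).
        encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)

        // Step 4: pipeline descriptor with the vertex and fragment functions.
        let pipelineDescriptor = MTLRenderPipelineDescriptor()
        pipelineDescriptor.vertexFunction = vertexFunction
        pipelineDescriptor.fragmentFunction = fragmentFunction
        pipelineDescriptor.colorAttachments[0].pixelFormat = layer.pixelFormat

        // Step 5: create the pipeline state and set it on the encoder.
        guard let pipelineState = try? device.makeRenderPipelineState(descriptor: pipelineDescriptor) else { return }
        encoder.setRenderPipelineState(pipelineState)

        // Step 6: draw, end encoding, present, and commit.
        encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
        encoder.endEncoding()
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }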
