Authors | jzg, a senior development engineer at Ctrip focusing on Android development; zcc, a senior development engineer at Ctrip focusing on iOS development.

1. Introduction

With the popularity of short videos on mobile devices, audio and video editing tools play an important role in content apps. A rich set of transition effects can give short videos more striking visuals and thereby better win users' favor. This article briefly introduces OpenGL and its related APIs, covers the basics of the GLSL shading language, and shows how to achieve image transition effects by writing custom shader programs.

2. Why use OpenGL, and the difficulty of using it

Video transition effects are inseparable from graphics processing. Mobile devices generally use the GPU for 3D-graphics-related computation; compared with the CPU, the GPU is far more efficient at image and animation processing. Taking Android as an example, mobile devices provide two different sets of GPU APIs: Vulkan and OpenGL ES. Vulkan is only supported on Android 7.0 and above, OpenGL ES is supported on all Android versions, and iOS has no official Vulkan support. As a subset of OpenGL designed for embedded devices such as mobile phones, PDAs, and game consoles, OpenGL ES removes many non-essential features such as glBegin/glEnd and complex primitives like quadrilaterals and polygons. By eliminating this redundancy it provides a library that is easier to learn and easier to implement in mobile graphics hardware. Today, thanks to its broad system support and streamlined feature set, OpenGL ES has become one of the most widely used GPU APIs in short-video image processing. For convenience, "OpenGL" in this article refers to OpenGL ES.

The main difficulty in using OpenGL to implement video transitions is writing the shader for the transition effect.
For this, we can refer to the open-source GLTransitions website, which collects many open-source transition effects that we can learn from and reuse; it is introduced in more detail below.

3. Basic introduction to OpenGL and its application to transitions

OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. What can OpenGL be used for?
We use OpenGL to process video transitions, which is an instance of using OpenGL to process videos, graphics, and images as mentioned above. When drawing with OpenGL, we mainly focus on the vertex shader and the fragment shader: the vertex shader determines the vertex positions of the drawn graphics, and the fragment shader is responsible for coloring them. The rendering process has the following steps:

1) Vertex data input: vertex data provides the input for subsequent stages such as the vertex shader.

2) Vertex shader: its main function is to perform coordinate transformations.

3) Geometry shader: unlike the vertex shader, the input of the geometry shader is a complete primitive (for example, a point), and the output can be one or more other primitives (for example, a triangle), or no primitive at all. The geometry shader is optional.

4) Primitive assembly and rasterization: primitive assembly combines the input vertices into the specified primitives. After primitive assembly and screen mapping, object coordinates are transformed into window coordinates. Rasterization is a discretization process that converts continuous 3D objects into discrete screen pixels.

5) Fragment shader: the fragment shader determines the final color of each pixel on the screen.

6) Testing and blending: the last stage of rendering is the test-and-blend stage. The tests include the scissor test, alpha test, stencil test, and depth test. Fragments that fail a test are discarded and skip the blending stage; fragments that pass all tests enter the blending stage.

After the above steps, OpenGL can display the final graphics on the screen. In the OpenGL drawing process, the stages we can program are the vertex shader and the fragment shader, which are also the two shaders required in the rendering process.
The vertex shader processes data passed in from the client, applies transformations, and performs other mathematical operations to calculate lighting, displacement, color values, and so on. For example, to render a triangle with 3 vertices, the vertex shader is executed 3 times, once per vertex. After the three vertices are assembled, the triangle is rasterized fragment by fragment, and each fragment is filled by executing the fragment shader, which outputs the final color value we see on the screen.

When drawing graphics, we use a variety of OpenGL state variables: the current color, the current view and projection transformations, line and polygon stipple patterns, the polygon drawing mode, pixel packing conventions, light positions and characteristics, material properties of drawn objects, and so on. Each state (or mode) can be set and remains in effect until it is modified again. For example, you can set the current color to white, red, or any other color, and every object drawn afterwards uses this color until the current color is changed again. Many state variables that represent modes are toggled with glEnable() and glDisable(). This is why we say OpenGL is a state machine.

Because OpenGL performs a series of operations in sequence during rendering, like an assembly line, we call the OpenGL drawing process a rendering pipeline. Pipelines come in two flavors: fixed pipelines and programmable pipelines. We use the programmable pipeline, in which the positions, colors, and texture coordinates of vertices, how the data is modified after textures are passed in, and how the generated fragments produce results can all be freely controlled. The following briefly introduces the pipeline and GLSL, the shading language that is essential to the programmable pipeline.

Pipeline: the rendering pipeline can be understood as a rendering assembly line.
It takes as input the descriptive data of the 3D objects to be rendered (for example, vertex coordinates, vertex colors, vertex textures), and after a series of transformations and rendering stages, outputs one frame of the final image. Simply put, a batch of raw graphics data passes through a pipeline, undergoes various transformations and processing, and finally appears on the screen. Pipelines are divided into fixed pipelines and programmable pipelines.

Fixed pipeline: when rendering images, we can only apply a fixed set of shader effects, for example by calling the stock shaders of the GLShaderManager class.

Programmable pipeline: when rendering images, we can process the data with custom vertex shaders and fragment shaders. Since OpenGL has a wide range of usage scenarios, the programmable pipeline lets us handle tasks that the fixed pipeline or stock shaders cannot.

The OpenGL Shading Language (GLSL) is the language used for shader programming in OpenGL: short custom programs written by developers that execute on the GPU (Graphics Processing Unit), replacing parts of the fixed rendering pipeline and making individual stages of the pipeline programmable. It can read the current OpenGL state, which is passed in through GLSL built-in variables. GLSL uses C as its base, providing a high-level shading language that avoids the complexity of assembly languages or hardware-specific languages. GLSL shader code is divided into two parts: the vertex shader and the fragment shader. A shader is an editable program used to implement image rendering in place of the fixed rendering pipeline: the vertex shader is mainly responsible for computing the geometric relationships of vertices, while the fragment (pixel) shader is mainly responsible for computing the final colors.
The vertex shader is a programmable processing unit, generally used for per-vertex operations such as transformations (rotation/translation/projection, etc.), lighting, and material application. The vertex shader is a program that operates on each vertex and is executed once per vertex. It replaces the vertex transformation and lighting calculations of the original fixed pipeline and is developed with GLSL. We can implement vertex transformation, lighting, and other functionality as needed in the shading language, which greatly increases the flexibility of the program.

The vertex shader works as follows: the original vertex geometry (vertex coordinates, colors, textures) and other attributes are passed to the vertex shader; the custom vertex shading program produces the transformed vertex positions, which are passed on to the subsequent primitive assembly stage; the corresponding vertex texture, color, and other information are passed to the fragment shader after rasterization. The inputs of the vertex shader are mainly the attributes, uniforms, samplers, and temporary variables of the vertex being processed; the outputs are mainly the varyings produced by the vertex shader and some built-in output variables.

Vertex shader example code:

//Vertex position
//High precision

3.1.4 Three ways of passing data to OpenGL shaders

The vertex and fragment shaders above use the attribute, varying, and uniform type qualifiers. The following is a brief introduction to these three types.

attribute: attribute variables can only be used in vertex shaders. They generally represent per-vertex data such as vertex coordinates, normals, texture coordinates, and vertex colors.
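The vertex shader example above survives only as its comments. A minimal sketch of what such a shader typically looks like for the texture-drawing case discussed in this article (the names a_position, a_texCoord, and v_texCoord are illustrative assumptions, not the authors' original code):

```glsl
// Illustrative vertex shader: passes vertex positions through unchanged
// and forwards texture coordinates to the fragment shader.
precision highp float;       // high precision for float

attribute vec4 a_position;   // vertex position
attribute vec2 a_texCoord;   // texture coordinate
varying vec2 v_texCoord;     // interpolated and passed to the fragment shader

void main() {
    v_texCoord = a_texCoord;
    gl_Position = a_position;
}
```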
uniform: a uniform variable is passed to the shader by the external application. A uniform is like a constant in C: a shader can read but not modify it.

varying: varyings carry values from the vertex shader to the fragment shader, for example a vertex color passed on to the fragment shader.

Note: attributes cannot be passed directly to the fragment shader; if needed there, they must be passed indirectly through the vertex shader. Uniforms and texture data can be passed directly to both the vertex shader and the fragment shader. Which method to use depends on the requirements.

3.1.5 How to use OpenGL to draw a picture

The sections above introduced vertex shaders and fragment shaders, as well as how to pass data to an OpenGL program. Now we will use these concepts to draw a picture on the screen with an OpenGL program, which is also the prerequisite for building picture-carousel transition effects. To OpenGL, drawing a picture is just drawing a texture. Here, for display purposes only, no transformation matrix is used to handle the aspect ratio of the picture; it is simply stretched over the entire window.

First, define a vertex shader:

attribute vec4 a_position; //Incoming vertex coordinates

Then define a fragment shader:

precision mediump float; //Define float precision; the texture coordinates use a two-dimensional float vector vec2

Here is the code for drawing an image texture with these two shaders on the Android side:

class SimpleImageRender(private val context: Context) : GLSurfaceView.Renderer {

This completes the drawing of a picture.

3.2 Application of OpenGL transition effects

3.2.1 Porting open-source transition effects

What is a transition effect? Generally speaking, it is the transition between two video images. In OpenGL, an image transition is actually a switch between two textures.
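The fragment shader of section 3.1.5 above survives only as its first line. A minimal sketch of a fragment shader that samples one image texture, under the same naming assumptions as before (u_texture and v_texCoord are illustrative names):

```glsl
// Illustrative fragment shader: samples a single image texture
// at the interpolated texture coordinate.
precision mediump float;      // define float precision

varying vec2 v_texCoord;      // texture coordinate from the vertex shader
uniform sampler2D u_texture;  // the image texture

void main() {
    gl_FragColor = texture2D(u_texture, v_texCoord);
}
```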
Here I recommend an open-source project, GLTransitions, which collects various GL transition effects and their GLSL implementations so that developers can easily port them into their own projects. The project offers nearly 70 transition effects, ranging from easy to difficult, that can readily be used for image or video transitions. Many of them involve common image-processing techniques such as blending, edge detection, and erosion and dilation. For readers who want to learn GLSL, it can not only help them get started quickly but also teach some advanced image-processing techniques in GLSL; it is highly recommended.

Since GLSL code is portable across platforms, porting the effects from GLTransitions to mobile devices is relatively simple. Let's take the first transition effect on the website as an example to walk through the general porting process. First, look at the fragment shader code required for the transition, which is the key to the effect. The sign, mix, fract, and step functions are GLSL built-ins. Here, for demonstration purposes only, we do not use a transformation matrix to handle the aspect ratio of the image but simply fill the entire window.

uniform vec2 direction; // = vec2(0.0, 1.0)

As we can see, the GLTransitions fragment shader code provides the transition effect itself, but the user still needs to make some modifications. Taking the code above as an example, we need to define a transition-progress variable, progress (a float between 0 and 1). There are also the two basic elements of a transition: the image textures. A transition needs two of them, going from texture 1 to texture 2; getFromColor and getToColor are the functions that sample colors from texture 1 and texture 2, respectively.
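For reference, the transition function of this effect (a directional wipe) is short. As published on GLTransitions it looks essentially like the sketch below, where progress, getFromColor, and getToColor are supplied by the site's test harness rather than declared in the snippet:

```glsl
// Directional wipe: shift the sampling coordinate by progress,
// wrap it with fract, and use step() to pick which texture shows.
uniform vec2 direction; // = vec2(0.0, 1.0)

vec4 transition(vec2 uv) {
  vec2 p = uv + progress * sign(direction);
  vec2 f = fract(p);
  return mix(
    getToColor(f),
    getFromColor(f),
    step(0.0, p.y) * step(p.y, 1.0) * step(0.0, p.x) * step(p.x, 1.0)
  );
}
```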
Of course, there is also the indispensable main function, which assigns the color computed by our program to gl_FragColor. So we modify the fragment shader code above as follows:

precision mediump float;

Here is also the vertex shader code, which mainly sets the vertex coordinates and texture coordinates. These two kinds of coordinates were introduced above, so they are not repeated here. The code is as follows:

attribute vec4 a_position;

Now that we have the two key shader programs, the vertex shader and the fragment shader, a basic transition is in place. As long as we use these two shaders in our program and keep updating the two textures and the transition progress frame by frame while drawing, the transition plays. The following is the drawing logic, taking Android as an example:

frameIndex++

The above is the basic process of porting a transition effect from the GLTransitions website to Android; iOS is similar and just as convenient.

3.2.2 Realizing complex transition effects

With the introduction above, we now have a basic understanding of how to process image transitions with OpenGL. However, the approach so far can only apply the same transition to every pair of images, which is rather monotonous. The following is one idea for combining different transition effects when compositing multiple images. Recall that in the porting exercise we used only one OpenGL program. If we instead load multiple OpenGL programs and use the appropriate one during each time period, we can easily combine multiple transition effects.
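The adapted, self-contained fragment shader described above can be sketched as follows. The uniform and varying names (u_texture0, u_texture1, v_texCoord) are our own assumptions; the key additions are the progress uniform, the two sampler uniforms, the getFromColor/getToColor helpers, and a main that writes to gl_FragColor:

```glsl
precision mediump float;

varying vec2 v_texCoord;        // from the vertex shader
uniform sampler2D u_texture0;   // "from" texture (texture 1)
uniform sampler2D u_texture1;   // "to" texture (texture 2)
uniform float progress;         // transition progress, 0.0 -> 1.0
uniform vec2 direction;         // wipe direction, e.g. vec2(0.0, 1.0)

vec4 getFromColor(vec2 uv) { return texture2D(u_texture0, uv); }
vec4 getToColor(vec2 uv)   { return texture2D(u_texture1, uv); }

// Transition body, unchanged from the GLTransitions version.
vec4 transition(vec2 uv) {
  vec2 p = uv + progress * sign(direction);
  vec2 f = fract(p);
  return mix(
    getToColor(f),
    getFromColor(f),
    step(0.0, p.y) * step(p.y, 1.0) * step(0.0, p.x) * step(p.x, 1.0)
  );
}

void main() {
  gl_FragColor = transition(v_texCoord);
}
```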
First, define an IDrawer interface to represent an object that uses one OpenGL program:

interface IDrawer {

Then define a renderer that controls how these IDrawers are used:

class ComposeRender : GLSurfaceView.Renderer {

To keep the demonstration simple, we hard-code the textures and the duration of each transition (i.e., the number of frames it uses). For example, with four pictures numbered 1, 2, 3, and 4, we define three IDrawers A, B, and C: A uses pictures 1 and 2, B uses pictures 2 and 3, and C uses pictures 3 and 4. Each transition then takes 200 frames, which realizes a combined transition across the three OpenGL programs. Here is one of the IDrawer implementation classes:

class HelloWorldTransitionDrawer() : IDrawer {

In this way, multiple transitions can be combined.

4. Conclusion

For graphics processing on mobile devices, OpenGL is widely favored for its efficiency and good compatibility. This article briefly introduced the basic concepts and drawing process of OpenGL to give a preliminary understanding of how OpenGL renders. Within that process, what matters most to us as developers is writing the vertex and fragment shaders in GLSL. When using OpenGL for picture-carousel transitions, the key is writing the shader the transition requires, and the open-source effects on the GLTransitions website are a rich reference: the site provides abundant transition effects and shader code that can easily be ported to the client. For complex transitions, that is, combinations of multiple transition effects, this article also offered an approach: compose multiple OpenGL programs, loading and using the appropriate one at the appropriate time.
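The scheduling described above (three drawers, 200 frames each) reduces to mapping a global frame index to the active drawer and a local progress value. A minimal pure-Kotlin sketch of that mapping, with the function and parameter names (selectDrawer, framesPerTransition) our own rather than from the original code:

```kotlin
// Maps a global frame index to (drawer index, progress within that drawer),
// assuming each transition lasts framesPerTransition frames and drawers
// play back to back. The index is clamped to the last valid frame.
fun selectDrawer(frameIndex: Int, framesPerTransition: Int, drawerCount: Int): Pair<Int, Float> {
    val clamped = frameIndex.coerceIn(0, framesPerTransition * drawerCount - 1)
    val drawer = clamped / framesPerTransition
    val progress = (clamped % framesPerTransition).toFloat() / framesPerTransition
    return Pair(drawer, progress)
}
```

In onDrawFrame, the renderer would increment its frame counter, call selectDrawer(frameIndex, 200, 3), and forward the progress value to the selected IDrawer's OpenGL program.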
Due to space constraints, this article shares only part of our thinking and practice in developing video transition effects based on OpenGL. We hope it is helpful.