Using OpenGL to generate transition effects in mobile applications

Author | jzg, a senior front-end development engineer at Ctrip, focusing on Android development; zcc, a senior front-end development engineer at Ctrip, focusing on iOS development.

1. Introduction

With the popularity of short video on mobile devices, audio and video editing tools play an important role in content apps. A rich variety of transitions can give short videos more eye-catching effects and thus better win users over. This article mainly covers a brief introduction to OpenGL and the use of its related APIs, the basics of the GLSL shading language, and how to achieve image transition effects by writing custom shader programs.

2. Why use OpenGL and the difficulties in using it

Video transition effects are inseparable from graphics processing. Mobile devices generally use the GPU for graphics-related computation, since the GPU handles image and animation workloads far more efficiently than the CPU. Taking Android as an example, the platform provides two different GPU APIs: Vulkan and OpenGL ES. Vulkan is only supported on devices running Android 7.0 and above, whereas OpenGL ES is supported on all Android versions, and iOS has no official Vulkan support. As a subset of OpenGL designed for embedded devices such as phones, PDAs, and game consoles, OpenGL ES removes features that are not strictly necessary, such as glBegin/glEnd and complex primitives like quadrilaterals and polygons, trimming redundant functionality to provide a library that is easier to learn and easier to implement in mobile graphics hardware.
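For instance, before choosing a rendering path on Android, an app can check the highest OpenGL ES version the device reports and then request an ES 2.0 context. A minimal sketch using standard framework APIs:

 import android.app.ActivityManager
import android.content.Context
import android.opengl.GLSurfaceView

fun supportsGles2(context: Context): Boolean {
    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    //reqGlEsVersion packs major/minor versions: 0x00020000 means ES 2.0
    return am.deviceConfigurationInfo.reqGlEsVersion >= 0x20000
}

fun makeGlView(context: Context): GLSurfaceView =
    GLSurfaceView(context).apply {
        //Request an OpenGL ES 2.0 context before attaching a renderer
        setEGLContextClientVersion(2)
    }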

Currently, in short-video image processing, OpenGL ES has become one of the most widely used GPU APIs thanks to its broad system support and highly streamlined feature set. For brevity, "OpenGL" in this article refers to OpenGL ES.

The difficulty of using OpenGL for video transitions lies in writing the transition shader. For this we can turn to the open-source GLTransitions website, which collects many open-source transition effects we can study and borrow from. It is introduced in more detail below.

3. Basic introduction and transition application of OpenGL

OpenGL is an open graphics library, a cross-language, cross-platform application programming interface for rendering 2D and 3D vector graphics. What can OpenGL be used for?

  • Video, graphics, and image processing
  • 2D/3D game engine development
  • Scientific visualization
  • Medical software development
  • CAD (computer-aided design)
  • Virtual and augmented reality (VR, AR)
  • AI

Our use of OpenGL for video transitions falls under the first item above: processing video, graphics, and images.

When using OpenGL for drawing, we mainly focus on vertex shaders and fragment shaders. Vertex shaders are used to determine the vertex positions of the drawn graphics, and fragment shaders are responsible for adding colors to the graphics. The main drawing process is as follows:

The rendering process has the following steps:

1) Vertex data input:

Vertex data is used to provide processing data for subsequent stages such as vertex shaders.

2) Vertex Shader:

The main function of the vertex shader is to perform coordinate transformation.

3) Geometry Shader:

Unlike vertex shaders, the input of geometry shaders is a complete primitive (for example, a point), and the output can be one or more other primitives (for example, a triangle), or no primitives. Geometry shaders are optional.

4) Primitive assembly and rasterization:

Primitive assembly assembles the input vertices into specified primitives. After the primitive assembly and screen mapping stages, we transform the object coordinates into window coordinates. Rasterization is a discretization process, which converts 3D continuous objects into discrete screen pixels.

5) Fragment shader:

Fragment shaders are used to determine the final color of a pixel on the screen.

6) Testing and blending:

The last stage of rendering is the testing and blending stage. The tests include the scissor test, alpha test, stencil test, and depth test. Fragments that fail a test are discarded and skip the blending stage; fragments that pass all tests move on to blending.
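For illustration, a fragment that survives the tests can be blended with the color already in the framebuffer. A minimal GLES20 sketch (not needed by the transition code later in this article):

 //Enable depth testing and alpha blending
GLES20.glEnable(GLES20.GL_DEPTH_TEST)
GLES20.glEnable(GLES20.GL_BLEND)
//Classic "source over" blending: src.rgb * src.a + dst.rgb * (1 - src.a)
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA)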

After the above steps, OpenGL can display the final graphics on the screen.

In the OpenGL drawing process, the parts we can program are the Vertex Shader and the Fragment Shader; these are also the two shaders that the rendering process requires.

Vertex Shader processes data input from the client, applies transformations, and performs other types of mathematical operations to calculate lighting effects, displacements, color values, etc. For example, to render a triangle with 3 vertices, the Vertex Shader will be executed 3 times, once for each vertex.

Once the three vertices have been assembled, the triangle is rasterized fragment by fragment. Each fragment is filled by executing the Fragment Shader, which outputs the final color value we see on the screen.

When drawing, we rely on many OpenGL state variables: the current color, the view and projection transforms, line and polygon stipple patterns, the polygon drawing mode, pixel-packing conventions, light positions and characteristics, material properties of the objects drawn, and so on. Each state (or mode) can be set and remains in effect until it is modified again.

For example, you can set the current color to white, red, or any other color, and every object drawn afterwards uses that color until the current color is changed again. Many state variables representing modes are toggled with glEnable() and glDisable(). This is why we say OpenGL is a state machine.
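A small sketch of this state-machine behavior with GLES20 (the values are arbitrary):

 GLES20.glClearColor(1f, 0f, 0f, 1f)         //the current clear color is now red...
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)  //...and every subsequent clear uses it
GLES20.glEnable(GLES20.GL_SCISSOR_TEST)     //the scissor test stays enabled
//...draw calls here are clipped to the scissor box...
GLES20.glDisable(GLES20.GL_SCISSOR_TEST)    //until it is explicitly disabled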

Because OpenGL performs a series of operations in a fixed order during rendering, like an assembly line, the OpenGL drawing process is called the rendering pipeline, which comes in fixed and programmable variants. We use the programmable pipeline, in which the positions, colors, and texture coordinates of vertices, how the data is modified once textures are passed in, and how the generated fragments produce their results can all be controlled freely.

The following briefly introduces the pipeline and GLSL (the shading language), which is essential to the programmable pipeline.

Pipeline: the rendering pipeline takes as input the descriptive data of the 3D objects to be rendered (for example: vertex coordinates, vertex colors, vertex textures), passes it through a series of transformation and rendering stages, and outputs one frame of the final image. Simply put, raw graphics data travels through a pipeline, undergoes various transformations and processing steps, and finally appears on the screen. Pipelines are divided into fixed pipelines and programmable pipelines.

Fixed pipeline: when rendering an image, we can only invoke a preset series of shading effects, for example through the stock shaders of the GLShaderManager class.

Programmable pipeline: when rendering an image, we can process the data with custom vertex shaders and fragment shaders. Since OpenGL's usage scenarios are so varied, the programmable pipeline lets us handle tasks that the fixed pipeline and its stock shaders cannot.

The OpenGL Shading Language (GLSL) is the language used to write shaders in OpenGL: short custom programs, written by developers, that execute on the GPU (Graphics Processing Unit) and replace parts of the fixed rendering pipeline, making individual pipeline stages programmable. Shaders can read the current OpenGL state, which is passed in through GLSL built-in variables. GLSL is a C-based high-level shading language that avoids the complexity of writing in assembly or hardware-specific languages.

GLSL shader code is divided into two parts: the Vertex Shader and the Fragment Shader.

A shader is an editable program used to implement image rendering in place of the fixed pipeline. The Vertex Shader is mainly responsible for per-vertex geometric calculations, while the Fragment Shader (also called a Pixel Shader) is mainly responsible for computing the fragment colors.

The vertex shader is a programmable processing unit, generally used for per-vertex operations such as transformations (rotation, translation, projection, etc.), lighting, and material application and calculation. It runs once for each vertex, replacing the fixed pipeline's vertex transformation and lighting calculations, and is developed in GLSL. We can implement vertex transformation, lighting, and other features in the shading language according to our own needs, which greatly increases the program's flexibility.

The vertex shader works as follows: the original vertex geometry (vertex coordinates, color, texture coordinates) and other attributes are fed into the vertex shader; the custom shading program produces the transformed vertex positions, which are passed on to the subsequent primitive assembly stage, while the corresponding texture coordinates, colors, and other attributes are passed, after rasterization, to the fragment shader.

The vertex shader's inputs are mainly the attributes, uniforms, samplers, and temporary variables for the vertex being processed; its outputs are mainly the varyings it generates plus some built-in output variables.

Vertex shader example code:

 //Vertex position
attribute vec4 Position;
//Texture coordinates
attribute vec2 TextureCoord;
//Receives the texture coordinates and passes them to the fragment shader
varying vec2 varyTextureCoord;
void main() {
    gl_Position = Position;
    varyTextureCoord = TextureCoord;
}

Fragment shader example code:

 //High precision
precision highp float;
//Receives the texture coordinates from the vertex shader
varying vec2 varyTextureCoord;
//Image texture 1
uniform sampler2D Texture;
//Image texture 2
uniform sampler2D Texture2;
//Transition progress, a float in [0,1]
uniform float progress;
const vec2 direction = vec2(0.0, 1.0);
void main(){
    vec2 p = varyTextureCoord.xy / vec2(1.0).xy;
    vec4 color = mix(texture2D(Texture, varyTextureCoord), texture2D(Texture2, varyTextureCoord), step(1.0 - p.y, progress));
    gl_FragColor = color;
}

3.1.4 Three methods of passing data to OpenGL shaders

The vertex and fragment shaders above use the qualifiers attribute, varying, and uniform in their variable declarations. The following briefly introduces these three types.

attribute

attribute: attribute variables can only be used in the vertex shader. They typically carry per-vertex data such as vertex coordinates, normals, texture coordinates, and vertex colors.

uniform

uniform: a uniform variable is passed into the shader by the external application. It behaves like a constant in C as far as the shader is concerned: the shader can read it but cannot modify it.

varying

varying: varying variables carry values from the vertex shader to the fragment shader, such as a vertex color that needs to be passed on to the fragment shader.

Note: attributes cannot be passed directly to the Fragment Shader; if they are needed there, they must be relayed through the Vertex Shader. Uniforms and texture data can be passed directly to both the Vertex Shader and the Fragment Shader. Which channel to use depends on the requirements.
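As a quick summary of the three channels on the application side, here is a minimal GLES20 sketch (the handle lookups are omitted; aPositionHandle, progressHandle, samplerHandle, vertexBuffer, and textureId are assumed to exist):

 //attribute: per-vertex data, readable only by the vertex shader
GLES20.glEnableVertexAttribArray(aPositionHandle)
GLES20.glVertexAttribPointer(aPositionHandle, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer)
//uniform: a read-only "constant" set by the application, visible to both shaders
GLES20.glUniform1f(progressHandle, 0.5f)
//texture data: bind the texture to a texture unit, then point the sampler uniform at that unit
GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
GLES20.glUniform1i(samplerHandle, 0)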

3.1.5 How to use OpenGL to draw a picture

The above introduced vertex shaders and fragment shaders, as well as how to pass data to an OpenGL program.

Now let's use those building blocks to draw a picture on screen with an OpenGL program, which is also the prerequisite for the picture-carousel transition effect. To OpenGL, drawing a picture is just drawing a texture. Here, for display purposes only, no transformation matrix is used to correct the picture's aspect ratio; the texture is simply stretched over the entire window.

First, define a vertex shader:

 attribute vec4 a_position; //Incoming vertex coordinates
attribute vec2 a_texCoord; //Incoming texture coordinates
varying vec2 v_texCoord; //Texture coordinates passed to the fragment shader
void main()
{
    gl_Position = a_position; //Assign the vertex coordinate to OpenGL's built-in variable
    v_texCoord = a_texCoord; //Pass the incoming texture coordinate to the fragment shader
}

Then define a fragment shader:

 precision mediump float; //Set the default float precision; the texture coordinate is a float vec2
uniform sampler2D u_texture; //Texture
varying vec2 v_texCoord; //Texture coordinates
void main(){
    gl_FragColor = texture2D(u_texture, v_texCoord); //Sample the 2D texture and assign the color to OpenGL's built-in gl_FragColor
}

Here is the code for drawing an image texture using these two shaders on the Android side:

 class SimpleImageRender(private val context: Context) : GLSurfaceView.Renderer {
    //Vertex coordinates
    private val vCoordinates = floatArrayOf(
        -1.0f, -1.0f,
        1.0f, -1.0f,
        -1.0f, 1.0f,
        1.0f, 1.0f
    )
    //Texture coordinates
    private val textureCoordinates = floatArrayOf(
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f
    )
    //OpenGL program id
    var programId = 0
    //Vertex coordinate handle
    var vCoordinateHandle = 0
    //Texture coordinate handle
    var textureCoordinateHandle = 0
    //Texture id
    var textureId = 0

    private val vertexBuffer =
        ByteBuffer.allocateDirect(vCoordinates.size * 4).order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(vCoordinates)

    private val textureBuffer =
        ByteBuffer.allocateDirect(textureCoordinates.size * 4).order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(textureCoordinates)

    override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
        vertexBuffer.position(0)
        textureBuffer.position(0)
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        //Compile and link the OpenGL program from the vertex shader and fragment shader
        programId =
            loadShaderWithResource(context, R.raw.simple_image_vs, R.raw.simple_image_fs)
        //Get the handle of the vertex coordinates
        vCoordinateHandle = GLES20.glGetAttribLocation(programId, "a_position")
        //Get the texture coordinate handle
        textureCoordinateHandle = GLES20.glGetAttribLocation(programId, "a_texCoord")
        //Generate a texture
        val textureIds = IntArray(1)
        GLES20.glGenTextures(1, textureIds, 0)
        if (textureIds[0] == 0) {
            return
        }
        textureId = textureIds[0]
        //Bind the texture
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
        //Wrap mode
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT)
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT)
        //Filter mode
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)

        val bitmap = BitmapFactory.decodeResource(context.resources, R.drawable.scene1)
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0)
        bitmap.recycle()
    }

    override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
        GLES20.glViewport(0, 0, width, height)
    }

    override fun onDrawFrame(gl: GL10?) {
        //Set the clear color; float color components range over [0,1]
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
        //Clear the screen, i.e. the color buffer
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)

        //Use the program
        GLES20.glUseProgram(programId)

        //Enable the vertex attribute array
        GLES20.glEnableVertexAttribArray(vCoordinateHandle)
        //size specifies the number of components per vertex attribute; must be 1, 2, 3, or 4 (e.g. a position may have 3 components (x, y, z) and a color 4 (r, g, b, a))
        //stride specifies the byte offset between consecutive vertex attributes; 0 means the attributes are tightly packed
        //Here size 2 means (x, y) and stride 8 means each vertex spans 2 floats = 8 bytes
        GLES20.glVertexAttribPointer(vCoordinateHandle, 2, GLES20.GL_FLOAT, false, 8, vertexBuffer)

        GLES20.glEnableVertexAttribArray(textureCoordinateHandle)
        GLES20.glVertexAttribPointer(
            textureCoordinateHandle,
            2,
            GLES20.GL_FLOAT,
            false,
            8,
            textureBuffer
        )

        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4)
    }
}

This completes the drawing of a picture.
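A minimal usage sketch (the host Activity is an assumption; only SimpleImageRender comes from the article):

 class SimpleImageActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val glView = GLSurfaceView(this).apply {
            setEGLContextClientVersion(2)  //the shaders above target OpenGL ES 2.0
            setRenderer(SimpleImageRender(context))
            //A static picture only needs to be rendered on demand
            renderMode = GLSurfaceView.RENDERMODE_WHEN_DIRTY
        }
        setContentView(glView)
    }
}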

3.2 Application of OpenGL transition effects

3.2.1 Porting open source transition effects

What is a transition effect? Generally speaking, it is the transition between two video images. In OpenGL, an image transition is actually a switch between two textures. Here I recommend an open-source project, GLTransitions, which collects all kinds of GL transition effects and their GLSL implementations so that developers can easily port them into their own projects.

The GLTransitions project hosts nearly 70 transition effects, ranging from easy to difficult, that can be used directly for image or video transitions. Many of them involve common image-processing techniques such as blending, edge detection, and erosion and dilation.

For anyone who wants to learn GLSL, the project not only helps you get started quickly but also teaches some advanced GLSL image-processing techniques. Highly recommended.

Since GLSL code is portable across platforms, porting GLTransitions effects to mobile devices is relatively simple. Let's take the first transition effect on the website as an example and walk through the general porting process.

First, let's look at the fragment shader code the transition requires, which is the key to the effect. The sign, mix, fract, and step functions used here are GLSL built-ins. Again, for display purposes only, no transformation matrix is used to correct the image's aspect ratio; the image simply fills the entire window.

 uniform vec2 direction; // = vec2(0.0, 1.0)

vec4 transition (vec2 uv) {
    vec2 p = uv + progress * sign(direction);
    vec2 f = fract(p);
    return mix(
        getToColor(f),
        getFromColor(f),
        step(0.0, p.y) * step(p.y, 1.0) * step(0.0, p.x) * step(p.x, 1.0)
    );
}

As we can see, the GLTransitions fragment shader provides the transition effect itself, but some modifications are still needed. Taking the code above as an example, we must declare the transition-progress variable progress (a float between 0 and 1). There are also the two basic ingredients of any transition: the image textures. A transition goes from texture 1 to texture 2, and getFromColor and getToColor are the functions that sample colors from texture 1 and texture 2 respectively. Finally, there is the indispensable main function, which assigns the color computed by our program to gl_FragColor. The fragment shader modified accordingly is as follows:

 precision mediump float;
uniform vec2 direction; // = vec2(0.0, 1.0)
uniform float progress; //Transition progress
uniform sampler2D u_texture0; //Texture 1
uniform sampler2D u_texture1; //Texture 2
varying vec2 v_texCoord; //Texture coordinates
vec4 transition (vec2 uv) {
    vec2 p = uv + progress * sign(direction);
    vec2 f = fract(p);
    return mix(
        texture2D(u_texture1, f),
        texture2D(u_texture0, f),
        step(0.0, p.y) * step(p.y, 1.0) * step(0.0, p.x) * step(p.x, 1.0)
    );
}

void main(){
    gl_FragColor = transition(v_texCoord);
}

Here is the matching vertex shader, which mainly passes through the vertex and texture coordinates. Both were introduced above, so I won't repeat them. The code is as follows:

 attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
void main()
{
    gl_Position = a_position;
    v_texCoord = a_texCoord;
}

Now that we have the two key shader programs, the vertex shader and the fragment shader, a basic transition is in place. As long as we use these two shaders in our program and keep updating the two textures and the transition progress on every frame drawn, the transition will play.

The following is the code logic for drawing, taking Android as an example:

 frameIndex++
GLES20.glUseProgram(programId)

GLES20.glEnableVertexAttribArray(vCoordinateHandle)
GLES20.glVertexAttribPointer(vCoordinateHandle, 2, GLES20.GL_FLOAT, false, 8, vertexBuffer)

GLES20.glEnableVertexAttribArray(textureCoordinateHandle)
GLES20.glVertexAttribPointer(
    textureCoordinateHandle,
    2,
    GLES20.GL_FLOAT,
    false,
    8,
    textureBuffer
)

//Bind the "from" texture to texture unit 0
val uTexture0Handle = GLES20.glGetUniformLocation(programId, "u_texture0")
GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
GLES20.glBindTexture(
    GLES20.GL_TEXTURE_2D,
    imageTextureIds[(frameIndex / transitionFrameCount) % imageNum]
)
GLES20.glUniform1i(uTexture0Handle, 0)

//Bind the "to" texture to texture unit 1
val uTexture1Handle = GLES20.glGetUniformLocation(programId, "u_texture1")
GLES20.glActiveTexture(GLES20.GL_TEXTURE1)
GLES20.glBindTexture(
    GLES20.GL_TEXTURE_2D,
    imageTextureIds[(frameIndex / transitionFrameCount + 1) % imageNum]
)
GLES20.glUniform1i(uTexture1Handle, 1)

val directionHandle = GLES20.glGetUniformLocation(programId, "direction")
GLES20.glUniform2f(directionHandle, 0f, 1f)

//Advance the transition progress (0..1) with the frame index; the uniform name matches the shader above
val progressHandle = GLES20.glGetUniformLocation(programId, "progress")
val progress = (frameIndex % transitionFrameCount) * 1f / transitionFrameCount
GLES20.glUniform1f(progressHandle, progress)
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4)

The above is the basic process of porting a transition effect from the GLTransitions website to Android; iOS is similar and just as convenient.
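On Android, what drives the animation is simply the render mode of the hosting view. A small sketch, assuming the drawing code above lives in a GLSurfaceView.Renderer named transitionRenderer:

 //RENDERMODE_CONTINUOUSLY (the default) calls onDrawFrame on every vsync,
//so frameIndex advances and progress sweeps from 0 to 1 over transitionFrameCount frames
val glView = GLSurfaceView(this).apply {
    setEGLContextClientVersion(2)
    setRenderer(transitionRenderer)
    renderMode = GLSurfaceView.RENDERMODE_CONTINUOUSLY
}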

3.2.2 Realizing Complex Transition Effects

Through the introduction above, we now have a basic understanding of how to use OpenGL to process image transitions. However, the approach so far applies the same transition to every image, which is rather monotonous. The following is one idea for combining different transition effects when compositing transitions across multiple images.

Recall that the port above used only one OpenGL program. If we instead load multiple OpenGL programs and use the appropriate one in each time period, we can combine multiple transition effects with ease.

First, define an IDrawer interface to represent an object that uses one OpenGL program:

 interface IDrawer {
    //Preparation stage: compile the program and load resources
    fun onPrepare()
    //Draw the given (drawer-local) frame
    fun onDraw(frameIndex: Int) {}

    fun onSurfaceChanged(p0: GL10?, width: Int, height: Int) {}
}

Then define a renderer that controls how these IDrawers are used:

 class ComposeRender : GLSurfaceView.Renderer {
    private var frameIndex = 0 //How many frames have been drawn so far
    private var drawersFrames = 0 //The number of frames required for all drawers to draw once
    private val framesPerDrawer = 200 //The number of frames each IDrawer takes to draw, fixed at 200 for now

    //The IDrawers in use
    private val drawers = mutableListOf(
        HelloWorldTransitionDrawer(),
        SimpleTransitionDrawer(),
        PerlinTransitionDrawer(),
    )

    init {
        drawersFrames = drawers.size.times(framesPerDrawer)
    }

    override fun onSurfaceCreated(p0: GL10?, p1: EGLConfig?) {
        //Set the clear color; float color components range over [0,1]
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
        //Clear the screen, i.e. the color buffer
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        drawers.forEach {
            it.onPrepare()
        }
    }

    override fun onSurfaceChanged(p0: GL10?, p1: Int, p2: Int) {
        GLES20.glViewport(0, 0, p1, p2)
        drawers.forEach {
            it.onSurfaceChanged(p0, p1, p2)
        }
    }

    override fun onDrawFrame(p0: GL10?) {
        frameIndex++
        //Clear the screen, i.e. the color buffer
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        val offset = frameIndex % drawersFrames
        val logicFrame = if (offset == 0) 1 else offset
        //Work out which IDrawer the current frame belongs to, and let it draw
        //(strict > on the lower bound so a boundary frame matches exactly one drawer)
        drawers.forEachIndexed { index, iDrawer ->
            if (logicFrame <= (index + 1).times(framesPerDrawer) &&
                logicFrame > index.times(framesPerDrawer)
            ) {
                iDrawer.onDraw(logicFrame - index.times(framesPerDrawer))
            }
        }
    }
}

To keep the demonstration simple, the textures and the duration of each transition (i.e. the number of frames used) are hard-coded. For example, suppose there are four pictures numbered 1, 2, 3, and 4, and we define three IDrawers A, B, and C: A uses pictures 1 and 2, B uses pictures 2 and 3, and C uses pictures 3 and 4. Each transition takes 200 frames, which yields a combined transition across three OpenGL programs, as sketched after this paragraph.
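To make the scheduling concrete, here is a small worked example (a hypothetical helper, simplified from ComposeRender's boundary logic):

 //With framesPerDrawer = 200 and 3 drawers, one full cycle is 600 frames:
//frames 1..200 go to A (pictures 1→2), 201..400 to B (2→3), 401..600 to C (3→4)
fun drawerIndexForFrame(logicFrame: Int, framesPerDrawer: Int = 200): Int =
    (logicFrame - 1) / framesPerDrawer // 0 = A, 1 = B, 2 = C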

Here is one of the IDrawer implementation classes:

 class HelloWorldTransitionDrawer : IDrawer {
    private val imageNum = 2 //Two image textures are needed

    //The number of frames the transition takes, fixed at 200 here
    private val transitionFrameCount = 200
    private val vCoordinates = floatArrayOf(
        -1.0f, -1.0f,
        1.0f, -1.0f,
        -1.0f, 1.0f,
        1.0f, 1.0f
    )
    private val textureCoordinates = floatArrayOf(
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f
    )
    var programId = 0
    var vCoordinateHandle = 0
    var textureCoordinateHandle = 0
    var imageTextureIds = IntArray(imageNum)
    private val vertexBuffer =
        ByteBuffer.allocateDirect(vCoordinates.size * 4).order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(vCoordinates).position(0)

    private val textureBuffer =
        ByteBuffer.allocateDirect(textureCoordinates.size * 4).order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(textureCoordinates).position(0)

    override fun onPrepare() {
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
        programId =
            loadShaderWithResource(
                MyApplication.getApp(),
                R.raw.helloworld_transition_vs,
                R.raw.helloworld_transition_fs
            )
        vCoordinateHandle = GLES20.glGetAttribLocation(programId, "a_position")
        textureCoordinateHandle = GLES20.glGetAttribLocation(programId, "a_texCoord")
        //Generate and fill the two image textures
        loadTextures(intArrayOf(R.drawable.scene1, R.drawable.scene2))
    }

    override fun onDraw(frameIndex: Int) {
        //Use the program
        GLES20.glUseProgram(programId)

        //Enable the vertex attribute array
        GLES20.glEnableVertexAttribArray(vCoordinateHandle)
        //size 2 means (x, y); stride 8 means each vertex spans 2 floats = 8 bytes (see SimpleImageRender above)
        GLES20.glVertexAttribPointer(vCoordinateHandle, 2, GLES20.GL_FLOAT, false, 8, vertexBuffer)

        GLES20.glEnableVertexAttribArray(textureCoordinateHandle)
        GLES20.glVertexAttribPointer(
            textureCoordinateHandle,
            2,
            GLES20.GL_FLOAT,
            false,
            8,
            textureBuffer
        )

        val uTexture0Handle = GLES20.glGetUniformLocation(programId, "u_texture0")
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, imageTextureIds[0])
        GLES20.glUniform1i(uTexture0Handle, 0)

        val uTexture1Handle = GLES20.glGetUniformLocation(programId, "u_texture1")
        GLES20.glActiveTexture(GLES20.GL_TEXTURE1)
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, imageTextureIds[1])
        GLES20.glUniform1i(uTexture1Handle, 1)

        val directionHandle = GLES20.glGetUniformLocation(programId, "direction")
        GLES20.glUniform2f(directionHandle, 0f, 1f)

        //Advance the transition progress (0..1) with the frame index
        val progressHandle = GLES20.glGetUniformLocation(programId, "progress")
        val progress = (frameIndex % transitionFrameCount) * 1f / transitionFrameCount
        GLES20.glUniform1f(progressHandle, progress)
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4)
    }

    private fun loadTextures(resIds: IntArray) {
        if (resIds.isEmpty()) return
        //Generate the two textures in one call
        GLES20.glGenTextures(2, imageTextureIds, 0)
        resIds.forEachIndexed { index, resId ->
            //Bail out if texture generation failed
            if (imageTextureIds[index] == 0) return
            GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + index)
            //Bind the texture
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, imageTextureIds[index])
            //Wrap mode
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT)
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT)
            //Filter mode
            GLES20.glTexParameteri(
                GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER,
                GLES20.GL_LINEAR
            )
            GLES20.glTexParameteri(
                GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER,
                GLES20.GL_LINEAR
            )

            val bitmap = BitmapFactory.decodeResource(MyApplication.getApp().resources, resId)
            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0)
            bitmap.recycle()
        }
    }
}

This way you can combine multiple transitions.

4. Conclusion

For graphics processing on mobile devices, OpenGL is widely favored for its efficiency and broad compatibility.

This article briefly introduced OpenGL's basic concepts and drawing process, to give a preliminary understanding of how OpenGL renders. Within that process, what matters most to us as developers is writing the vertex and fragment shaders in GLSL. When using OpenGL for image-carousel transitions, the key is writing the shader the transition requires; for that we can draw on the open-source effects on the GLTransitions website, which offers a wealth of transition effects and shader code that ports easily to the client.

For complex transitions, i.e. combinations of multiple transition effects, this article also offered an approach: combine multiple OpenGL programs, loading and using the appropriate program at the appropriate time.

Limited by space, this article has shared only some of our thinking and practice in developing video transition effects with OpenGL; we hope it proves helpful.
