With the advent of 3D imaging technology, what changes will financial technology usher in?

Image source: Visual China

  • On July 30, 2018, the Xiaomi Mi 8 Explorer Edition, equipped with a 3D depth-of-field camera, went on sale at 7:30 that evening and sold out within one minute.
  • On June 27, 2018, vivo unveiled a TOF-based 3D depth-of-field camera at MWC Shanghai 2018, claiming it could outperform the iPhone X.
  • On June 19, 2018, the OPPO Find X was released, with its front-facing 3D structured-light depth-of-field camera as one of its headline features.
  • On September 13, 2017, Apple released the iPhone X; its Face ID facial recognition, based on a 3D structured-light depth-of-field camera, marked the entry of phone front cameras into the 3D era.

With the successive release of new phones equipped with 3D depth-of-field modules, readers can readily sense that 3D imaging technology has entered a period of rapid commercial growth in consumer electronics.

This article introduces the working principle and classification of 3D imaging technology, surveys the market for 3D imaging sensors, and explores the impact of this technology on the financial technology field.

Working principle and classification of 3D imaging

By working principle, 3D imaging technology falls into two broad categories: passive and active.

Passive vision imitates the binocular vision of living organisms. It consists of at least two image sensors and calculates depth of field from the position of the object in each image sensor and the relative physical positions of the two sensors, based on geometric (triangulation) relationships. Note that depth of field and distance are different concepts, as shown in Figure 1 below.

The core of a binocular vision system is associating the coordinates of the same observation point across the two image sensors, as shown in the left panel of Figure 1. In practice, however, factors such as the ambient environment and the surface texture of the photographed object make automatic feature-point matching algorithmically complex, and the matching accuracy directly determines the accuracy of the depth-of-field calculation and hence the overall performance of the system.
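
Once corresponding points have been matched, the depth calculation itself is simple geometry; the difficulty lies almost entirely in the matching step. As a minimal sketch (my own illustration in Python, assuming an idealized rectified stereo pair with focal length f in pixels, baseline B, and disparity d, so that Z = f·B/d):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a matched point in an idealized rectified stereo pair: Z = f * B / d.

    disparity_px   : disparity of the matched feature point, in pixels
    focal_length_px: focal length expressed in pixels
    baseline_m     : physical distance between the two image sensors, in meters
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # Points at (near-)infinity have zero disparity; report them as inf.
    return np.where(disparity_px > 0,
                    focal_length_px * baseline_m / np.maximum(disparity_px, 1e-9),
                    np.inf)

# Example: 700 px focal length, 6 cm baseline, 35 px disparity -> 1.2 m
print(depth_from_disparity(35, 700, 0.06))
```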

The active vision system effectively solves this problem due to its different working principle.

The active vision system measures depth of field by actively projecting its own artificial light source onto the observed object. Depending on the projected light source and the depth-measurement principle, active vision is divided into three categories: the triangulation ranging method, the structured light method, and the time-of-flight method, as shown in Figure 2 below. Each is introduced in detail below:

Triangulation

The triangulation method computes depth of field from the spatial positions of the projected light source, the observed object, and the receiving image sensor, using triangle geometry. It is the basic underlying algorithm of many active 3D depth-of-field vision systems.
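
As a rough illustration (not from the original article), the sketch below applies the law of sines to the triangle formed by the light emitter, the observed point, and the image sensor; the baseline and the two angles are assumed to be known from calibration.

```python
import math

def triangulation_depth(baseline_m, alpha_rad, beta_rad):
    """Perpendicular distance from the emitter-sensor baseline to the object.

    The emitter, the image sensor and the observed point form a triangle:
      baseline_m : distance between the light emitter and the image sensor
      alpha_rad  : angle between the baseline and the emitted ray
      beta_rad   : angle between the baseline and the ray seen by the sensor
    Law of sines gives depth = b * sin(alpha) * sin(beta) / sin(alpha + beta).
    """
    return (baseline_m * math.sin(alpha_rad) * math.sin(beta_rad)
            / math.sin(alpha_rad + beta_rad))

# Example: 8 cm baseline, 75-degree emission angle, 80-degree observation angle
print(triangulation_depth(0.08, math.radians(75), math.radians(80)))  # ~0.18 m
```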

Structured light

The structured light method can be regarded as a remedy for the feature-point matching problem of passive vision systems. As shown in Figure 3 below, "structured light" means that the active light source is projected onto the measured object with a specific coded pattern, for example a dense, uniform grating. Because the bumps and hollows on the surface of the measured object lie at different depths, the grating stripes reflected back to the image sensor are deformed; this process can be regarded as the depth information of the object surface modulating the grating stripes.

By comparing the distorted grating pattern received by the image sensor with the original reference pattern, the depth information of each observation point can be demodulated, forming a depth point cloud, or depth frame.
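
To make the "modulation/demodulation" idea concrete, here is a simplified sketch of classical four-step phase shifting, one common way of recovering the phase that the surface depth imprints on projected sinusoidal fringes. It is illustrative only; real systems add phase unwrapping and projector-camera calibration.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover the wrapped fringe phase from four captured images.

    i0..i3 are images of the same scene under sinusoidal fringes shifted
    by 0, 90, 180 and 270 degrees. The surface depth modulates the local
    phase, which is recovered per pixel as:
        phi = atan2(I3 - I1, I0 - I2)
    The result is wrapped to (-pi, pi]; a real system then unwraps it and
    maps phase to depth using the projector-camera calibration.
    """
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)
```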

It should be noted that the structured light method can be subdivided by projection type into point, line, and area projection, and by pattern coding into time-multiplexed coding, spatial coding, and direct coding (such as grayscale coding). These are different technical means of improving the demodulation and interference resistance of the coded pattern and of obtaining the depth point cloud faster, as shown in Table 1 below. At present, structured light on mobile terminals mainly uses static coding.
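
For time-multiplexed coding, a frequently used choice is a Gray-code stripe sequence, because neighbouring stripe indices differ by a single bit, which limits decoding errors at stripe boundaries. The short sketch below (not tied to any particular product) generates such a pattern stack.

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Generate n_bits binary stripe patterns (one row per pattern) for a
    projector of the given width, using Gray code so that neighbouring
    columns differ in exactly one bit."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                              # binary -> Gray code
    bits = (gray[None, :] >> np.arange(n_bits)[:, None]) & 1
    return bits.astype(np.uint8)                           # shape: (n_bits, width)

patterns = gray_code_patterns(width=1024, n_bits=10)
print(patterns.shape)  # (10, 1024): 10 stripe images encode 1024 projector columns
```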

Time of flight

Time of flight (TOF) calculates depth of field or distance by measuring the time difference between the emission and the reception of the projected light signal.

TOF can be further divided into pulsed-wave ranging and continuous-wave ranging, according to how the time difference is measured. Pulsed-wave ranging directly measures the time between emitting and receiving the pulse signal, while continuous-wave ranging continuously emits a modulated wave and derives the time difference indirectly from the phase difference between the received and transmitted waves, as shown in Figure 4 below.
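
The two variants reduce to two simple relations: for a pulse, d = c·Δt/2; for a continuous wave modulated at frequency f, d = c·Δφ/(4πf), with an unambiguous range of c/(2f). A minimal, purely illustrative sketch:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def pulse_tof_distance(round_trip_time_s):
    """Pulsed TOF: the light travels to the object and back, so d = c * dt / 2."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2

def cw_tof_distance(phase_shift_rad, modulation_freq_hz):
    """Continuous-wave TOF: the phase shift of the returned modulation
    encodes the round trip, d = c * dphi / (4 * pi * f)."""
    return SPEED_OF_LIGHT * phase_shift_rad / (4 * math.pi * modulation_freq_hz)

def unambiguous_range(modulation_freq_hz):
    """Beyond c / (2f) the phase wraps around and the distance becomes ambiguous."""
    return SPEED_OF_LIGHT / (2 * modulation_freq_hz)

# Example: a 20 MHz modulation gives an unambiguous range of ~7.5 m, which is
# one reason mobile continuous-wave TOF targets short-range scenes.
print(unambiguous_range(20e6))    # ~7.49 m
print(pulse_tof_distance(20e-9))  # a 20 ns round trip corresponds to ~3 m
```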

Specifically, pulsed-wave ranging, which mainly uses laser sources, is mostly applied in large-scale, long-distance scenarios such as industrial surveying and mapping. On mobile devices, given the usage scenarios and power constraints, the target is typically within about ten meters of the lens, so the TOF adopted on mobile terminals is expected to be mainly continuous-wave phase ranging.

At present, structured light and TOF are the mainstream technical implementation solutions for active depth of field vision systems.

Structured light technology is relatively mature and has a high degree of component integration. Currently, Apple iPhone X has adopted structured light technology. Since TOF directly measures depth of field/distance, it has inherent technical advantages in response speed and range. With the support of international leading companies such as Google and Microsoft, the technology has a bright future.

It should be noted that, as they evolve, active 3D depth vision systems have begun to borrow the strengths of passive binocular systems: by introducing two image sensors (active stereo vision) and combining the known spatial relationship between the sensors with the pattern projected by the light source emitter, higher-precision depth information can be obtained. The D400 series depth camera modules released by Intel are representative of this approach.

3D Imaging and Sensing Market Overview

The preceding section introduced the working principle of 3D depth-of-field imaging systems. So, is there real market demand for 3D imaging technology, and how long will it take to become widespread? The following dimensions offer an answer.

VCSEL Market Growth

As the core component of 3D depth-of-field imaging systems, the VCSEL laser's market trend in consumer electronics indirectly reflects the demand for, and future development of, 3D depth-of-field sensing.

As shown in Figure 5 below, Yole forecasts that, following the introduction of a structured-light active 3D imaging system in the iPhone X in 2017, the overall VCSEL market will grow rapidly at a CAGR of 48%. In consumer electronics, the VCSEL market will climb from US$165 million in 2017 to US$3.1 billion in 2023, an increase of nearly 20 times.

3D Imaging Market Growth

According to Yole's forecast, the overall growth rate of the 3D imaging and sensing market is a CAGR of 44%. In the consumer electronics field, it will grow at a high CAGR of 82%, reaching US$13.8 billion by 2023, as shown in Figure 6 below.

Penetration of 3D depth-of-field cameras in mobile phones

Thanks to its technical advantages, the active vision system will gradually replace the passive binocular vision system and be progressively adopted by mobile terminal manufacturers.

In the 3D imaging field, based on the characteristics of structured light and TOF, the fairly broad consensus is that structured-light 3D imaging will be adopted for the front cameras of mobile phones, while TOF has better prospects for rear cameras.

Apple currently leads in high-end smartphones and occupies the supply channels of first-tier suppliers in the 3D imaging industry chain; it is expected to stay 1-2 years ahead of Android phone manufacturers in deploying 3D depth-of-field cameras, as shown in Figure 7 below.

According to TrendForce's analysis, the global penetration rate of 3D sensing in smartphones will rise from 2.1% in 2017 to 13.1% in 2018, with Apple remaining the main adopter.

It is also estimated that the total production of smartphones equipped with 3D sensing modules will reach 197 million in 2018, of which iPhones account for 165 million. In addition, the market output value of 3D sensing modules in 2018 is estimated to be about US$5.12 billion, of which iPhones account for 84.5%. It is expected that by 2020, the overall output value will reach US$10.85 billion, and the compound annual growth rate between 2018 and 2020 will reach 45.6%.
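
As a quick sanity check on these growth figures (a back-of-envelope recomputation of the cited numbers, not part of the original report), the CAGR can be recovered from the endpoint values:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# 3D sensing module output value: US$5.12B (2018) -> US$10.85B (2020)
print(f"{cagr(5.12, 10.85, 2):.1%}")   # ~45.6%, matching the cited figure
# Consumer-electronics VCSEL market: US$0.165B (2017) -> US$3.1B (2023)
print(f"{cagr(0.165, 3.1, 6):.1%}")    # roughly 63% per year
```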

To sum up, the author believes that active 3D depth-of-field imaging technology will become widespread within the next 2-3 years; if upstream suppliers can keep pace with Android manufacturers' shipment volumes and supply them with high-quality chips, modules, and solutions, adoption will be even faster.

The impact of 3D imaging technology on financial technology

Faced with the arrival of 3D imaging technology, what changes will occur in the financial technology field? The author believes that fintech will undergo a new round of technology upgrades in at least the following four areas, and will spawn a wider range of application scenarios in the future.

Facial recognition technology upgrade

It is reported that the face recognition market will grow by 166.6% between 2015 and 2020, reaching roughly US$2.4 billion by 2020. Current 2D face recognition lacks depth information and therefore loses feature information during planar projection.

3D imaging technology retains the original effective information to the greatest extent through three-dimensional modeling, and can provide higher recognition accuracy and faster face recognition speed.

At the same time, the active vision system uses infrared as its light source, which effectively eliminates the influence of ambient light on facial recognition. 3D face recognition also has stronger anti-spoofing capability: based on the 3D model of the face, it can effectively tell whether the subject is a real person or a photo.
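
To make the anti-spoofing idea concrete, here is a deliberately naive sketch (my own illustration, not a production algorithm): a flat photo or screen held up to the camera yields a face region whose depth values are nearly planar, while a real face shows centimetre-scale relief around the nose, eye sockets and cheeks.

```python
import numpy as np

def looks_like_flat_photo(face_depth_mm, relief_threshold_mm=8.0):
    """Very naive spoof check on a depth map of the detected face region.

    Fit a plane to the depth values and look at the residual relief;
    a printed photo or a screen is close to planar, a real face is not.
    `face_depth_mm` is an HxW array of depth values in millimetres.
    """
    h, w = face_depth_mm.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, face_depth_mm.ravel(), rcond=None)
    residual = face_depth_mm.ravel() - A @ coeffs
    return residual.std() < relief_threshold_mm  # True -> suspiciously flat
```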

Liveness Detection

The infrared emitter and infrared image sensor used in active 3D imaging can work together to detect whether the subject is a live person, a capability that traditional 2D vision systems and passive binocular vision systems lack.

VCSEL lasers currently use mainly two wavelength bands, 850 nm and 940 nm. Thanks to its stronger resistance to ambient-light interference and longer effective range, the 940 nm band has become the band of choice for the new generation of VCSELs (see Figure 8 below).

The 940 nm band also happens to be a suitable infrared band for heart-rate monitoring and blood-oxygen measurement. It can therefore be expected that liveness detection algorithms based on active 3D imaging will gradually see wide use in the financial technology field.
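
As a simplified illustration of how such an infrared signal could be used (a sketch of the general remote photoplethysmography idea, not any vendor's algorithm), the mean IR intensity over the face region can be tracked frame by frame and its dominant frequency taken as a heart-rate estimate:

```python
import numpy as np

def estimate_heart_rate_bpm(ir_intensity, fps):
    """Estimate heart rate from a 1-D trace of mean facial IR intensity.

    ir_intensity : one sample per frame (e.g. mean 940 nm intensity over
                   the detected face region)
    fps          : camera frame rate in frames per second
    The pulse shows up as a weak periodic component; we remove the mean
    and pick the strongest spectral peak in the 0.7-4 Hz band
    (42-240 beats per minute).
    """
    signal = np.asarray(ir_intensity, dtype=float)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Example: a synthetic 72 bpm (1.2 Hz) pulse sampled at 30 fps for 10 seconds
t = np.arange(0, 10, 1 / 30)
fake_trace = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(estimate_heart_rate_bpm(fake_trace, fps=30))  # ~72
```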

Micro-expression lie detection

Micro-expression recognition based on 2D imaging will likewise see a technology upgrade; combined with liveness detection, micro-expression recognition is expected to find wider application.

AR 3D Reconstruction

3D measurement and 3D reconstruction with depth information will better serve new retail scenarios. For example, in smart tailoring, virtual shopping, and furniture and interior decoration, a piece of simulated furniture can be placed in a position constrained by the real dimensions of the surrounding space thanks to the depth information, producing a strong sense of realism.
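
Underlying such AR placement is the conversion of each depth frame into a 3D point cloud through the camera intrinsics; a sketch under a simple pinhole-camera assumption (not a specific SDK) is shown below.

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth frame into a 3-D point cloud.

    depth_m : HxW array of depth values in metres (0 = no measurement)
    fx, fy  : focal lengths in pixels
    cx, cy  : principal point in pixels
    Returns an Nx3 array of (X, Y, Z) points under a pinhole camera model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy
    """
    h, w = depth_m.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.dstack([x, y, z]).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth measurement
```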

It can be said that the arrival and gradual popularization of 3D imaging technology is bound to give rise to new applications and scenarios. Let us look forward to the dividends brought by this new generation of technology.

Author: Wang Yuan, Senior Researcher of Internet of Things Laboratory, Suning Financial Research Institute
