The Masters Behind the Foundations of Artificial Intelligence

In 1955, John McCarthy, then an assistant professor of mathematics at Dartmouth College who would later found the Stanford AI Lab, coined the term "artificial intelligence" in the proposal for what became the 1956 Dartmouth Conference.

Since AlphaGo, you have probably been dazzled by the constant stream of news about artificial intelligence. And yet, after all of it, you may still feel you don't really know the field. Which pioneers laid the foundations of machine vision, deep learning, and speech recognition? Who are they, and what did they do? As the saying goes, however much things change, the essentials stay the same. Technology and knowledge are handed down from mentor to student, generation after generation, and refined along the way. As a research field with some sixty years of history, artificial intelligence also has an "origin" that can be traced.

The Massachusetts Institute of Technology (MIT) Artificial Intelligence Laboratory and the Stanford Artificial Intelligence Laboratory, for example, were among the earliest institutions to conduct AI research. In the discipline classification of the formal education system, AI is divided into two secondary disciplines, computer vision and machine learning, and many technologies in the field are closely interrelated.

Computer Vision

From feature descriptors to deep learning, computer vision has gone through two decades of vigorous development. In the past two years in particular, with the achievements of Facebook, Microsoft, and Google in artificial intelligence, everyone has seen the power of deep convolutional neural networks. Compared with other deep learning architectures, convolutional neural networks give better results in image and speech recognition. So what did object recognition technology look like before this? The progress of computer vision is inseparable from the work of the researchers below:
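
As a rough illustration (not part of the original article), here is a minimal sketch of the kind of deep convolutional network the paragraph refers to, written in Python and assuming PyTorch as a dependency; the TinyCNN name, the layer sizes, and the 28x28 grayscale input are illustrative choices only.

```python
# Minimal sketch of a small convolutional network for image classification.
# Layer sizes and the 28x28 grayscale input are arbitrary illustrative choices.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Two convolution blocks followed by a linear classifier."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Example: classify a batch of 4 grayscale 28x28 images.
logits = TinyCNN()(torch.randn(4, 1, 28, 28))
print(logits.shape)  # torch.Size([4, 10])
```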

Computer Vision Expert Family Tree

The founding father of computer vision is David Marr. Starting from the perspective of computer science and integrating mathematics, psychophysics, and neurophysiology, he created a computational theory of human vision that changed the face of vision research. His students and colleagues carried his theory forward. Today, computer vision, computer graphics, and machine learning are converging, and the major technologies are interconnected and inseparable. So how does machine learning, the most popular of them, give rise to all kinds of curious robots and artificial intelligence programs?

Machine Learning

Generally speaking, the representative figures in machine learning include Geoffrey Hinton, Yann LeCun, Tom Mitchell, and others. Although they now work at different institutions, they are still linked by clear mentor-student relationships (PhD advising, postdoc supervision, and co-supervision), as shown in the figure:

Initially, machine learning was divided into two major schools: connectionism and symbolism. Later, Pedro Domingos, a professor at the University of Washington, described five schools of machine learning and their representatives in an ACM webinar last year: the symbolists, connectionists, evolutionaries, Bayesians, and analogizers.

Symbolists

Representatives:

Symbolism is an approach to simulating intelligence based on logical reasoning, also known as logicism. Its algorithms originate in logic and philosophy, and they reach conclusions through deduction and inverse deduction over symbols. For example, deduction fills in the answer to 2 + 2 = ?, while inverse deduction infers the unknown term in 2 + ? = 4.

Most early AI researchers belonged to this school. It holds that the basic unit of human cognition and thought is the symbol, and that cognition is a process of operating on symbolic representations. Humans can be viewed as a physical symbol system, and so can computers; therefore computers can simulate intelligent human behavior by using symbolic operations to model human cognitive processes.

In short, the symbolist view can be summed up as "cognition is computation."
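
To make the symbolist idea of deduction over symbols concrete, here is a minimal sketch (not from the article) of forward-chaining rule inference in Python; the facts and rules are invented purely for illustration.

```python
# Minimal sketch of symbolist-style inference: forward chaining over
# if-then rules expressed as symbols. Facts and rules are invented.

# Each rule: (set of premises, conclusion)
RULES = [
    ({"Socrates is a human"}, "Socrates is mortal"),
    ({"Socrates is mortal", "Socrates is a philosopher"}, "some philosophers are mortal"),
]

def forward_chain(facts):
    """Apply rules whose premises are satisfied until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"Socrates is a human", "Socrates is a philosopher"}))
```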

Connectionists

Representatives:

According to Wikipedia, connectionism is a theory that integrates cognitive psychology, artificial intelligence, and the philosophy of mind. It models mental or behavioral phenomena as the emergent behavior of networks of interconnected simple units. Connectionism comes in many forms, but the most common form uses neural network models.

Artificial neurons

The central principle of connectionism is to describe psychological phenomena using interconnected networks of simple units. The form of the connections and of the units can vary from model to model. For example, the units of a network could represent neurons and the connections synapses; in another model, each unit might represent a word and each connection an indication of semantic similarity between words.

Back Propagation Algorithm Demonstration

Neural networks are the dominant form of connectionist model today, and today's popular deep learning methods are an extension of this school.

Google's cat-recognizing neural network
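
As a concrete, much simplified illustration of the backpropagation algorithm shown in the demonstration above, here is a sketch (not from the article) of a tiny network learning XOR with NumPy; the hidden-layer size, learning rate, and iteration count are arbitrary illustrative choices.

```python
# Minimal backpropagation sketch: a 2-4-1 network learning XOR with NumPy.
# Hidden size, learning rate, and iteration count are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass
    h = sigmoid(X @ W1)                     # hidden activations
    out = sigmoid(h @ W2)                   # network output

    # Backward pass: gradient of squared error w.r.t. each weight matrix
    d_out = (out - y) * out * (1 - out)     # delta at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)      # delta propagated back to the hidden layer

    W2 -= 0.5 * h.T @ d_out                 # gradient-descent updates
    W1 -= 0.5 * X.T @ d_h

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # approaches [[0], [1], [1], [0]]
```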

Evolutionaries

Representatives:

According to Airbnb engineer Zhu Yun, evolutionism draws its inspiration from biological evolution and favors genetic algorithms and genetic programming. For example, the "Starfish Robot" developed by Josh Bongard of the University of Vermont on the basis of evolutionary theory can "sense" the parts of its own body through internal simulation and continuously model itself, so that even without external programming it can learn to walk on its own.

Genetic Algorithms

Genetic Programming

Starfish Robot
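
To make the genetic-algorithm idea mentioned above concrete, here is a minimal sketch (not from the article) that evolves bit strings toward an all-ones target; the population size, mutation rate, and fitness function are arbitrary illustrative choices.

```python
# Minimal genetic-algorithm sketch: evolve bit strings toward all ones ("OneMax").
# Population size, mutation rate, and generation count are arbitrary choices.
import random

random.seed(0)
GENOME_LEN, POP_SIZE, MUTATION_RATE = 20, 30, 0.02

def fitness(genome):
    return sum(genome)  # number of 1-bits; the maximum is GENOME_LEN

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == GENOME_LEN:
        break                               # a perfect genome has evolved
    parents = population[: POP_SIZE // 2]   # keep the fitter half (truncation selection)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(generation, fitness(population[0]))
```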

Bayesians

Representatives:

In Bayesian decision making, one assigns subjective probability estimates to unknown states when information is incomplete, then uses Bayes' formula to revise those probabilities, and finally combines the revised probabilities with expected values to choose the best decision. The basic idea is: given the class-conditional probability densities and the prior probabilities, use Bayes' formula to obtain the posterior probabilities, and decide according to which posterior probability is largest. The most familiar application of such probability-based Bayesian methods is spam filtering.

Probabilistic Reasoning
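
As a toy illustration of the spam-filtering example above, here is a minimal naive Bayes sketch in Python (not from the article); the training messages are invented, and add-one (Laplace) smoothing is an assumed implementation choice.

```python
# Toy naive Bayes spam filter: pick the class whose unnormalized posterior,
# P(class) * product of P(word | class), is largest. Training data is invented.
import math
from collections import Counter

train = [
    ("win a free prize now", "spam"),
    ("cheap prize click now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {word for counts in word_counts.values() for word in counts}

def log_posterior(text, label):
    """Unnormalized log posterior of `label` given `text`, with add-one smoothing."""
    total = sum(word_counts[label].values())
    score = math.log(class_counts[label] / sum(class_counts.values()))  # log prior
    for word in text.split():
        score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
    return score

message = "free prize tomorrow"
print(max(("spam", "ham"), key=lambda c: log_posterior(message, c)))  # prints "spam"
```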

Analogizers

Representatives:

In Domingos' taxonomy, the analogizers learn by recognizing similarities between new situations and ones already seen; nearest-neighbor methods and kernel machines such as the support vector machine are their signature algorithms. Separately, Felix Hill, a computer scientist at the University of Cambridge, argues that recent progress in deep-learning AI systems can be framed in terms of behaviorism and cognitivism: behaviorism, as the name suggests, focuses on outward behavior and ignores the role of the brain and nervous system, while cognitivism focuses on the mental processes that underlie behavior.
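
As a minimal illustration of learning by analogy, here is a sketch (not from the article) of one-nearest-neighbor classification in Python; the example points and labels are invented.

```python
# Minimal analogizer-style sketch: 1-nearest-neighbor classification.
# A new point gets the label of the most similar (closest) known example.
# The example points and labels are invented for illustration.
import math

labeled_points = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.2), "dog"),
    ((4.8, 5.5), "dog"),
]

def nearest_neighbor(query):
    """Return the label of the closest labeled point by Euclidean distance."""
    return min(labeled_points, key=lambda item: math.dist(query, item[0]))[1]

print(nearest_neighbor((1.1, 0.9)))  # prints "cat"
print(nearest_neighbor((5.3, 4.9)))  # prints "dog"
```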

Pedro Domingos concluded that although the five schools each have their own strengths, what we really need now is a single algorithm that can solve all of these problems in a unified way.

From: Leifeng.com
