The 10 greatest algorithms of the 20th century

Reference: The Best of the 20th Century: Editors Name Top 10 Algorithms.

By Barry A. Cipra. Address: http://www.uta.edu/faculty/rcli/TopTen/topten.pdf.

Several algorithm masters who invented the top ten algorithms

1. 1946 Monte Carlo Method

[1946: John von Neumann, Stan Ulam, and Nick Metropolis, all at the Los Alamos Scientific Laboratory, cook up the Metropolis algorithm, also known as the Monte Carlo method.]

In 1946, three scientists at the Los Alamos National Laboratory, John von Neumann, Stan Ulam, and Nick Metropolis, jointly invented what is now called the Monte Carlo method.

Its specific definition is:

Draw a square with a side length of one meter on the ground, and inside it draw an irregular shape with chalk.

Now we want to compute the area of this irregular shape. How do we do it?

The Monte Carlo method tells us: scatter N soybeans (N a large natural number) uniformly at random over the square,

then count how many soybeans land inside the irregular shape, say M of them.

The area of the strange shape is then approximately M/N square meters, and the larger N is, the more accurate the estimate.

Here we assume the beans all lie in the same plane and do not overlap each other. (Scattering soybeans is just a metaphor for uniform random sampling.)
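The bean-scattering idea is easy to sketch in code. As an illustration (the "irregular shape" here is assumed, for the sake of a checkable example, to be the region under the curve y = x^2 inside the unit square, whose exact area is 1/3):

```python
import random

def monte_carlo_area(inside, n=100_000, seed=0):
    """Scatter n random points ("soybeans") in the unit square and
    return the fraction that lands inside the shape: an estimate of
    the shape's area."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if inside(rng.random(), rng.random()))
    return hits / n

# The shape: everything below the parabola y = x*x (exact area 1/3).
area = monte_carlo_area(lambda x, y: y < x * x)
print(area)  # close to 1/3
```

Any shape works, as long as you can test whether a point is inside it; that is the whole trick.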

Monte Carlo methods can be used to approximate pi:

Have the computer repeatedly generate two random real numbers between 0 and 1 and check whether the resulting point lies within the unit circle.

Generate a large number of such random points and count how many fall inside the circle versus the total. Since the ratio of the area of the inscribed circle to the area of the square is PI:4 (thanks to netizen Qilihe Chuncai for pointing this out), the fraction of points inside, multiplied by 4, approximates PI.

The more random points you take, the closer the result gets to pi, though convergence is slow: even with 10 to the power of 9 random points, the result typically matches pi only in roughly the first 4 digits.
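The pi estimate described above can be sketched in a few lines (the point count and seed are arbitrary choices for illustration):

```python
import random

def estimate_pi(n=200_000, seed=1):
    """Sample n random points in the unit square and count those inside
    the quarter of the unit circle (x*x + y*y <= 1). Since that region
    occupies pi/4 of the square, 4 * hits / n approximates pi."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

pi_est = estimate_pi()
print(pi_est)  # close to 3.14159...
```

Doubling the accuracy requires roughly four times as many points, which is exactly the slow convergence mentioned above.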

2. 1947 Simplex Method

[1947: George Dantzig, at the RAND Corporation, creates the simplex method for linear programming.]

In 1947, George Dantzig of the RAND Corporation invented the simplex method.

The simplex method has since become an important cornerstone of linear programming.

Linear programming, simply put, means: given a set of linear constraints (all variables appear to the first power),

for example a1*x1 + b1*x2 + c1*x3 > 0, find the extreme value of a given linear objective function.

This may seem too abstract, but there are many examples in reality where this method can be used. For example, a company has limited human and material resources available for production ("linear constraints"), but its goal is to maximize profits ("the objective function takes the optimal value"). See, linear programming is not abstract!

As a part of operations research, linear programming has become an important tool in the field of management science.

The simplex method proposed by Dantzig is an extremely effective method for solving similar linear programming problems.
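To make the idea concrete, here is a toy linear program solved by brute-force vertex enumeration. This is emphatically not the simplex method itself; it only illustrates the geometric fact the simplex method exploits, namely that the optimum of a linear objective over linear constraints is attained at a vertex of the feasible region. The problem itself is made up for illustration: maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.

```python
from itertools import combinations

# Each constraint as (a, b, c), meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 3, 6), (-1, 0, 0), (0, -1, 0)]

def intersect(c1, c2):
    """Point where two constraint boundaries meet (2x2 linear system)."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel boundary lines, no single intersection
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in constraints)

# Candidate vertices = feasible intersections of constraint boundaries.
vertices = []
for c1, c2 in combinations(constraints, 2):
    p = intersect(c1, c2)
    if p is not None and feasible(p):
        vertices.append(p)

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best)  # the optimal vertex
```

Enumerating all vertices is hopeless in high dimensions; the simplex method's contribution is walking cleverly from vertex to adjacent vertex, improving the objective at each step.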

3. 1950 Krylov Subspace Iteration Method

[1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all from the Institute for Numerical Analysis at the National Bureau of Standards, initiated the development of Krylov subspace iteration methods.]

1950: Magnus Hestenes, Eduard Stiefel, and Cornelius Lanczos, all at the Institute for Numerical Analysis of the National Bureau of Standards, initiated the Krylov subspace iteration methods.

The Krylov subspace iteration methods are used to solve equations of the form Ax = b, where A is an n*n matrix. When n is sufficiently large, direct solution becomes very expensive.

The Krylov approach cleverly turns the problem into the iteration K*x_{i+1} = K*x_i + b - A*x_i.

Here K (the letter comes from the surname of the Russian mathematician Aleksei Krylov) is a constructed matrix close to A that is cheap to solve with.

The beauty of iterative algorithms is that they simplify complex problems into phased, easy-to-compute sub-steps.
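As a rough sketch of that iteration form, here is the simplest possible choice of K: the diagonal of A (the Jacobi splitting). Real Krylov methods such as conjugate gradient and GMRES are considerably more refined; this pure-Python toy (matrix and right-hand side chosen for illustration, diagonally dominant so the iteration converges) only shows the "solve by repeated cheap steps" idea.

```python
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]

def iterate(A, b, steps=100):
    """Run K*x_{i+1} = K*x_i + b - A*x_i with K = diag(A)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        # Residual r = b - A*x measures how far x is from solving Ax = b.
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n))
             for i in range(n)]
        # Solving K*delta = r is trivial because K is diagonal.
        x = [x[i] + r[i] / A[i][i] for i in range(n)]
    return x

x = iterate(A, b)
print(x)  # converges to the exact solution (1/11, 7/11)
```

Each step costs only a matrix-vector product, which is the whole appeal when A is huge and sparse.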

4. 1951 Decomposition Method for Matrix Calculation

[1951: Alston Householder of Oak Ridge National Laboratory formalizes the decompositional approach to matrix computations.]

In 1951, Alston Householder of Oak Ridge National Laboratory proposed a decomposition method for matrix calculations.

This approach shows that any matrix can be decomposed into products of special matrices: triangular, diagonal, orthogonal, and so on.

The significance of this work is that it made the development of flexible, reusable matrix computation software packages possible.
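One concrete instance of the decompositional approach is LU factorization: writing A = L*U with L unit lower triangular and U upper triangular. The sketch below uses Doolittle's method without pivoting, which assumes the pivots stay nonzero (true for this hand-picked example matrix); production libraries always pivot.

```python
def lu_decompose(A):
    """Doolittle LU factorization (no pivoting): returns L, U with
    A = L*U, L unit lower triangular, U upper triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):       # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):   # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[2.0, 1.0, 1.0],
     [4.0, 3.0, 3.0],
     [8.0, 7.0, 9.0]]
L, U = lu_decompose(A)
print(L)
print(U)
```

Once A is factored, solving Ax = b for many right-hand sides b reduces to cheap triangular solves, which is exactly why the decompositional view pays off in software.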

5. 1957 Optimizing Fortran Compiler

[1957: John Backus leads a team at IBM in developing the Fortran optimizing compiler.]

1957: John Backus leads a team at IBM that creates the Fortran optimizing compiler.

The name Fortran is a contraction of the two words "Formula Translation".

It is the first high-level programming language in the world to be officially adopted and spread to this day.

The language has since evolved through many revisions, up to Fortran 2008, and remains widely known.

6. 1959-61 QR algorithm for calculating matrix eigenvalues

[1959–61: JGF Francis of Ferranti Ltd, London, finds a stable method for computing eigenvalues, known as the QR algorithm.]

1959-61: J.G.F. Francis of Ferranti Ltd., London, found a stable method for computing eigenvalues: the famous QR algorithm.

This is another algorithm rooted in linear algebra. Anyone who has studied the subject will remember the "eigenvalues of a matrix": computing them is one of the most central problems in matrix computation.

The traditional approach goes through the roots of the characteristic polynomial, a high-order equation, which is very difficult when the problem is large in scale.

The QR algorithm repeatedly decomposes the matrix into the product of an orthogonal matrix Q and an upper triangular matrix R (I hope you still remember what an orthogonal matrix is when reading this. :D).

Like the Krylov method above, this is an iterative algorithm: it turns the hard problem of finding the roots of a high-order equation into staged, easy-to-compute sub-steps, making it practical to compute the eigenvalues of large matrices on a computer.
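A bare-bones sketch of the QR iteration for a tiny symmetric matrix: repeatedly factor A = Q*R, then form A <- R*Q; the diagonal converges to the eigenvalues. The QR factorization here uses classical Gram-Schmidt, which is adequate for this 2x2 well-conditioned example; production code (e.g. LAPACK) uses Householder reflections and shifts for speed and stability.

```python
import math

def qr_decompose(A):
    """Classical Gram-Schmidt QR for a 2x2 matrix given as rows."""
    a1 = [A[0][0], A[1][0]]            # first column
    a2 = [A[0][1], A[1][1]]            # second column
    n1 = math.hypot(*a1)
    q1 = [a1[0] / n1, a1[1] / n1]      # normalize first column
    proj = q1[0] * a2[0] + q1[1] * a2[1]
    v = [a2[0] - proj * q1[0], a2[1] - proj * q1[1]]
    n2 = math.hypot(*v)
    q2 = [v[0] / n2, v[1] / n2]        # orthogonalize second column
    Q = [[q1[0], q2[0]], [q1[1], q2[1]]]
    R = [[n1, proj], [0.0, n2]]
    return Q, R

A = [[2.0, 1.0],
     [1.0, 2.0]]       # eigenvalues are 3 and 1
for _ in range(50):
    Q, R = qr_decompose(A)
    # A <- R*Q, which equals Q^T * A * Q: same eigenvalues each step.
    A = [[sum(R[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(A[0][0], A[1][1])  # approach the eigenvalues 3 and 1
```

Each step is a similarity transform, so the eigenvalues never change; only the off-diagonal entries melt away.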

7. 1962 Quick Sort Algorithm

[1962: Tony Hoare of Elliott Brothers, Ltd., London, presents Quicksort.]

1962: Tony Hoare of Elliott Brothers Ltd., London, presented quicksort.

Haha, congratulations, you finally saw what may be your first familiar algorithm~.

As a classic sorting algorithm, the quick sort algorithm is used everywhere.

The quick sort algorithm was first designed by Sir Tony Hoare. Its basic idea is to divide the sequence to be sorted into two halves.

The left half is always "small" and the right half is always "large", and this process continues recursively until the entire sequence is in order.

Speaking of Sir Tony Hoare, the quicksort algorithm was actually just a small discovery he made almost by accident. His main contributions to computing are his work on the theory of formal methods and on the definition and design of programming languages such as ALGOL 60, for which he received the 1980 Turing Award.

The average time complexity of quicksort is only O(N log N), much better than the O(N^2) of ordinary selection sort and bubble sort.

It is truly a landmark achievement.
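The idea described above fits in a few lines. This list-comprehension version favors clarity over the in-place partitioning of Hoare's original scheme:

```python
def quicksort(seq):
    """Sort by partitioning around a pivot and recursing on each half."""
    if len(seq) <= 1:
        return list(seq)
    pivot = seq[len(seq) // 2]
    smaller = [x for x in seq if x < pivot]   # the "small" left half
    equal   = [x for x in seq if x == pivot]
    larger  = [x for x in seq if x > pivot]   # the "large" right half
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([3, 6, 1, 8, 2, 8, 5]))  # [1, 2, 3, 5, 6, 8, 8]
```

An in-place implementation avoids the extra lists and is what real libraries do, but the divide-and-recurse structure is identical.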

8. 1965 Fast Fourier Transform

[1965: James Cooley of the IBM TJ Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories unveil the fast Fourier transform.]

1965: James Cooley of the IBM T.J. Watson Research Center and John Tukey of Princeton University and AT&T Bell Laboratories jointly presented the fast Fourier transform.

The fast Fourier transform (FFT) is a fast algorithm for computing the discrete Fourier transform (the cornerstone of digital signal processing), with a time complexity of only O(N log N).

Even more important than its time efficiency, the FFT is very easy to implement in hardware, so it finds extremely wide application in electronics.

In the future, I will focus on this algorithm in my classic algorithm research series.
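As a taste of the algorithm, here is a minimal recursive radix-2 Cooley-Tukey FFT (the input length must be a power of two; the sample signal is arbitrary). Library implementations are far more optimized, but the O(N log N) divide-and-conquer structure is the same.

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT: split into even/odd samples, transform
    each half, then combine with the "twiddle factors"."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

signal = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
spectrum = fft(signal)
print(spectrum[0])  # the DC component equals the sum of the samples
```

Each level of recursion does O(N) work and there are log N levels, which is where the O(N log N) bound comes from; the naive discrete Fourier transform costs O(N^2).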

9. 1977 Integer Relation Detection Algorithm

[1977: Helaman Ferguson and Rodney Forcade of Brigham Young University advance an integer relation detection algorithm.]

1977: Helaman Ferguson and Rodney Forcade of Brigham Young University proposed an integer relation detection algorithm.

Integer relationship detection is an old problem, dating back to the time of Euclid. Specifically:

Given a set of real numbers x1, x2, ..., xn, do there exist integers a1, a2, ..., an, not all zero, such that a1*x1 + a2*x2 + ... + an*xn = 0?

That year, Helaman Ferguson and Rodney Forcade of Brigham Young University gave an algorithmic answer to this question.

This algorithm has been used to "simplify the calculation of Feynman diagrams in quantum field theory". OK, don't worry if that doesn't mean much to you for now. :D
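To show what an "integer relation" is, here is a brute-force search over small coefficients. This is emphatically not the Ferguson-Forcade algorithm (or its descendant PSLQ), which need no coefficient bound; it merely makes the definition concrete. The example is the golden ratio phi, which satisfies phi^2 - phi - 1 = 0, i.e. the relation (1, -1, -1) among (phi^2, phi, 1).

```python
from itertools import product

def find_relation(xs, bound=3, tol=1e-9):
    """Search for integers a_i in [-bound, bound], not all zero,
    with a_1*x_1 + ... + a_n*x_n approximately 0."""
    for coeffs in product(range(-bound, bound + 1), repeat=len(xs)):
        if any(coeffs) and abs(sum(a * x for a, x in zip(coeffs, xs))) < tol:
            return coeffs
    return None

phi = (1 + 5 ** 0.5) / 2
rel = find_relation([phi ** 2, phi, 1.0])
print(rel)  # an integer multiple of (1, -1, -1)
```

The brute-force search is exponential in the number of terms and the coefficient bound; the achievement of Ferguson and Forcade was doing this efficiently and without guessing a bound.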

10. 1987 Fast Multipole Algorithm

[1987: Leslie Greengard and Vladimir Rokhlin of Yale University invent the fast multipole algorithm.]

1987: Greengard and Rokhlin of Yale University invent the fast multipole algorithm.

The fast multipole algorithm is used to compute accurate motions of N particles interacting via gravitational or electrostatic forces,

such as the stars in the Milky Way, or the atoms in a protein. By approximating the influence of distant particles in bulk rather than one by one, it reduces the naive O(N^2) cost to roughly O(N). OK, a general sense of it is enough.
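For contrast, here is the direct O(N^2) pairwise computation that the fast multipole method is designed to avoid: every particle interacts with every other. The example computes a gravitational-style potential energy for unit masses, with physical constants set to 1 purely for illustration.

```python
import math
import random

def pairwise_potential(points):
    """Sum of -1/|r_i - r_j| over all pairs: N*(N-1)/2 terms."""
    total = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            dz = points[i][2] - points[j][2]
            total += -1.0 / math.sqrt(dx * dx + dy * dy + dz * dz)
    return total

rng = random.Random(0)
pts = [(rng.random(), rng.random(), rng.random()) for _ in range(100)]
print(pairwise_potential(pts))
```

With 100 particles the double loop is instant, but at galactic scale (10^11 stars) the pair count is astronomical, which is why grouping distant particles into multipole expansions matters.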

If you have any comments or questions, please leave a message or comment on the blog.

Original author: JULY
