The first ten are the top ten "algorithms from THE BOOK". Description from the thread's initiator: "Proofs from THE BOOK" collects dozens of concise and elegant mathematical proofs and quickly won the favor of a large number of mathematics enthusiasts. If there were a companion volume, "Algorithms from THE BOOK", which algorithms would be included in it?

1st place: Union-find

Strictly speaking, union-find is a data structure, one specifically designed for merging and querying disjoint sets. It cleverly borrows a tree structure to reduce the programming complexity to an incredible degree; with a bit of recursion, almost every operation can be written in two lines of code. The idea of path compression is the finishing touch of the whole structure. Union-find is extremely efficient: a single operation takes what can almost be regarded as constant time. But because the actual behavior of the data structure is hard to predict, a precise analysis of its time complexity requires quite advanced techniques.

2nd place: Knuth-Morris-Pratt string matching algorithm

For an introduction to this algorithm, please refer to the article "Teach you to thoroughly understand the KMP algorithm from beginning to end". The KMP algorithm was once left out of the top ten greatest algorithms of the 20th century, but people obviously could not accept that such a beautiful and efficient algorithm would be excluded, so in the final vote it ranked second.

3rd place: BFPRT algorithm

In 1973, Blum, Floyd, Pratt, Rivest, and Tarjan jointly published a paper titled "Time bounds for selection", which gave an algorithm for selecting the k-th largest element of an array, commonly known as the "median of medians" algorithm. Relying on a carefully designed pivot-selection method, the algorithm guarantees linear time complexity even in the worst case, beating the traditional approach that is linear on average but O(n^2) in the worst case. A group of masters of the complexity analysis of recursive algorithms together constructed an algorithm that truly deserves to come from THE BOOK.

Here is a brief sketch of a related selection algorithm, with expected O(n) time, for finding the k-th smallest element of an array. It is similar to the partition step of quicksort: each partition returns the final position s of the pivot in the array, and s is then compared with k. If s is greater than k, recurse on the left part; if s is smaller than k, recurse on the right part; otherwise s is exactly the position we are looking for, and the pivot value returned by the partition is the answer. Once the position s that meets the requirement has been found, the elements on the side smaller than the pivot can be traversed and output. Assuming the elements are distinct, the expected running time of this procedure can be proved to be O(n). (A minimal code sketch follows below.)
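A minimal quickselect sketch in Python, illustrating the partition-and-recurse idea just described (the function name and the example call are my own, and a random pivot is used instead of the BFPRT median-of-medians pivot, so the O(n) bound holds only in expectation, not in the worst case):

    import random

    def kth_smallest(arr, k):
        # Returns the k-th smallest element (k is 1-based) of arr.
        # Expected O(n) time with a random pivot; worst case O(n^2).
        lo, hi = 0, len(arr) - 1
        target = k - 1                        # 0-based rank we are looking for
        while True:
            pivot = arr[random.randint(lo, hi)]
            # Three-way partition of the current range around the pivot value.
            less    = [x for x in arr[lo:hi + 1] if x < pivot]
            equal   = [x for x in arr[lo:hi + 1] if x == pivot]
            greater = [x for x in arr[lo:hi + 1] if x > pivot]
            arr[lo:hi + 1] = less + equal + greater
            s = lo + len(less)                # first index holding the pivot value
            if target < s:
                hi = s - 1                    # answer is in the "smaller" part
            elif target >= s + len(equal):
                lo = s + len(equal)           # answer is in the "larger" part
            else:
                return pivot                  # the pivot sits at the target rank

    print(kth_smallest([7, 2, 9, 4, 1, 5, 8], 3))   # prints 4

This version rebuilds the range with list comprehensions for clarity rather than partitioning in place, which is what a production implementation (or the real BFPRT algorithm) would do.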
4th place: Quicksort

The quicksort algorithm appears on almost every list of classic algorithms; it was also selected as one of the top ten greatest algorithms of the 20th century.

5th place: Floyd-Warshall all-pairs shortest path algorithm

For an introduction to this algorithm, please refer to the article I wrote: Comparison of Several Shortest Path Algorithms (http://blog.csdn.net/v_JULY_v/archive/2011/02/12/6181485.aspx). Here d is a 2D array, initialized with the minimum-cost direct edges; after the triple loop, d[i][j] holds the length of the shortest path from i to j:

for k from 1 to n:
    for i from 1 to n:
        for j from 1 to n:
            d[i][j] = min(d[i][j], d[i][k] + d[k][j])

6th place: Gentry's fully homomorphic encryption scheme

This algorithm is very beautiful: it allows a third party to perform arbitrary operations on encrypted data without ever obtaining the private key (I do not fully understand it myself).

7th place: Depth-first search and breadth-first search

They are the basis for many other algorithms.

8th place: Miller-Rabin primality test

The idea is to use properties of prime numbers (such as Fermat's little theorem) to search, with a small probability of error, for a witness that the number is composite. If no witness is found after enough random trials, the number is declared (probably) prime.

9th place: Binary search

To find an element in a sorted collection, you can use the binary search algorithm, also called half-interval search. Binary search first compares the element in the middle of the collection with the key. There are three cases (assuming the collection is arranged from small to large):
1. If the key is less than the middle element, the matching element, if it exists, must be on the left, so binary search is applied to the left half.
2. If the key is equal to the middle element, the element has been found.
3. If the key is greater than the middle element, the matching element, if it exists, must be on the right, so binary search is applied to the right half.
In addition, when the collection (or remaining sub-range) is empty, the key cannot be found.

10th place: Huffman coding

Huffman coding is an entropy-coding (weight-based) algorithm for lossless data compression. It was invented by David A. Huffman in 1952, while he was pursuing a doctorate at MIT, and published in the paper "A Method for the Construction of Minimum-Redundancy Codes".

11. Cooley-Tukey FFT algorithm, i.e. the fast Fourier transform.

12. Linear programming.

13. Dijkstra's algorithm. Like the fifth item above, another shortest-path algorithm.

14. Merge sort.

15. Ford-Fulkerson algorithm. The network maximum-flow algorithm.

16. The Euclidean algorithm. In mathematics, the Euclidean algorithm, also known as the method of successive division, is an algorithm for computing the greatest common divisor, that is, the largest common factor of two positive integers. It is described as the very first algorithm in TAOCP, which shows how highly it is regarded. It is one of the oldest known algorithms, dating back more than two thousand years. The Euclidean algorithm first appeared in Euclid's "Elements" (Book VII, Propositions 1 and 2), and in China it can be traced back to the "Nine Chapters on the Mathematical Art" of the Eastern Han Dynasty. The extended Euclidean algorithm constructively proves that for any integers a and b there exists a pair of integers x and y such that ax + by = gcd(a, b). (A minimal sketch follows below.)
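As an illustration of item 16, a minimal sketch of the extended Euclidean algorithm in Python (the function name and example values are mine); it returns gcd(a, b) together with one pair (x, y) satisfying ax + by = gcd(a, b), for non-negative integers a and b:

    def ext_gcd(a, b):
        # Returns (g, x, y) with g = gcd(a, b) and a*x + b*y == g.
        if b == 0:
            return a, 1, 0
        g, x1, y1 = ext_gcd(b, a % b)
        # gcd(a, b) = gcd(b, a mod b); back-substitute to recover x and y.
        return g, y1, x1 - (a // b) * y1

    print(ext_gcd(240, 46))   # (2, -9, 47): 240*(-9) + 46*47 = 2

Dropping the x and y bookkeeping leaves the plain Euclidean algorithm, which simply repeats gcd(a, b) = gcd(b, a mod b) until the remainder is zero.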
17. RSA encryption algorithm. An encryption algorithm that will be introduced in detail later.

18. Genetic algorithm.

19. Expectation-maximization (EM) algorithm. This algorithm has been selected as one of the top ten classic algorithms in the field of data mining. In statistical computing, the expectation-maximization (EM) algorithm is an algorithm for finding maximum-likelihood estimates of the parameters of a probabilistic model that depends on unobservable latent variables. EM is often used for data clustering in machine learning and computer vision. The EM algorithm alternates between two steps. The first step computes the expectation (E): using the current estimates of the latent variables, it evaluates the expected likelihood. The second step is the maximization (M): it maximizes the expected likelihood obtained in the E step to compute new parameter values. The parameter estimates found in the M step are then used in the next E step, and the process repeats alternately.

20. Data compression. Data compression is a technique that increases data density by reducing the redundancy of data stored in computers or transmitted over communication links, ultimately reducing the storage space the data occupies. Data compression is widely used in file storage and distributed systems. It also effectively increases storage-medium capacity and expands usable network bandwidth.

21. Hash function. A hash function transforms an input of arbitrary length (also called the pre-image) into an output of fixed length through a hashing algorithm; the output is the hash value.

22. Dynamic programming.

23. Heapsort. Heapsort is fast and its performance is stable: its time complexity, even in the worst case, is O(n log n). Of course, in practical applications a well-implemented quicksort is usually still better than heapsort. In addition, the heap data structure can be used as an efficient priority queue.

24. Recursion and backtracking. I believe you are familiar with these two, so I will not go into details here.

25. Longest common subsequence. The longest common subsequence, abbreviated LCS, is defined as follows: if a sequence S is a subsequence of two or more known sequences and is the longest among all sequences satisfying this condition, then S is called a longest common subsequence of the known sequences. A dynamic-programming method for computing the LCS, taking two sequences X and Y as an example, is as follows. Let the two-dimensional array entry f(i, j) denote the length of the longest common subsequence of the first i positions of X and the first j positions of Y. Then:

f(1, 1) = same(1, 1)
f(i, j) = max{ f(i-1, j-1) + same(i, j), f(i-1, j), f(i, j-1) }

where same(a, b) is 1 when the a-th position of X is exactly the same as the b-th position of Y, and 0 otherwise. The largest value in f is then the length of the longest common subsequence of X and Y, and by backtracking through the array the subsequence itself can be recovered. The space and time complexity of this algorithm are both O(n^2); after optimization, the space complexity can be reduced to O(n) and the time complexity to O(n log n). (A small sketch of the basic recurrence appears at the end of this article.)

26. Red-black tree: algorithm and implementation. Regarding red-black trees, there is an implementation in the Linux kernel.

27. A* search algorithm. Compared with BFS, Dijkstra's algorithm and others, the A* search algorithm, as an efficient shortest-path search algorithm, is being applied more and more widely.

28. SIFT algorithm for image feature extraction and matching. SIFT (scale-invariant feature transform) is a computer-vision algorithm used to detect and describe local features in images. It searches for extreme points in scale space and extracts their position, scale, and rotation invariants. The algorithm was published by David Lowe in 1999 and refined in 2004.
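As a footnote to item 25, a minimal Python sketch of the basic O(n*m) LCS recurrence given above (the function name and sample strings are my own; this is the unoptimized version):

    def lcs_length(X, Y):
        # f[i][j] = length of the LCS of the first i characters of X
        #           and the first j characters of Y.
        n, m = len(X), len(Y)
        f = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                same = 1 if X[i - 1] == Y[j - 1] else 0
                f[i][j] = max(f[i - 1][j - 1] + same, f[i - 1][j], f[i][j - 1])
        return f[n][m]

    print(lcs_length("ABCBDAB", "BDCABA"))   # prints 4 ("BCBA" is one LCS)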
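Returning to item 1, a minimal union-find sketch in Python (the names and the small example are mine), showing the recursive find with path compression where the core of each operation really does fit in roughly two lines:

    parent = list(range(10))            # each of the 10 elements starts as its own root

    def find(x):
        # Path compression: hang x directly under the root of its set.
        if parent[x] != x:
            parent[x] = find(parent[x])
        return parent[x]

    def union(x, y):
        # Merge the sets containing x and y.
        parent[find(x)] = find(y)

    union(1, 2); union(2, 3)
    print(find(1) == find(3))           # True: 1 and 3 are now in the same set
    print(find(1) == find(4))           # False: 4 is still in its own set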