Algorithm 1: Quick sort

Quick sort is a sorting algorithm developed by Tony Hoare. On average, sorting n items requires O(n log n) comparisons. In the worst case, O(n²) comparisons are required, but this case is uncommon. In practice, quick sort is often significantly faster than other O(n log n) algorithms, because its inner loop can be implemented efficiently on most architectures. Quick sort uses a divide-and-conquer strategy to split a list into two sub-lists.

Algorithm steps:
1. Pick an element from the sequence, called the "pivot".
2. Rearrange the sequence so that all elements smaller than the pivot come before it and all elements larger than the pivot come after it (elements equal to the pivot may go to either side). When this partition completes, the pivot is in its final position in the sequence. This is called the partition operation.
3. Recursively sort the subsequence of elements smaller than the pivot and the subsequence of elements larger than the pivot.

The base case of the recursion is a sequence of size zero or one, which is always sorted. Although the recursion continues, the algorithm always terminates, because each iteration puts at least one element (the pivot) into its final position.

Algorithm 2: Heap sort

Heapsort is a sorting algorithm built on the heap data structure. A heap is an approximately complete binary tree that satisfies the heap property: the key of every child node is always smaller than (or always larger than) that of its parent. The average time complexity of heapsort is O(n log n).

Algorithm steps:
1. Build a heap H[0..n-1].
2. Swap the head of the heap (the maximum value) with the tail of the heap.
3. Reduce the size of the heap by 1 and call shift_down(0) to sift the new top element down to its proper position.
4. Repeat steps 2 and 3 until the size of the heap is 1.

Algorithm 3: Merge sort

Merge sort is an efficient sorting algorithm based on the merge operation, and a very typical application of the divide-and-conquer method.

Algorithm steps:
1. Allocate a space whose size is the sum of the sizes of the two sorted sequences; this space will hold the merged sequence.
2. Set two pointers, initially at the starting positions of the two sorted sequences.
3. Compare the elements the two pointers point to, place the smaller one into the merge space, and advance that pointer to the next position.
4. Repeat step 3 until one pointer reaches the end of its sequence.
5. Copy all remaining elements of the other sequence directly to the end of the merged sequence.

Algorithm 4: Binary search

Binary search is a search algorithm for finding a specific element in a sorted array. The search starts with the middle element of the array. If the middle element is exactly the element sought, the search ends. If the target element is greater or less than the middle element, the search continues in the half of the array that is greater or less than the middle element, again beginning with that half's middle element. If at some step the remaining array is empty, the target cannot be found. Each comparison halves the search range, so the time complexity is O(log n).

Algorithm 5: BFPRT (linear-time selection)

The BFPRT algorithm solves a classic problem: selecting the k-th largest (or k-th smallest) element from a sequence of n elements. Through clever analysis, BFPRT guarantees linear time complexity even in the worst case. The idea of this algorithm is similar to that of quick sort.
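As a quick illustration, the binary search of Algorithm 4 can be sketched in Python (a minimal sketch; the function name and signature are illustrative, not from the original article):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                  # the search range [lo, hi] is non-empty
        mid = (lo + hi) // 2         # middle element of the current range
        if arr[mid] == target:
            return mid               # found: the search ends
        elif arr[mid] < target:
            lo = mid + 1             # continue in the upper half
        else:
            hi = mid - 1             # continue in the lower half
    return -1                        # range became empty: not found

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Each loop iteration discards half of the remaining range, which is where the O(log n) bound comes from.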
However, to keep the time complexity at O(n) even in the worst case, the algorithm's five authors (Blum, Floyd, Pratt, Rivest and Tarjan) applied some subtle processing.

Algorithm steps:
1. Divide the n elements into ⌈n/5⌉ groups of 5 elements each.
2. Find the median of each group by sorting it with any simple method, such as insertion sort.
3. Recursively call the selection algorithm to find the median x of all the medians from the previous step (with an even number of medians, take the smaller of the two middle ones).
4. Partition the array around x; let k be the number of elements less than or equal to x, so n−k elements are greater than x.
5. If i == k, return x; if i < k, recursively find the i-th smallest element among the elements less than x; if i > k, recursively find the (i−k)-th smallest element among the elements greater than x.

Termination condition: when n = 1, the returned element is the i-th smallest element.

Algorithm 6: DFS (depth-first search)

Depth-first search is a type of search algorithm. It traverses the nodes of a tree along the tree's depth, exploring each branch as deeply as possible. When all edges of a node v have been explored, the search backtracks to the starting node of the edge through which v was discovered. This process continues until all nodes reachable from the source node have been found. If undiscovered nodes remain, one of them is selected as a new source node and the process is repeated, until every node has been visited. DFS is a blind (uninformed) search.

Depth-first search is a classic algorithm in graph theory. It can generate a topological ordering of the target graph, and with a topological ordering many related graph problems, such as the longest path problem, become easy to solve. A stack data structure is generally used to assist in the implementation of the DFS algorithm.
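Before the step-by-step description, depth-first search can be sketched in Python on an adjacency-list graph (a minimal recursive sketch; the graph representation and names are illustrative assumptions):

```python
def dfs(graph, start, visited=None):
    """Depth-first traversal of graph (a dict: vertex -> list of neighbours).

    Returns the list of vertices in the order they were visited.
    """
    if visited is None:
        visited = []
    visited.append(start)                   # visit vertex v
    for neighbour in graph[start]:          # explore each edge of v in turn
        if neighbour not in visited:
            dfs(graph, neighbour, visited)  # go as deep as possible first
    return visited

# Example: a small undirected graph stored as adjacency lists
graph = {
    "a": ["b", "c"],
    "b": ["a", "d"],
    "c": ["a"],
    "d": ["b"],
}
print(dfs(graph, "a"))  # ['a', 'b', 'd', 'c']
```

The recursion here plays the role of the explicit stack: backtracking happens automatically when a call returns.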
Algorithm steps:
1. Visit vertex v.
2. Starting from each unvisited adjacent vertex of v in turn, perform a depth-first traversal of the graph, until every vertex that has a path to v has been visited.
3. If unvisited vertices remain in the graph, start from one of them and repeat the depth-first traversal until all vertices in the graph have been visited.

The description above may be abstract, so here is an example: after visiting a starting vertex v of the graph, DFS proceeds from v to visit any of its adjacent vertices, w1; then from w1 it visits a vertex w2 that is adjacent to w1 but not yet visited; then from w2 it continues in the same way, until it reaches a vertex u all of whose adjacent vertices have already been visited. DFS then backtracks one step to the most recently visited vertex and checks whether it has other unvisited adjacent vertices. If it does, DFS visits one of them and continues from there as before; if not, it backtracks another step and searches again. This process repeats until all vertices of the connected graph have been visited.

Algorithm 7: BFS (breadth-first search)

Breadth-first search is a graph search algorithm. Simply put, BFS starts from the root node and traverses the nodes of the tree (or graph) level by level along its breadth. The algorithm terminates once all nodes have been visited. BFS is also a blind search. A queue data structure is generally used to assist in the implementation of the BFS algorithm.

Algorithm steps:
1. Put the root node into the queue.
2. Take the first node from the queue and check whether it is the target. If it is the target, end the search and return the result. Otherwise, add all of its unchecked direct child nodes to the queue.
3. If the queue is empty, the entire graph has been checked, meaning the graph does not contain the target.
End the search and return "target not found".
4. Repeat step 2.

Algorithm 8: Dijkstra's algorithm

Dijkstra's algorithm was proposed by the Dutch computer scientist Edsger Dijkstra. It solves the single-source shortest path problem on non-negatively weighted directed graphs using a breadth-first-search-like expansion, and finally produces a shortest-path tree. The algorithm is often used in routing, or as a submodule of other graph algorithms.

The input of the algorithm consists of a weighted directed graph G and a source vertex S in G. Let V denote the set of all vertices in G. Each edge of the graph is an ordered pair of vertices; (u, v) means there is a path from vertex u to vertex v. Let E denote the set of all edges in G, with weights given by a weight function w: E → [0, ∞), so w(u, v) is the non-negative weight of the edge from vertex u to vertex v. The weight of an edge can be thought of as the distance between the two vertices. The weight of a path between any two vertices is the sum of the weights of all the edges on that path. Given vertices s and t in V, Dijkstra's algorithm finds the lowest-weight path (i.e. the shortest path) from s to t. The algorithm can also find the shortest paths from a vertex s to every other vertex in the graph. For directed graphs without negative weights, Dijkstra's algorithm is the fastest known single-source shortest path algorithm.
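Before the formal steps, Dijkstra's algorithm can be sketched in Python with a priority queue (a minimal sketch; the adjacency-list representation and names are illustrative assumptions):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a non-negatively weighted digraph.

    graph: dict mapping vertex -> list of (neighbour, weight) pairs.
    Returns a dict mapping each reachable vertex to its distance from source.
    """
    dist = {source: 0}
    heap = [(0, source)]                # priority queue of (distance, vertex)
    while heap:
        d, u = heapq.heappop(heap)      # closest unsettled vertex
        if d > dist.get(u, float("inf")):
            continue                    # stale entry: u was already settled closer
        for v, w in graph.get(u, []):
            nd = d + w                  # candidate distance via u
            if nd < dist.get(v, float("inf")):
                dist[v] = nd            # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist

# Example: shortest distances from vertex "s"
g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2)], "b": []}
print(dijkstra(g, "s"))  # {'s': 0, 'a': 1, 'b': 3}
```

The heap stands in for "select the vertex with the smallest distance value" in the steps that follow; with a binary heap the running time is O((V + E) log V).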
Algorithm steps:
1. Initially, let S = {V0} and T = {the remaining vertices}. The distance value for each vertex Vi in T is:
   d(V0, Vi) = the weight on the arc <V0, Vi>, if <V0, Vi> exists;
   d(V0, Vi) = ∞, if <V0, Vi> does not exist.
2. From T, select the vertex W with the smallest distance value that is not in S, and add it to S.
3. Update the distance values of the remaining vertices in T: if adding W as an intermediate vertex shortens the distance value from V0 to some Vi, update that distance value.
4. Repeat steps 2 and 3 until S contains all the vertices.

Algorithm 9: Dynamic programming

Dynamic programming is a method used in mathematics, computer science and economics for solving complex problems by breaking the original problem down into relatively simple subproblems. Dynamic programming is typically applicable to problems with overlapping subproblems and the optimal substructure property, and it often takes far less time than naive approaches.

The basic idea behind dynamic programming is very simple: to solve a given problem, we solve its different parts (its subproblems) and then combine the subproblem solutions into a solution to the original problem. Many of the subproblems are very similar, so dynamic programming attempts to solve each subproblem only once, reducing the amount of computation: once the solution to a given subproblem has been computed, it is memoized, so that the next time the same subproblem's solution is needed it can simply be looked up. This approach is particularly useful when the number of repeated subproblems grows exponentially with the size of the input.

The most classic dynamic programming problem is the knapsack problem.

Algorithm steps:

Optimal substructure property.
If the subproblem solutions contained in an optimal solution of a problem are themselves optimal, we say that the problem has the optimal substructure property (that is, it satisfies the principle of optimality). The optimal substructure property provides an important clue for solving the problem with dynamic programming.

Overlapping subproblems property. The overlapping subproblems property means that when a recursive algorithm solves the problem top-down, the subproblems generated are not always new; some subproblems are computed repeatedly. The dynamic programming algorithm exploits this overlap: it computes each subproblem only once and saves the result in a table, so that when an already-computed subproblem is needed again, the result can simply be looked up in the table, achieving higher efficiency.

Algorithm 10: Naive Bayes classification

Naive Bayes is a simple probabilistic classification algorithm based on Bayes' theorem. The basis of Bayesian classification is probabilistic reasoning: how to complete reasoning and decision-making tasks when the existence of various conditions is uncertain and only their probabilities of occurrence are known. Probabilistic reasoning is the opposite of deterministic reasoning. The Naive Bayes classifier rests on an independence assumption: each feature of a sample is assumed to be unrelated to the other features.

The Naive Bayes classifier relies on an accurate natural probability model and can achieve very good classification results on supervised learning sample sets. In many practical applications, Naive Bayes model parameters are estimated by maximum likelihood estimation; in other words, a Naive Bayes model can work without using Bayesian probability or any Bayesian method.
Despite these naive ideas and oversimplified assumptions, the Naive Bayes classifier still achieves quite good results in many complex real-world situations.
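To close, the Naive Bayes idea can be sketched in Python for categorical features (a minimal sketch with add-one smoothing; the toy data and all names are illustrative assumptions, not a reference implementation):

```python
from collections import Counter, defaultdict

def train_nb(samples, labels):
    """Fit a categorical Naive Bayes model: P(label) and P(feature_i = value | label)."""
    label_counts = Counter(labels)
    # feature_counts[(i, value, label)] = number of samples of that label
    # whose i-th feature equals value
    feature_counts = defaultdict(int)
    for features, label in zip(samples, labels):
        for i, value in enumerate(features):
            feature_counts[(i, value, label)] += 1
    return label_counts, feature_counts

def classify_nb(model, features):
    """Return the label maximizing P(label) * prod_i P(feature_i | label)."""
    label_counts, feature_counts = model
    total = sum(label_counts.values())
    best_label, best_score = None, -1.0
    for label, count in label_counts.items():
        score = count / total  # prior P(label)
        for i, value in enumerate(features):
            # likelihood with add-one (Laplace) smoothing,
            # assuming two possible values per feature
            score *= (feature_counts[(i, value, label)] + 1) / (count + 2)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy example: predict whether to play from (weather, wind)
X = [("sunny", "weak"), ("sunny", "strong"), ("rain", "weak"), ("rain", "strong")]
y = ["yes", "no", "yes", "no"]
model = train_nb(X, y)
print(classify_nb(model, ("sunny", "weak")))  # yes
```

The independence assumption appears directly in the product over features: each likelihood factor P(feature_i | label) is estimated on its own, ignoring the other features.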