Paul

Time Complexity

Time Complexity: In computer science, the time complexity of an algorithm is a function that qualitatively describes its running time. It does not tell you how much wall-clock time a program takes to solve a problem; rather, it describes how quickly the required time grows as the problem size increases.

In other words, on a computer that processes data at high speed, the time taken on one particular input cannot measure the quality of a program. Instead, we should ask how the running time behaves when the size of the input grows by a factor of hundreds: does it stay the same, or does it slow down by hundreds or even thousands of times? If the time taken by a program stays constant no matter how large the data is, we say the program is very good, with a time complexity of O(1), also called constant complexity. If the time grows in proportion to the size of the data, the program has O(n) complexity, such as finding the maximum of n numbers. Algorithms like bubble sort and insertion sort, where doubling the data quadruples the running time, have O(n^2) complexity. There are also exhaustive algorithms whose running time grows geometrically, giving O(a^n) exponential complexity, or even O(n!) factorial complexity.
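The three everyday classes above can be sketched in code. This is an illustrative sketch (the function names are mine, not from the post): one constant-time step, the O(n) maximum-finding mentioned in the text, and the O(n^2) bubble sort.

```python
def first_element(nums):
    # O(1): one step, regardless of how many numbers there are.
    return nums[0]

def find_max(nums):
    # O(n): a single pass over all n numbers; twice the data, twice the work.
    best = nums[0]
    for x in nums[1:]:
        if x > best:
            best = x
    return best

def bubble_sort(nums):
    # O(n^2): nested passes, so doubling n roughly quadruples the comparisons.
    nums = list(nums)  # work on a copy
    for i in range(len(nums)):
        for j in range(len(nums) - 1 - i):
            if nums[j] > nums[j + 1]:
                nums[j], nums[j + 1] = nums[j + 1], nums[j]
    return nums

print(find_max([3, 1, 4, 1, 5]))     # 5
print(bubble_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```

Counting the loop iterations in each function is exactly the kind of growth analysis the paragraph describes.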

We do not write complexities like O(2n^2): the coefficient 2 is a constant factor that does not affect how the running time grows, so it is dropped. Similarly, O(n^3+n^2) is simply O(n^3), because the n^2 term is dominated by the n^3 term. For the same reason, a program that takes 0.01n^3 steps is considered less efficient than one that takes 100n^2 steps: even though the former performs better when n is very small, the latter's time grows more slowly with the data size, and eventually the O(n^3) program will take far longer than the O(n^2) one. Likewise, we say that O(n^100) complexity is smaller than O(1.01^n) complexity.
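The 0.01n^3 versus 100n^2 comparison from the text can be checked numerically. A minimal sketch, using the two hypothetical cost functions exactly as given in the paragraph:

```python
def cost_cubic(n):
    # Hypothetical step count 0.01 * n^3, from the text's example.
    return 0.01 * n ** 3

def cost_quadratic(n):
    # Hypothetical step count 100 * n^2, from the text's example.
    return 100 * n ** 2

# The cubic program wins for small n, but loses once n passes the
# crossover point (here, n = 10000, where both cost 10^10 steps).
for n in (100, 10_000, 1_000_000):
    cheaper = "cubic" if cost_cubic(n) < cost_quadratic(n) else "quadratic (or tie)"
    print(f"n = {n}: cheaper is {cheaper}")
```

This is why constant factors are dropped: beyond the crossover, only the growth rate matters.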

The complexities above can be divided into two levels, with the second far greater than the first. One level is O(1), O(log(n)), O(n^a), and so on, which we call polynomial complexity, because the input size n appears in the base rather than in the exponent. The other is O(a^n) and O(n!) complexity, which is non-polynomial and often beyond what computers can handle. When solving a problem, the algorithm we choose usually needs to have polynomial complexity; a non-polynomial algorithm requires too much time and often times out, unless the data size is very small.
