f( n ) = 1, because in this case a constant number of instructions is required (provided, of course, that there is no recursion; see below). A single loop from 1 to n gives the asymptotics f( n ) = n, since a constant number of instructions is executed before and after the loop, and the constant number of instructions inside the loop body runs n times.

The following PHP program checks whether an array A of size n contains a specified value:

```php
<?php
$exists = false;
for ( $i = 0; $i < n; ++$i ) {
    if ( $A[ $i ] == $value ) {
        $exists = true;
        break;
    }
}
?>
```
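As a rough sketch of the count for this snippet (the exact constants depend on how finely we count instructions and do not matter): initializing $exists costs a constant number of instructions, and in the worst case the loop body runs n times, each pass costing some constant c, so f( n ) has the form c·n + d for constants c and d; dropping the constants leaves just n.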
For this program, f( n ) = n (what "linear" means precisely, we will see in the next section). Note that the break statement may let the program finish earlier, possibly even after a single iteration. But remember that we are interested in the most unfavorable scenario, in which the array A does not contain the specified value at all. So f( n ) = n still holds: asymptotically this program requires f( n ) = n instructions. By contrast, a one-line program such as v = a[ 0 ] + a[ 1 ] needs only a constant number of instructions, so f( n ) = 1.

The following C++ program checks whether an array A of size n contains two identical values:

```cpp
bool duplicate = false;
for ( int i = 0; i < n; ++i ) {
    for ( int j = 0; j < n; ++j ) {
        if ( i != j && A[ i ] == A[ j ] ) {
            duplicate = true;
            break;
        }
    }
    if ( duplicate ) {
        break;
    }
}
```
Here f( n ) = n². As a rule of thumb: a single loop of n iterations gives f( n ) = n; a loop inside a loop gives f( n ) = n²; a loop inside a loop inside a loop gives f( n ) = n³; and so on. Now consider a program that calls a function inside a loop:

```c
int i;
for ( i = 0; i < n; ++i ) {
    f( n );
}
```

If we know that f( n ) executes exactly n instructions, then the number of instructions of the whole program is asymptotically n², since f( n ) is called n times.

When we have determined such an f exactly in the asymptotic sense
, we say that our program is Θ( f( n ) ). For example, the programs above are Θ( 1 ), Θ( n² ) and Θ( n² ), respectively. Θ( n ) is pronounced "theta of n". Sometimes we say that f( n ) (the original instruction-counting function, constants included) is Θ( something ). For example, we can say that f( n ) = 2n is a function that is Θ( n ); nothing new here. We can also write 2n ∈ Θ( n ), which is pronounced "two n belongs to theta of n". Do not be confused by this notation: it simply says that if we count the number of instructions a program needs and that count equals 2n, then the asymptotic behavior of the algorithm is described by n (which we find by dropping the constant). Given this notation, we can state true mathematical facts such as 2n + 12 ∈ Θ( n ) or n² + 3n ∈ Θ( n² ).

We call the function that we put inside Θ( ) the time complexity, or simply the complexity, of our algorithm. Thus, an algorithm that is Θ( n )
has complexity n
. There are special names for Θ( 1 ), Θ( n ), Θ( n² ) and Θ( log( n ) ), because they occur very often. We say that a Θ( 1 ) algorithm is constant-time, a Θ( n ) algorithm is linear, a Θ( n² ) algorithm is quadratic, and a Θ( log( n ) ) algorithm is logarithmic (do not worry if you do not yet know what a logarithm is; we will get to it soon).
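To tie these names to code, here are minimal illustrative sketches (written in Ruby, and assuming an array a holding n numbers; the work done in each loop body is unimportant, only the loop structure matters):

```ruby
# Θ( 1 ), constant time: a fixed number of instructions, independent of n
v = a[ 0 ] + a[ 1 ]

# Θ( n ), linear: a single loop over the n elements
sum = 0
n.times { |i| sum += a[ i ] }

# Θ( n² ), quadratic: two nested loops, each running n times
pairs = 0
n.times do |i|
  n.times do |j|
    pairs += 1 if i != j && a[ i ] == a[ j ]
  end
end
```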
Programs with a larger Θ run slower than programs with a smaller one.

Now suppose we are given an array A of size n (sounds familiar, doesn't it?) and are asked to write a program that sorts it. This is interesting because the need arises often in real systems. For example, a file browser needs to sort files by name so that users can navigate them more easily. Or another example: in a video game it may be necessary to sort the 3D objects displayed on the screen by their distance from the player's point of view in the virtual world, in order to determine which of them are visible to the player and which are not (this is called the Visibility Problem). Sorting is also interesting because many algorithms exist for it, some of which are worse than others. Finally, the task is easy to define and explain. So let's write a piece of code that sorts an array:

```ruby
b = []
n.times do
  m = a[ 0 ]
  mi = 0
  a.each_with_index do |element, i|
    if element < m
      m = element
      mi = i
    end
  end
  a.delete_at( mi )
  b << m
end
```
This method (known as selection sort) finds the minimum of the original array ( a in the code above; the minimum value is denoted m and its index mi ), places it at the end of a new array ( b in our case), and removes it from the original. Then the minimum of the remaining values is found again, appended to the new array (which now holds two values), and removed from the old one. The process repeats until all elements have been moved from the original array to the new one, which means the sort is finished. In this example we have two nested loops. The outer loop runs n times, and the inner loop runs once for each element of the array a. Since a initially has n elements and we delete one of them at each iteration, the inner loop first runs n times, then n - 1 times, then n - 2, and so on, until at the last iteration it runs only once.

The total number of inner-loop iterations is therefore 1 + 2 + ... + (n - 1) + n. Evaluating this sum directly to find the complexity of the code would be somewhat awkward, but we can find an "upper bound" for it instead. To do this, we change the program (mentally, without touching the code) so as to make it worse than it is, and then derive the complexity of what we get. We can then say with confidence that the original program behaves either as badly or (more likely) better. For example, we can change the inner loop so that it always repeats exactly n
times at each iteration. Some of these repetitions will be useless, but they make the complexity of the resulting algorithm easy to analyze: after this small change the new algorithm is Θ( n² ), because we now have two nested loops, each of which executes exactly n times. And if so, then the original algorithm is O( n² ).
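One way to realize that change, keeping the rest of the Ruby snippet from above intact (this is only a counting device, a sketch of the deliberately "worse" program, not code you would actually want to ship):

```ruby
b = []
n.times do
  m = a[ 0 ]
  mi = 0
  # the inner loop now always makes exactly n passes, even when fewer
  # elements remain in `a`; the extra passes do nothing useful and
  # exist only to make the instruction count easy to reason about
  n.times do |i|
    if i < a.length && a[ i ] < m
      m = a[ i ]
      mi = i
    end
  end
  a.delete_at( mi )
  b << m
end
```

The result still sorts correctly; it simply wastes iterations, so its instruction count is at least as large as that of the original.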
O( n² ) is pronounced "big O of n squared". It says that our program is asymptotically no worse than n²: it will behave either better or the same. By the way, if the code really is Θ( n² ), we still say that it is O( n² ). To see this, imagine changing the original program in a way that alters it only slightly, making it just a little worse, for example by adding a useless instruction at the beginning of the program. Doing so changes only the constant in the instruction-counting function, which we ignore in the asymptotics. Thus a Θ( n² ) program is O( n² ) as well.

Conversely, a program that is Θ( n ) is also O( n² ), in addition to being O( n )
. If we imagine that the Θ( n ) program is just a for loop that runs n times, we can make it worse by adding another for loop inside it, also repeating n times; this gives a program with f( n ) = n². To summarize: any program that is Θ( a ) is O( b ) for any b worse than a. Note also that the change does not have to be meaningful or produce code similar to the original; all it has to do is increase the number of instructions for the given n. The modified program is used only to count instructions, not to solve the problem.
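A minimal sketch of that idea (again assuming an array a of n numbers): the first loop is Θ( n ); wrapping a useless inner loop around the same work inflates the count to n · n, which is exactly the argument that the original loop is O( n² ) as well as O( n ).

```ruby
# Θ( n ): a single pass over the data
total = 0
n.times { |i| total += a[ i ] }

# the same computation made artificially worse: the empty inner loop
# adds n useless iterations per pass, so f( n ) = n * n for this version
total = 0
n.times do |i|
  n.times { }   # does nothing; exists only to inflate the instruction count
  total += a[ i ]
end
```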
Knowing that our program is O( n² ), we are safe: the analysis has shown that it will never behave worse than n². This gives us a good estimate of how fast our program is. Let's work through a few examples so that you become more comfortable with the new notation. Determine which of the following statements are true:

1. A Θ( n ) algorithm is O( n )
2. A Θ( n ) algorithm is O( n² )
3. A Θ( n² ) algorithm is O( n³ )
4. A Θ( n ) algorithm is O( 1 )
5. An O( 1 ) algorithm is Θ( 1 )
6. An O( n ) algorithm is Θ( 1 )
Here are the answers:

1. True: the original program was Θ( n ), so O( n ) holds without changing the program at all.
2. Since n² is worse than n, this is true.
3. Since n³ is worse than n², this is true.
4. Since 1 is not worse than n, this is false. If a program asymptotically needs n instructions (a linear number), we cannot make it worse so that it needs only 1 instruction (a constant number) asymptotically.
5. True, since the two complexities are the same.
6. This may or may not be true depending on the algorithm; in general it is false. If an algorithm is Θ( 1 ), then of course it is O( n ). But if it is O( n ), it need not be Θ( 1 ): for example, a Θ( n ) algorithm is O( n ) but is not Θ( 1 ).

Using the sum of an arithmetic progression, one can also show that the sorting program above is not only O( n² ) but in fact Θ( n² ). If you do not know what an arithmetic progression is, look it up on Wikipedia; it is not difficult.
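For reference, a sketch of that calculation: 1 + 2 + ... + (n - 1) + n = n( n + 1 ) / 2 = n²/2 + n/2; dropping the constant factor 1/2 and the lower-order term n/2 leaves n², so the instruction count of the sorting program is indeed Θ( n² ).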
Because the O-complexity of an algorithm gives an upper bound on its actual complexity, which in turn is described by Θ, we sometimes say that Θ gives us an exact (tight) estimate. If we know that a bound we have found is not tight, we can use a lower-case o to denote that. For example, if an algorithm is Θ( n ), then its exact complexity is n. This algorithm is therefore both O( n ) and O( n² ). Since the algorithm is Θ( n ), the O( n ) bound is the tight one; the O( n² ) bound can be written as o( n² ) (pronounced "little o of n squared") to show that we know it is not tight. It is of course better when we can find tight bounds for an algorithm, since they give us more information about its behavior, but unfortunately this is not always easy to do.

For each of the following cases, determine whether the O bound found is tight, and check whether any of the bounds may simply be wrong:

1. A Θ( n ) algorithm for which we found O( n ) as the upper bound
2. A Θ( n² ) algorithm for which we found O( n³ ) as the upper bound
3. A Θ( 1 ) algorithm for which we found O( n ) as the upper bound
4. A Θ( n ) algorithm for which we found O( 1 ) as the upper bound
5. A Θ( n ) algorithm for which we found O( 2n ) as the upper bound
Here are the answers:

1. The Θ complexity and the O complexity are the same, so the bound is tight.
2. Here the O complexity is of a larger scale than the Θ complexity, so this bound is not tight. A tight bound would be O( n² ), and we can write that the algorithm is o( n³ ).
3. Again the O complexity is of a larger scale than the Θ complexity, so the bound is not tight. A tight bound would be O( 1 ), and O( n ) can be rewritten as o( n ).
4. A Θ( n ) algorithm cannot have O( 1 ) as an upper bound, since n is of greater complexity than 1; this bound must be a mistake. Remember, O denotes an upper bound.
5. This may look like a bound that is not tight, but in fact it is tight: 2n and n are asymptotically the same, and O and Θ are concerned only with asymptotics. So O( 2n ) = O( n ), and the bound is tight, since the complexity is Θ( n ).

Note that it is not possible for the O complexity of an algorithm to be simpler (of a smaller scale) than its Θ complexity.
The O-complexity tells us that our code will never be slower than a certain bound, and this gives us a basis for judging whether our program is good enough. If we act the other way round, making the existing code better, and then find the complexity of the result, we use the Ω-notation. Ω therefore gives us a complexity below which our program cannot go. It is useful when we want to prove that a program is slow or that an algorithm is bad, for instance when we argue that an algorithm is too slow to use in a particular case. Saying that an algorithm is Ω( n³ ) means that the algorithm is no better than n³: it may be Θ( n³ ), or Θ( n⁴ ), or even worse, but we know the limit of its "goodness". Thus Ω gives us a lower bound on the complexity of our algorithm. Analogously to o, we can write ω when we know that such a bound is not tight. For example, a Θ( n³ ) algorithm is o( n⁴ ) and ω( n² ). Ω( n ) is pronounced "big omega of n", while ω( n ) is pronounced "little omega of n".

For example, for algorithms with complexities Θ( 1 ), Θ( √n ), Θ( n ), Θ( n² ) and Θ( n³ ), tight and non-tight O and Ω bounds can be found as follows:

1. The tight bounds are O( 1 ) and Ω( 1 ). A non-tight O-bound would be O( n ). Remember that O gives us an upper bound; since n is of a larger scale than 1, this bound is not tight, and we can also write it as o( n ). But we cannot find a non-tight bound for Ω, because there is no function lower than 1, so we have to make do with the tight bound.
2. The tight bounds have to match the Θ complexity, so they are O( √n ) and Ω( √n ). For a non-tight upper bound we can take O( n ), since n is greater than √n; and because this bound is not tight, we can also write it as o( n ). As a non-tight lower bound we can simply use Ω( 1 ) (or ω( 1 )).
3. The tight bounds are O( n ) and Ω( n ). The bounds ω( 1 ) and o( n³ ) can serve as non-tight ones. They are not the best bounds, since both are far from the original complexity, but they fit our definitions.
4. The tight bounds are O( n² ) and Ω( n² ). As non-tight bounds we can use ω( 1 ) and o( n³ ), as in the previous example.
5. The tight bounds are O( n³ ) and Ω( n³ ), respectively. Two non-tight bounds could be ω( n²√n ) and o( n³√n ). Although these bounds are still not tight, they are better than the ones we gave above.
, respectively. The two non-strict boundaries can be ω (√nn 2 ) and o (√nn 3 ). Although these limits are still lax, they are better than the ones we derived aboveO
and Ω
instead of Θ
that we may not be sure of the accuracy of the bounds we found or simply do not want to dig the algorithm deeper.O
and Θ
.Ω
gives us a lower limit for the behavior of our function (i.e., we improve the program so that it calculates fewer instructions), we still refer to the worst case analysis. This is because we feed the program's worst data set and analyze its behavior.Comparison operator for asymptotic estimates | Number comparison operator |
---|---|
The algorithm is o (something) | Number < something |
O ( - ) | ≤ - |
Θ ( - ) | = - |
Ω ( - ) | ≥ - |
ω ( - ) | > - |
Although all of the symbols O, o, Ω, ω and Θ are useful at times, O is the one used most often, since it is easier to determine than Θ and more useful in practice than Ω.

Source: https://habr.com/ru/post/195482/