
Problems from the ShAD 2014 entrance exam



Admission to the ShAD (Yandex School of Data Analysis) tests knowledge within a general program that covers the basic topics of higher algebra, mathematical analysis, combinatorics, probability theory, and the fundamentals of programming. Below, the problems of the 2014 ShAD entrance exam are discussed in detail. Attention! The post is quite long, so sit back, arm yourself with a pencil and, if needed, tea with cookies. And be sure to clear your evening! Chances are high that the problems discussed below will absorb your mind for several hours and keep someone from going to bed on time. In any case, this evening promises to be interesting. Welcome under the cut.

Task 1


Find all real square matrices of order 3 satisfying the equation X² + E = 0.

Solution
Since the characteristic polynomial of the matrix X is a polynomial of odd degree 3, it has at least one real root, and therefore the matrix X has at least one real eigenvalue λ. Then there is a nonzero vector v for which the equality holds:
X·v = λ·v.
Now X² + E = 0 means that (X² + E)·v = 0, or X²·v + E·v = 0, but
X²·v = X·(X·v) = X·(λ·v) = λ·X·v = λ²·v,
so that:
(X² + E)·v = (λ² + 1)·v = 0.
Since v is a nonzero vector, the equality requires λ² + 1 = 0, which is possible only for a complex value of λ. Consequently, a real 3×3 matrix X satisfying the equation X² + E = 0 does not exist.
Second solution
Let us generalize the problem and consider a matrix X of dimension n×n. There is a theorem on the determinant of a product of square matrices, formulated as follows: the determinant of the product of two square matrices equals the product of the determinants of the factors. Then if the equation
X² + E = 0 (1)
holds, so does the equation
det(X²) = det(−E). (2)
Obviously, det(X²) = det(X)·det(X) > 0: det(X) cannot equal zero, since det(−E) is nonzero and equality (2) would then fail, so either det(X) > 0 or det(X) < 0, and in either case the product det(X)·det(X) is positive. Since −E is also an n×n matrix, det(−E) = (−1)ⁿ. Thus we see that for even n equation (2) can be satisfied (real solutions exist), while for odd n it cannot (only complex solutions exist). Therefore, for odd n the original equation (1) cannot hold, and in particular equality (1) fails for n = 3. We reach the same conclusion as in the first solution: a real 3×3 matrix X satisfying the equation X² + E = 0 does not exist.

The problem is solved, and even in several ways. Below we discuss square matrices of even order, since among them one can find matrices for which equation (1) holds. It is interesting to see what such a matrix looks like.

Consider a 2×2 matrix, knowing that multiplied by itself it must give −E:



Hence the following equations hold:
a₁₁² + a₁₂·a₂₁ = −1
a₁₁·a₁₂ + a₁₂·a₂₂ = 0
a₂₁·a₁₁ + a₂₂·a₂₁ = 0
a₂₁·a₁₂ + a₂₂² = −1

From the first and fourth equations we get: a₁₁² = a₂₂² (*)
From the second and third equations we get: (a₁₂ + a₂₁)·(a₁₁ + a₂₂) = 0 (**)

From equation (*) it follows that two cases must be considered:
Case 1: a₁₁ = a₂₂
Case 2: a₁₁ = −a₂₂

Case 1
Let a₁₁ = a₂₂ = a. If a₁₁ + a₂₂ ≠ 0, then for (**) to hold we need a₁₂ = −a₂₁. Let a₁₂ = b. Now we can write out the product of the matrices:

The equality can hold for real a and b only when a = 0, b = ±1. Thus, the desired matrices look like this:

You can verify that multiplying each of them by itself gives −E.

Case 2
a₁₁ = −a₂₂; then equality (**) holds automatically. Denote a₁₁ = a. We write out the product of the matrices:

The equality a² + a₁₂·a₂₁ = −1 must hold. From it, express a₂₁ = −(1 + a²)/a₁₂, and let a₁₂ = b. Then the desired matrix looks like this:

Alternatively, from the equality a² + a₁₂·a₂₁ = −1 we can express a₁₂ = −(1 + a²)/a₂₁ and denote a₂₁ = b. Then the desired matrix looks like this:

We must also require a, b ∈ R, b ≠ 0. It is again easy to verify that multiplying each of these matrices by itself gives −E.

Note that the first case discussed above is a special case of the second (with a = 0, b = 1). Therefore the second case describes the general form of the desired matrix.

Thus, we have established what real second-order square matrices must look like in order to give −E when multiplied by themselves.
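As a quick sanity check (a sketch in Go; the values a = 3, b = 2 are arbitrary examples), we can take a matrix of the general form found above, multiply it by itself, and confirm that the result is −E:

```go
package main

import "fmt"

// mul2 multiplies two 2x2 matrices.
func mul2(x, y [2][2]float64) [2][2]float64 {
	var r [2][2]float64
	for i := 0; i < 2; i++ {
		for j := 0; j < 2; j++ {
			for k := 0; k < 2; k++ {
				r[i][j] += x[i][k] * y[k][j]
			}
		}
	}
	return r
}

func main() {
	a, b := 3.0, 2.0 // arbitrary real a, nonzero b
	// matrix of the form [[a, b], [-(1+a^2)/b, -a]] from Case 2
	x := [2][2]float64{{a, b}, {-(1 + a*a) / b, -a}}
	fmt.Println(mul2(x, x)) // prints [[-1 0] [0 -1]]
}
```

Changing a and b to any other real values (with b ≠ 0) leaves the result unchanged, which is exactly the statement derived above.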


Task 2


Among any four participants of a hike, at least one is acquainted with the other three. Prove that every participant of the hike, except at most three, is acquainted with all the others.

Solution
Let n be the number of participants in the hike. Build an acquaintance graph G = (V, E), in which the vertices represent the participants and the edges their mutual acquaintances. By the condition, any subgraph C = (V_c, E_c), |V_c| = 4, of the graph G has at least one vertex v_c of degree d(v_c) = 3 (of any four, at least one is acquainted with the other three). We must prove that all vertices of G, except at most three, have degree d(v) = n − 1 (every participant, except at most three, is acquainted with all the others). In other words, we must prove that G has a complete subgraph (clique) D whose minimum possible number of vertices |V_d| is n − 3.

In the well-known clique problem, the question is whether the graph G contains a clique of a given size (the decision version) or what the maximum clique size in the graph is (the computational version). The problem is NP-complete and, strictly speaking, has no efficient solution algorithm.

Here, however, we have a remarkable condition (1*) that simplifies everything greatly, so to solve the problem it suffices to construct all graphs satisfying (1*) (see cases 1, 2, 3 below) and determine the maximum clique size in them. This can be done fairly quickly:

1. If there are no strangers at all, i.e. the acquaintance graph G is complete, then the number of people acquainted with everyone else is n (the maximum clique has size n).

2. Let some two vertices a, b ∈ V of the graph G be non-adjacent, i.e. a and b are not acquainted with each other. Suppose G also contains another pair of non-adjacent vertices c, d ∈ V. Then the condition (1*) that in any group of four people (for example a, b, c, d) at least one must be acquainted with the other three is violated. Consequently, there can be no other pair of strangers c, d disjoint from the pair a, b. Therefore, if a and b are not acquainted with each other, then all the other n − 2 people are acquainted with one another, and the subgraph D has |V_d| = n − 2 vertices (circled in blue in the figure). Accordingly, if a and b are acquainted with everyone else, then n − 2 people are acquainted with everyone (vertices corresponding to people acquainted with everyone are colored green), and the maximum clique has size n − 2:



3. If a is also not acquainted with c, then, since in any group of four (for example a, b, c, d) only d can be acquainted with the other three (a is not acquainted with b or c), the vertices a, b, c are acquainted with all the other n − 3 people, who are also acquainted with one another. Thus the minimum number of people acquainted with everyone is n − 3 (the maximum clique size is n − 3), q.e.d.



Below is a simple example for five hike participants. Accordingly, there are five ways to choose any four of them (the number of combinations of 4 out of 5). In the graph G, a is not acquainted with b and c. Three participants are not acquainted with everyone else. This is the maximum under the given condition (1*), which holds in every subgraph.
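The claim can also be checked by brute force on small graphs (a sketch in Go; n = 6 is an assumption chosen to keep the enumeration small): enumerate all graphs on n vertices, keep those where every 4-subset contains a vertex adjacent to the other three, and confirm that at most three vertices fail to be acquainted with everyone.

```go
package main

import "fmt"

// condHolds reports whether every 4-subset has a vertex adjacent to the other three.
func condHolds(adj *[6][6]bool, n int) bool {
	for a := 0; a < n; a++ {
		for b := a + 1; b < n; b++ {
			for c := b + 1; c < n; c++ {
				for d := c + 1; d < n; d++ {
					four := [4]int{a, b, c, d}
					ok := false
					for _, v := range four {
						all := true
						for _, u := range four {
							if u != v && !adj[v][u] {
								all = false
							}
						}
						if all {
							ok = true
						}
					}
					if !ok {
						return false
					}
				}
			}
		}
	}
	return true
}

func main() {
	const n = 6
	// enumerate all graphs on n vertices via edge bitmasks
	var edges [][2]int
	for i := 0; i < n; i++ {
		for j := i + 1; j < n; j++ {
			edges = append(edges, [2]int{i, j})
		}
	}
	worst := 0
	for mask := 0; mask < 1<<len(edges); mask++ {
		var adj [n][n]bool
		for e, p := range edges {
			if mask&(1<<e) != 0 {
				adj[p[0]][p[1]], adj[p[1]][p[0]] = true, true
			}
		}
		if !condHolds(&adj, n) {
			continue
		}
		// count vertices not acquainted with everyone
		cnt := 0
		for v := 0; v < n; v++ {
			deg := 0
			for u := 0; u < n; u++ {
				if adj[v][u] {
					deg++
				}
			}
			if deg < n-1 {
				cnt++
			}
		}
		if cnt > worst {
			worst = cnt
		}
	}
	fmt.Println("max non-universal vertices:", worst) // expect at most 3
}
```

For n = 6 this checks all 2¹⁵ graphs in a fraction of a second; the bound of three non-universal vertices is exactly what the proof above predicts.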




Task 3


Describe all non-degenerate real matrices A for which all elements of the matrices A and A −1 are non-negative.

Solution
Consider matrices A and B of dimension 2×2, where B = A⁻¹ is the inverse matrix. Then the product of A and B is the identity matrix E:



Therefore, the following equations must be fulfilled:
a₁₁b₁₁ + a₁₂b₂₁ = 1 (1)
a₁₁b₁₂ + a₁₂b₂₂ = 0 (2)
a₂₁b₁₁ + a₂₂b₂₁ = 0 (3)
a₂₁b₁₂ + a₂₂b₂₂ = 1 (4)

Consider expressions (2) and (3). When do they vanish, given that the elements are non-negative? Since the elements of A and B are non-negative, identities (2) and (3) hold if and only if both terms in each expression are zero. Consider the possible options. We immediately discard the options in which the elements of an entire row of A or an entire column of B vanish, since then the non-degeneracy condition fails. Options in which the product would be the zero matrix are also rejected (a₁₁ = a₂₂ = b₂₁ = b₁₂ = 0, or a₂₁ = a₁₂ = b₁₁ = b₂₂ = 0).

Two options remain: a₁₂ = a₂₁ = b₁₂ = b₂₁ = 0 and a₁₁ = a₂₂ = b₁₁ = b₂₂ = 0. Then, for equalities (1) and (4) to hold and for the product AB to be the identity matrix, two forms are possible:


Consider the 3 × 3 matrices A and B :



Therefore, the following equations must be fulfilled:
a₁₁b₁₁ + a₁₂b₂₁ + a₁₃b₃₁ = 1
a₁₁b₁₂ + a₁₂b₂₂ + a₁₃b₃₂ = 0
a₁₁b₁₃ + a₁₂b₂₃ + a₁₃b₃₃ = 0
a₂₁b₁₁ + a₂₂b₂₁ + a₂₃b₃₁ = 0
a₂₁b₁₂ + a₂₂b₂₂ + a₂₃b₃₂ = 1
a₂₁b₁₃ + a₂₂b₂₃ + a₂₃b₃₃ = 0
a₃₁b₁₁ + a₃₂b₂₁ + a₃₃b₃₁ = 0
a₃₁b₁₂ + a₃₂b₂₂ + a₃₃b₃₂ = 0
a₃₁b₁₃ + a₃₂b₂₃ + a₃₃b₃₃ = 1

Consider the expressions that must vanish and analyze when this happens. As before, we immediately discard the variants in which the elements of an entire row (column) of matrix A (matrix B) vanish, and likewise all variants in which the product AB would be the zero matrix. What remains are the cases in which the nonzero elements of A and B stand in the positions shown in red:


Multiplying matrices of this type, for example the option:

we will get the identity matrix.

So we see that the desired non-degenerate matrix A with non-negative elements has an inverse matrix B with non-negative elements if each row and each column of A contains exactly one nonzero element, the remaining elements of that row and column being zero. Such a matrix resembles a permutation matrix, but with arbitrary nonzero values in place of the ones. Such matrices are called monomial, or generalized permutation, matrices (you can read about them on Wikipedia).

There is a theorem: let A be a non-negative matrix. Then A has a non-negative inverse if and only if A is a generalized permutation matrix.

Knowing this theorem in advance, the problem can be solved faster :)
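For a generalized permutation matrix the inverse can be written down directly: if A has its only nonzero entry d in row i and column j, then A⁻¹ has the entry 1/d in row j and column i. A small sketch in Go (the matrix entries here are arbitrary example values):

```go
package main

import "fmt"

func main() {
	// a generalized permutation matrix: one nonzero entry per row and column
	a := [3][3]float64{
		{0, 2, 0},
		{0, 0, 3},
		{5, 0, 0},
	}
	// invert it: an entry d at (i, j) becomes 1/d at (j, i)
	var b [3][3]float64
	for i := 0; i < 3; i++ {
		for j := 0; j < 3; j++ {
			if a[i][j] != 0 {
				b[j][i] = 1 / a[i][j]
			}
		}
	}
	// check that A*B = E; note that all entries of B are non-negative
	var e [3][3]float64
	for i := 0; i < 3; i++ {
		for j := 0; j < 3; j++ {
			for k := 0; k < 3; k++ {
				e[i][j] += a[i][k] * b[k][j]
			}
		}
	}
	fmt.Println(b)
	fmt.Println(e) // the identity matrix
}
```

The inverse is again a generalized permutation matrix with non-negative entries, in line with the theorem above.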


Task 4


Given a numeric array of length n. Propose an algorithm that finds the maximum sum over all contiguous segments (subarrays) of this array. The time limit is O(n); the additional-memory limit is O(1).

Solution
Consider an array a[] of n elements:
a₀, a₁, a₂, …, aₙ₋₁
We walk through the array, accumulating the current running sum in some variable sum. If at some point the sum becomes negative, we simply reset sum = 0. The maximum of all values taken by the variable sum during the pass is the answer to the problem.

This algorithm is known as Kadane's algorithm (you can read more about it on Wikipedia). The running time is O(n), as required by the problem, since we make a single pass over the array a[] of n elements. The O(1) additional-memory requirement is also met.

An example implementation in Go:

package main

import (
	"fmt"
	"math/rand"
	"strconv"
	"time"
)

func main() {
	const N int = 10 // array length
	var arr [N]int
	rand.Seed(time.Now().UnixNano()) // seed the generator once
	// fill the array with random values via random(from, to)
	for i := 0; i < N; i++ {
		arr[i] = random(-10, 10)
	}
	fmt.Println(arr)
	var ans int = arr[0]
	var sum int = 0
	for r := 0; r < N; r++ {
		sum += arr[r]
		ans = max(ans, sum)
		sum = max(sum, 0)
	}
	fmt.Println("MAX subarray sum = " + strconv.Itoa(ans))
}

func max(a, b int) int {
	if a < b {
		return b
	}
	return a
}

func random(min, max int) int {
	return rand.Intn(max-min) + min
}

Example runs of the program:

[6 -5 -5 -1 -5 6 4 -5 -7 -1]
MAX subarray sum = 10

[3 -7 -1 -3 -10 9 9 -3 6 -3]
MAX subarray sum = 21



Task 5


There are 10 coins of different weights and a balance. A single weighing lets you determine, for two chosen coins, which one is heavier. Is it possible, in 20 weighings, to determine the order of the coins by weight?

Solution
In essence, this is a problem about optimal sorting. Number the coins from 1 to 10. Following some algorithm, we perform a procedure of 20 weighings. Each such procedure corresponds to a 20-bit binary number a, for example 1001 0110 1010 0111 1010, whose k-th bit records the result of the k-th weighing. Each arrangement of the coins by weight corresponds to a permutation b of the numbers from 1 to 10. Let A be the set of 20-bit binary numbers (cardinality |A| = 2²⁰) and B the set of permutations of the numbers from 1 to 10 (cardinality |B| = 10!). If the algorithm determines the order, then each a corresponds to at most one b, and each b to at least one a; but 2²⁰ < 10!, so in 20 weighings it is NOT possible to determine the order of the coins by weight.

Note that the question "is it possible in 20 weighings to find out in which order the coins go by weight?" asks whether there exists an algorithm that always determines the order in 20 weighings, i.e. whether the result can be guaranteed in a fixed number of actions.
In individual cases, with incredible luck, it is possible to determine the order of the coins even in 19 weighings. Consider one such case. Obviously, in a group of n coins, n − 1 weighings are required to uniquely place a specific coin. So, we have 10 coins. Let the 1st coin be the lightest, the 2nd heavier than the 1st, the 3rd heavier than the 2nd, ..., the 10th the heaviest. We take the 5th coin at random and in 9 weighings (the number of weighings is circled in a black square in the figure) determine unambiguously that it is fifth by weight. Now we have two groups of coins: those lighter and those heavier than the 5th. In the first group we take the 3rd coin and in three weighings unambiguously place two coins at once, the 3rd and the 4th. After that we place the 1st and the 2nd with one more weighing. In the second group we again get incredibly lucky and place the 8th coin in four weighings, and then similarly the 6th and 7th, and the 9th and 10th, with one weighing each. Summing up the total number of weighings (in black squares in the figure), we get 19 < 20.



This, of course, is a case of incredible luck and has nothing to do with solving the problem in general. So do not write "POSSIBLE" in your answer.
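The information-theoretic bound at the heart of the solution is easy to check numerically (a small sketch in Go):

```go
package main

import "fmt"

func main() {
	weighings := 1 << 20 // 2^20 possible outcomes of 20 binary comparisons
	fact := 1
	for i := 2; i <= 10; i++ {
		fact *= i // 10! possible orderings of the coins
	}
	fmt.Println(weighings, "<", fact, "?", weighings < fact)
	// prints: 1048576 < 3628800 ? true
}
```

Since the number of distinguishable outcomes (2²⁰ ≈ 1.05 million) is smaller than the number of orderings (10! ≈ 3.63 million), no 20-weighing strategy can separate all cases.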


Task 6


Calculate the sum of the integrals:


Solution
First solution

Perform a change of variable in each of the integrals. In the first integral:



We get:



In the second integral:



We get:



Thus, the sum of the integrals can be written in the form:



Integrate the second integral in the sum above by parts:





The opposite terms cancel; substituting the limits of integration into the remaining term, we get:



Second solution, proposed by the user p53

One can see that the integrands and the limits of integration were not chosen at random; namely:





So the sum of the integrals equals the sum of the areas of the curvilinear trapezoids, which can be represented as the difference of the areas of two rectangles:





Task 7


The game consists of identical and independent rounds, in each of which a win occurs with probability p. When the player wins a round he receives $1, and when he loses he pays $1. As soon as his capital reaches N dollars, he is declared the winner and removed from the casino. Find the probability that the player will sooner or later lose all his money, as a function of his starting capital k.

Solution
We have before us the well-known "gambler's ruin" problem of probability theory, usually stated a little differently: players A and B have a and b dollars, respectively. In each round of the game, one of them wins $1 from the other. The probability that player A wins a round is p; for player B the probability of winning is q = 1 − p. What are the probabilities P_a and P_b that player A and, respectively, player B wins all of the opponent's money? In 1711, de Moivre published the following results:
P_a = (1 − (q/p)^a) / (1 − (q/p)^(a+b))
P_b = (1 − (p/q)^b) / (1 − (p/q)^(a+b))

In our case, player A is us, with starting capital k, and player B is the casino. We assume the player is removed from the casino when he has won all of its money (ruined it), i.e. N is the total money the player and the casino had before the game began. The starting capital of player B (the casino) is then N − k. Thus, we need to find the probability P_b that the casino wins all of the player's money:
P_b = (1 − (p/q)^(N−k)) / (1 − (p/q)^N)

The solution is simple and quick if, looking at task 7, you immediately say: "This is gambler's ruin, a classic!" If not, and you want to see where the formulas come from, open the spoiler below.

Gambler's ruin problem
For a start, a good video explaining the task at hand.


Statement of the problem
Two players A and B sit at a table. In each round, one wins $1 from the other. Let p be the probability that A wins a round and q = 1 − p the probability that B wins it. The starting capital of player A is i dollars; of player B, N − i dollars. Find the probability that player A wins all of the opponent's money.

Note: player A's starting capital is denoted i because the number of dollars is an integer. Player B's starting capital is N − i, so that it is convenient to say we are looking for the probability that A ends up with (N − i) + i = N dollars (A wins everything from the opponent).

The amount of dollars held by player A can be viewed as a random walk between 0 and N:

then p is the probability that the point moves right by $1, and q the probability that it moves left by $1.
Let p_i = P(A wins the game | A has i dollars).
By the formula of total probability we get:

p_i = p·p_{i+1} + q·p_{i−1}

Moreover, 1 ≤ i ≤ N − 1, with boundary conditions:
p_0 = 0, the probability that A wins the game with $0 starting capital (player A simply cannot enter the game);
p_N = 1, the probability that A wins the game with $N starting capital (player B's starting capital is then $0).

The expression above is a difference equation, and p_i can be viewed as its solution. The formal substitution p_i = x^i turns the difference equation into an algebraic equation:

p·x² − x + q = 0
The discriminant of the quadratic equation is 1 − 4pq > 0 for p ≠ q. Find the roots of the quadratic equation:

x₁ = 1, x₂ = q/p
Since the discriminant of a quadratic equation is greater than zero, the solution of the difference equation is sought in the form:

p_i = C₁·x₁^i + C₂·x₂^i = C₁ + C₂·(q/p)^i
Then, from the boundary conditions p_0 = 0 and p_N = 1:
C₁ = 1 / (1 − (q/p)^N), C₂ = −1 / (1 − (q/p)^N)
Substituting C₁ and C₂ into the expression for p_i, we get:
p_i = (1 − (q/p)^i) / (1 − (q/p)^N)
To get an expression for p_i when p = q, denote x = q/p; as p → q, x → 1. Find the limit:
p_i = lim_{x→1} (1 − x^i) / (1 − x^N) = i/N
Thus, we have found a formula for the probability that player A wins all the money (N dollars), as a function of his starting capital i.


Task 8


Let a be a real number. For each integer n ≥ 0, denote by a_n the distance from a to the nearest rational number of the form m/2^n, where m is an integer. Find the largest possible value of the sum of the series:



Solution
The distance a_n from a real number a ∈ R to the nearest rational number of the form m/2^n, where m, n ∈ Z, is easily expressed through the "distance to the nearest integer" function S(x), x ∈ R:



Obviously, the values of this function lie between 0 and 1/2; moreover, it is periodic. For those who did not manage to estimate this in their head, here is a link to Wolfram Alpha.
Thanks to the periodicity of the function, to search for the maximum it suffices to consider it on one period, namely on the interval [0, 1], instead of the whole real line. A good optimization to start with :)

So we write the expression for a_n, bring the terms inside the absolute value to a common denominator, and factor out 1/2^n; the value of a_n does not change:



Thus, based on the definition of the function S (x) , we obtain a new expression for a n :



One "can notice" that the required series is, as is "obvious" to everyone reading this post, the blancmange (Takagi) function, named after the dessert it resembles:





The blancmange curve, along with the well-known Weierstrass function, is continuous but nowhere differentiable. Of course, if the solver is familiar with it, they can proudly write at once that this is the blancmange function and that, by Kahane's theorem (3.1), its maximum value is 2/3. And that is a ready answer! The trouble is that at the exam, although printed reference books are allowed, there may simply be no information on this function in them. So let us continue looking for an "honest" solution.

So, we need to find the largest possible sum of the series:



For clarity, here is a graph of the function S (x) :



We write the series for T (x) :



We single out its partial sums T_n(x) (curly brackets in the figure above), each of which corresponds to a number of iterations in the construction of the Takagi curve and is given by the expression:



We write out the partial sums and pay attention to those with even indices:



Denote T₂(x) by S₁(x) (the so-called "tabletop" function) and note that for even partial sums the following expressions hold:



and



Then mathematical induction comes to our aid. Kahane, in his proof, reasoned similarly: considering the even partial sums, he constructed the first two of them, T₂(x) and T₄(x), and concluded by induction that the maximum value of T₂ₙ(x) is:



Then the maximum sum M of the series is computed as the sum of an infinitely decreasing geometric progression:



And here are the graphs themselves for the first 6 iterations:



Even if we stop at the fourth iteration, constructing these graphs is a slow process. Here you can build them for other iterations, if interested. Still, one would like a faster way.

Alternatively, one can apply induction earlier, paying attention to the "tabletop" function S₁(x). Consider again the partial sums T₂(x) and T₄(x), and plot S₁(x) and S₁(4x):



Obviously, with each subsequent increase in "frequency" (S₁(4x), S₁(16x), S₁(64x), ...), 2 new "tabletops" appear inside each one built at the previous iteration (whose frequency is 4 times lower), and therefore there is always an argument x at which the values of S₁(4x), S₁(16x), S₁(64x), ... coincide and equal 1/2. That is, we have shown that if we split the series into sums, the 1st and 2nd terms giving S₁(x), the 3rd and 4th giving (1/4)·S₁(4x), the 5th and 6th (1/16)·S₁(16x), and so on, then each of these sums does not exceed:



And the maximum sum M of the series:



This method differs only slightly from the previous one, but the amount of plotting is clearly smaller.

The problem can be solved even faster by applying induction earlier still. So, once again write down the sum of the series:



Consider the functions S(x), S(2x), S(4x), ... and plot their graphs:



We see that all these functions, as the frequency keeps doubling, always pass through the same point with argument x = 1/3 (where their values coincide and equal 1/3); therefore the required sum of the series attains its maximum at this point:



The problem is undoubtedly beautiful, but at the exam it is better not to get carried away by its beauty and instead solve it quickly.

Source: https://habr.com/ru/post/264941/

