Hi, Habr! Mathematicians often stop at proving that a solution exists, and then something like this happens:
An engineer, a physicist, and a mathematician check into a hotel. A fire breaks out in each of their rooms.
The engineer runs into the corridor, sees a fire hose on the wall, grabs it, turns on the water, runs back into the room and douses the fire.
The physicist quickly estimates the volume of combustible material, the flame temperature, the heat capacity of water and steam, the atmospheric pressure and so on, pours a precisely measured amount of water from the decanter into a glass and puts the fire out with it.
The mathematician jumps out into the corridor, sees a fire extinguisher on the wall and, happily exclaiming "A solution exists!", calmly returns to his room.
Under the cut, I will tell you about the rakes you step on along the way when you actually go all the way.
Memories
Well, to start, let's recall three problems of this kind and try to make sense of them.
1. An equation of degree n, with n > 0, has n complex roots. The fundamental theorem of algebra says nothing about how to actually find these roots.
2. Almost any problem in geometry can be solved algorithmically, i.e. there is an algorithm that determines whether a given geometric statement is true or false.
3. A continuous function attains its maximum and minimum values on a compact set (in n-dimensional Euclidean space that means a bounded and closed set; in R, for example, a closed segment. An open interval is not closed, and on the interval (0, 1) the function 1/x does not attain a maximum value).
That is, problems like these really do come up all the time.
So what's wrong with them?
With problem 1, there is nothing wrong at the current stage of human development; one can say that we already solve it effectively enough. Equations of degree less than 5 are solvable in radicals (i.e. exact values of the roots can be found). It has been proved, using Galois groups, that equations of degree greater than 4 are not solvable in radicals in general (specific equations are exhibited that cannot be solved this way). But sometimes we still need the roots of, say, a degree-100 equation. And that can already be done algorithmically: all the roots can be found to arbitrarily high accuracy. If you don't believe it, try wolframalpha.com.
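As a rough illustration (my own sketch in Python/NumPy, not from the original post): the roots of a polynomial of degree 100 with arbitrary coefficients can be computed numerically in a fraction of a second.

```python
import numpy as np

# A polynomial of degree 100 with arbitrary coefficients:
# coeffs[0]*x**100 + coeffs[1]*x**99 + ... + coeffs[100].
rng = np.random.default_rng(0)
coeffs = rng.standard_normal(101)

roots = np.roots(coeffs)   # eigenvalues of the companion matrix under the hood
print(len(roots))          # 100 complex roots, as the theorem promises

# Sanity check: the polynomial evaluated at the computed roots should be
# close to zero (up to floating-point error).
print(np.max(np.abs(np.polyval(coeffs, roots))))
```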
Problem 2 is a bit more interesting. Generally speaking, it is a consequence of the Tarski–Seidenberg theorem. The proof is simple enough that it can be explained even to schoolchildren who know how to take the derivative of a polynomial and do polynomial long division. In short and in simple terms, the essence of the theorem is that quantifiers ("there exists x", "for all x") can be eliminated from any statement about polynomials over the real numbers.
For example, take the statement "there exists x such that x^2 + px + q = 0". That is, we are asking when this quadratic equation has a root. We know the answer: when p^2 - 4q >= 0 (note that there is no quantifier here any more!). The point of the theorem is that one can write an algorithm that works this out for us, whatever the degree.
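A tiny numerical sanity check (again my own Python illustration): the quantifier-free condition p^2 - 4q >= 0 agrees with a direct "does a real root exist?" check.

```python
import numpy as np

def has_real_root(p, q):
    """The quantified statement: 'there exists x such that x^2 + p*x + q = 0'."""
    return any(abs(r.imag) < 1e-9 for r in np.roots([1.0, p, q]))

def quantifier_free(p, q):
    """The same statement after quantifier elimination: p^2 - 4q >= 0."""
    return p * p - 4 * q >= 0

# The two predicates agree (up to floating-point noise near the boundary).
rng = np.random.default_rng(1)
for p, q in rng.uniform(-10, 10, size=(1000, 2)):
    assert has_real_root(p, q) == quantifier_free(p, q)
```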
How does this relate to geometry? Directly: almost any geometric statement can be written down by introducing a coordinate system and hanging a bunch of quantifiers on it. The idea is that each time we "kill" a quantifier, we get rid of one variable. And when no variables are left, we end up with a statement of the form 5 > 3, about whose truth we already know everything.
Pff, so it turns out that geometry is a trivial science? There is an algorithm that decides whether any statement is true or false. Probably some mathematicians thought so, but it is not that simple. Even in the example with x^2 + px + q = 0, killing a single quantifier takes quite a few steps. To prove some meaningful geometric theorem this way would take time comparable to the age of the Earth.
But maybe we are just going about it the wrong way? In fact, it has been proved that no efficient algorithm for this problem exists.
Hurray, geometry is not trivial after all. Phew, that's sorted. Let's move on.
Problem 3: how it went
Here is a little backstory. Mathematics is a rather abstract science, so over the summer I decided to get a grip on finance and took an online course at coursera.org. Stocks were covered there quite thoroughly, and here is how it went. The instructor kept saying that there is always a trade-off: either you take a high risk for a high return, or a low risk for a low return. I wondered: is there a happy medium? Can you pick stocks from N companies so that you get both low risk and high return? It would be nice to write an algorithm that solves this problem. The formulation is clear; now the details.
To someone not versed in the subject it may seem that risk is linear and return is linear, so what is there to solve? In reality risk is computed in a more cunning way, because companies are correlated with one another. Fortunately, the total risk turns out to be a polynomial of degree two, but in N variables, where N is the number of companies.
Okay, how do we now tie risk and return together? If the risk is, say, 5% with a return of $100, is that better or worse than a 10% risk with a return of $150? Roughly speaking, you can take the average risk, say 10%, and the average return, say $200, and equate one percent of risk to $20 (200/10). Hmm, and what does that give us? Now suppose we have N companies with portfolio weights x1, x2, ..., xN. Then the risk is a polynomial in these weights (each monomial of degree at most two), and the return is linear in them. Subtract one from the other, multiplying by the correction factor (return with a "+", risk with a "-"). That is the final expression whose maximum we need to find. By the way, we also have the constraints x1 + x2 + ... + xN = 1, x1 >= 0, ..., xN >= 0.
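A minimal sketch of this objective in Python. The expected returns mu, the covariance matrix Sigma and the trade-off coefficient lam below are hypothetical numbers made up for illustration; only the shape of the formula comes from the text above.

```python
import numpy as np

# Hypothetical data for N = 3 companies (made-up numbers, for illustration only).
mu = np.array([0.10, 0.15, 0.20])        # expected returns: the linear part
Sigma = np.array([[0.05, 0.01, 0.00],    # covariance matrix: the degree-two
                  [0.01, 0.08, 0.02],    # part, since companies are correlated
                  [0.00, 0.02, 0.12]])
lam = 2.0                                # the "correction factor" between risk and return

def objective(x):
    """Return minus lambda times risk; to be maximized over the set
    x1 + ... + xN = 1, xi >= 0."""
    return mu @ x - lam * (x @ Sigma @ x)
```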
Now let's try to analyze the puzzle. As a mathematician, I immediately had the idea of looking at N-dimensional space: there we have a compact set given by x1 + x2 + ... + xN = 1, x1 >= 0, ..., xN >= 0, and a polynomial of degree two. Pff, everything is elementary, I thought. We know that on a compact set the function attains its maximum, and at that point either the gradient is zero or the maximum lies on the boundary, i.e. both cases have to be checked. With these thoughts I put the problem aside for about a month. But it would not leave me alone. Some free time turned up, along with an arbitrary example, paper and a pen. Oops: my solution takes on the order of N! (factorial) steps. Implemented on a computer, for 100 companies it would run far longer than the Solar System will exist. A brief explanation: when we check the boundary of the compact set, we go from the N-dimensional problem to N problems in (N-1)-dimensional spaces, and so on recursively. Awful, I thought, but I did not lose hope of solving the problem.
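To see where the factorial comes from, here is my rough count of the subproblems the naive approach visits, assuming faces reached along different recursion paths are not deduplicated (that is my reading of the argument above).

```python
def naive_subproblems(n):
    """Subproblems visited by the naive method on an n-dimensional face:
    one interior check plus a recursive check of each of its n facets."""
    if n == 0:
        return 1                       # a vertex: a single point to evaluate
    return 1 + n * naive_subproblems(n - 1)

# Grows roughly like n! (about e * n!):
for n in (5, 10, 20):
    print(n, naive_subproblems(n))
# For n around 100 this is on the order of 10**158 subproblems, hence
# "far longer than the Solar System will exist".
```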
After going around the smarter mathematicians and asking them, I learned about the method of Lagrange multipliers for inequality constraints (in class we had only covered the version for equality constraints). And it solves exactly this problem efficiently. You can read a bit about it on Wikipedia.
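For completeness, here is a hedged numerical sketch of the whole problem. Instead of writing out the Lagrange/KKT conditions by hand, it feeds the same hypothetical data to SciPy's general-purpose SLSQP solver, which handles the equality and inequality constraints for us.

```python
import numpy as np
from scipy.optimize import minimize

# The same hypothetical data as in the earlier sketch.
mu = np.array([0.10, 0.15, 0.20])
Sigma = np.array([[0.05, 0.01, 0.00],
                  [0.01, 0.08, 0.02],
                  [0.00, 0.02, 0.12]])
lam = 2.0
N = len(mu)

# Maximize mu @ x - lam * x @ Sigma @ x, i.e. minimize its negation,
# subject to sum(x) = 1 and x >= 0.
result = minimize(
    lambda x: -(mu @ x - lam * (x @ Sigma @ x)),
    x0=np.full(N, 1.0 / N),                 # start from equal weights
    method="SLSQP",
    bounds=[(0.0, 1.0)] * N,                # xi >= 0 (and xi <= 1)
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],
)
print(result.x, -result.fun)  # optimal weights and the objective value reached
```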
Don't stop at the existence of a solution, go all the way to the end. Good luck in the new year! :)
Thanks for reading!