
Parametric modeling in the SolveSpace CAD: "The ways of the Solver are inscrutable", or "Newton's Wormholes"

At first glance, applying dimensional constraints to a drawing looks no harder than an exercise from a school textbook. That is how it seemed to me when I first encountered the problem. At the time I worked at a company developing a software system for designing individual houses, complete with turnkey project documentation. On that project I developed an algorithm for generating multi-sloped roofs, and later the entire geometric kernel based on Boolean operations, so I followed the rest of this story from the sidelines. At some point the customer decided that designers should simply specify room sizes, bay-window angles, and doorway widths, and the program would automatically compute every other parameter of the house's exterior and interior design. The idea came to the customer spontaneously, so it had to be done urgently, "just like in CATIA". Our team lead took up the task with enthusiasm and began building a prototype. He solved hundreds of equations in MathCAD; the whole office was littered with plots of particular solutions for two, three, four points... His initial assumption, that the problem could be solved analytically, failed. It was 2005, and that meant it was impossible to find any information on the topic. After two months of intense research, the functionality had to be dropped.
Part 1: Introduction
Part 2: Sketch
Part 3: Degrees of freedom and constraint equations
Part 4: "The ways of the Solver are inscrutable" or "Newton's Wormholes"


Newton's method


This story ended sadly, because the problem of solving geometric constraints in the general case reduces to solving a system of nonlinear equations, which modern mathematics does not know how to do in analytical form. So, in today's reality, all that remains is to use the method named after the great genius of apple-related insights: Newton's method. It finds a zero (root) of a nonlinear function (equation) iteratively, with quadratic convergence, meaning the number of correct significant digits roughly doubles with each iteration. In simple terms, a zero of the function is computed to high accuracy in relatively few iterations; in most practical cases, 5 to 12 iterations suffice.
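As a minimal illustration (not SolveSpace's code), the one-dimensional Newton iteration can be sketched in a few lines of Python:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Find a root of f near x0 by Newton's method.

    Each step replaces x with x - f(x)/df(x); near a simple root,
    the number of correct digits roughly doubles per iteration.
    """
    x = x0
    for i in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, i
        x -= fx / df(x)
    raise RuntimeError("Newton's method did not converge")

# Example: the root of x^2 - 2 (i.e. sqrt(2)), starting from x0 = 1.
root, iterations = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Starting from 1.0, the iterates reach sqrt(2) to full double precision in a handful of steps, matching the "5 to 12 iterations" rule of thumb above.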


Being forced to apply Newton's method makes the situation more dramatic: the method does not guarantee convergence even when solutions exist, and success depends mainly on the initial values of the parameters at the start. Moreover, it cannot find all the zeros of a function or all the roots of an equation: the iteration converges to just one root. The design engineer therefore still has to draw a sketch that is reasonably close to the final result. If the initial approximation of the parameters is too far from the solution, Newton's method may fail to converge, or converge to an unexpected solution.



Illustration of the divergence of Newton's method applied to the function f(x) = x^3 - 2x + 2. Take zero as the initial approximation: the first iteration yields one, and the second yields zero again. The method loops, and no solution is found. In general, the sequence of approximations can behave very erratically.
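The cycle described in the caption is easy to reproduce (an illustrative snippet, just to demonstrate the looping behavior):

```python
def newton_step(x):
    # One Newton step for f(x) = x**3 - 2*x + 2, f'(x) = 3*x**2 - 2.
    f = x**3 - 2 * x + 2
    df = 3 * x**2 - 2
    return x - f / df

# Starting from x0 = 0, the iterates oscillate between 0 and 1 forever:
# f(0) = 2, f'(0) = -2  ->  x1 = 0 - 2/(-2) = 1
# f(1) = 1, f'(1) =  1  ->  x2 = 1 - 1/1    = 0
xs = [0.0]
for _ in range(6):
    xs.append(newton_step(xs[-1]))
# xs == [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

The residual never shrinks, so no stopping criterion based on |f(x)| is ever met.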


In view of the above, Newton's method can be treated as a magic black box: it gives us an answer when it finds one, but cannot tell us whether a solution exists when it fails to converge. The ways of the solver are inscrutable, as can be seen in the following images of the path to a solution (the trace of every point in the drawing is visualized; point coordinates are recorded at each step of Newton's method):




Newton's Wormholes


How to defeat the "Worms of Doubt"?


Improving convergence (problem decomposition)


To improve the method's convergence and speed, clever decomposition techniques are usually applied to the original problem. By analyzing the degrees of freedom and the equations, the drawing can be split into independent groups, so that these parts are solved separately; then, with the already-computed parameter values fixed, the full system of equations is solved. Decomposition greatly improves both convergence and speed, because instead of solving the entire system of equations, we solve several small ones. Since the algorithmic complexity of Newton's method is polynomial, solving the system "piece by piece" yields substantial gains. But implementing such a decomposition requires an algorithm tied to the geometric interpretation of the original equations, that is, one that operates in terms of the subject domain, and this limits the solver's scope to geometric problems only. For example, a decomposition designed for the two-dimensional case will not work well in three dimensions; most likely, that problem must be solved separately.
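The payoff can be sketched with a toy example: two independent subsystems solved separately give the same answer as the combined block-diagonal system, at lower cost (two O((n/2)^3) solves instead of one O(n^3) solve). The tiny Gaussian-elimination solver below is purely illustrative, not SolveSpace's:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting, O(n^3)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

# Two independent 2x2 subsystems (think: two unconnected parts of a sketch).
A1, b1 = [[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]
A2, b2 = [[4.0, 0.0], [1.0, 1.0]], [8.0, 3.0]

# Solving them separately costs 2 * O(2^3) instead of O(4^3) for the
# combined block-diagonal system -- and gives the same answer.
x1 = gauss_solve(A1, b1)  # -> [1.0, 1.0]
x2 = gauss_solve(A2, b2)  # -> [2.0, 1.0]
```

The hard part in practice is not the arithmetic but detecting which parts of the sketch are actually independent.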


Improving accuracy (conditioning of the system of equations)


Newton's method for systems of nonlinear equations is built on linearization. The system is solved step by step, and at each step the nonlinear functions are approximated by linear ones, so that instead of a system of nonlinear equations we solve a system of linear equations. As a result, the convergence of Newton's method depends on how accurately this linear system is solved: the more accurate the solution at each step, the faster and more stably Newton's method converges, and the fewer other problems arise that prevent a solution from being found.
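A sketch of this linearization loop for a system of two constraint equations (illustrative Python; a hard-coded 2x2 Cramer's-rule solve stands in for a general linear solver). At each step we solve J(x) dx = -F(x) and update x:

```python
def solve2(A, b):
    # Direct 2x2 solve by Cramer's rule (stand-in for a real linear solver).
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def newton_system(F, J, x, solve, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0: at each step, linearize and solve
    the linear system J(x) dx = -F(x), then update x <- x + dx."""
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            return x
        dx = solve(J(x), [-v for v in Fx])
        x = [xi + di for xi, di in zip(x, dx)]
    raise RuntimeError("did not converge")

# Two constraints on a point (x, y): it lies on the circle x^2 + y^2 = 4
# and on the line y = x.  Solution: (sqrt(2), sqrt(2)).
F = lambda p: [p[0] ** 2 + p[1] ** 2 - 4, p[0] - p[1]]
J = lambda p: [[2 * p[0], 2 * p[1]], [1.0, -1.0]]

root = newton_system(F, J, [1.0, 0.5], solve2)
# root is close to (sqrt(2), sqrt(2))
```

Every inaccuracy in the inner linear solve feeds directly into the quality of the step dx, which is why the conditioning of that linear system matters so much.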


In mathematics there is the concept of the conditioning of a system of equations. In simple terms, it is a number that tells us how accurate (reliable) the solution of the system will be. As an example, take the simplest equation, which in the abstract world of mathematics can be regarded as a degenerate case of a system of equations:


ax + b = 0.


Its solution is:


x = -b / a


In geometric terms, ax + b is the equation of a line, and its intersection with the x-axis is the solution of the equation. Now imagine that the input data (a, b) for this equation are given with limited accuracy. The smaller the value of a, that is, the closer the line is to being parallel to the x-axis, the greater the ambiguity of the solution: small relative changes in a produce large changes in x. In a system of many equations, such errors can accumulate during the solution, and the result will resemble a weather report from a distant planet more than the correct solution of a system of linear equations. It is therefore very important to monitor the condition number of the system of equations and to take measures to bring the system into a form that yields the best accuracy.
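This sensitivity is easy to demonstrate numerically (an illustrative snippet; the numbers are arbitrary):

```python
# The root of a*x + b = 0 is x = -b/a.  For small a (a nearly horizontal
# line), a tiny relative perturbation of a swings the root enormously.
def root(a, b):
    return -b / a

a, b = 1e-6, 1.0
x_exact = root(a, b)                # -1_000_000.0
x_perturbed = root(a * 1.001, b)    # a changed by only 0.1% ...
shift = abs(x_perturbed - x_exact)  # ... and the root moved by ~999 units

# Compare with a well-conditioned case: the same 0.1% perturbation of a = 1
# moves the root by only ~0.001.
well = abs(root(1.001, b) - root(1.0, b))
```

The same 0.1% input error produces answers that differ by six orders of magnitude in absolute terms, which is exactly what a large condition number warns about.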


An example of where this problem can show up is inconsistency of scale between equations. Using geometric constraints, we can define a triangle by fixing the lengths of two sides and the angle between them. The length constraint equations are written in millimeters, while the angle constraint is expressed, say, through equality of cosines. The mismatch between these magnitudes can trigger various loss-of-accuracy scenarios: for small triangles, with sides on the order of millimeters, we get fairly good accuracy, but when we solve the same equations with lengths on the order of kilometers, the relative accuracy of the result is completely different. This happens because the lengths vary enormously, while the cosines of the angles stay within a fixed range of values. I believe one way to address this inconsistency could be to formulate the system of equations in a homogeneous coordinate system, using methods of projective geometry, but this is only a hunch, given my very modest knowledge of projective geometry in particular and of mathematics as a whole. It is possible that this topic has already been explored, but it is also possible that someone could get a couple of dissertations out of it.
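A toy illustration of the scaling problem (the specific numbers are made up; only the orders of magnitude matter):

```python
import math

# Residuals of two mixed constraints on a triangle side of length L:
#   r_len = |measured - L|    (units of length, grows with the drawing)
#   r_cos = |cos(theta) - c|  (dimensionless, always bounded by 2)
def residuals(L, rel_err, theta, c):
    return abs(L * rel_err), abs(math.cos(theta) - c)

# Same 0.1% relative length error, same small angle error (~1e-4):
r_len_mm, r_cos = residuals(1.0, 1e-3, math.pi / 3, 0.5001)   # side ~ 1 mm
r_len_km, _ = residuals(1e6, 1e-3, math.pi / 3, 0.5001)        # side ~ 1 km

# r_len_mm ~ 1e-3 is comparable to r_cos ~ 1e-4, but r_len_km ~ 1e3
# dwarfs it: with kilometre-scale lengths, the mixed system is badly
# scaled, and a uniform stopping tolerance treats the two constraints
# very unequally.
```

Rescaling the equations (or the coordinates, as homogeneous coordinates would) brings the residuals back to comparable magnitudes.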


Improving performance


The algorithmic complexity of solving a system of nonlinear equations by Newton's method is mainly determined by the cost of solving the system of linear equations. For example, according to Jonathan Westhues, the author of SolveSpace, the original SolveSpace uses Gaussian elimination, with algorithmic complexity O(n ^ 3), as its linear solver. That is not very fast if you suddenly want to solve large sketches containing thousands of equations and unknowns. But given that the matrix of the linear system is almost entirely zeros, that is, very sparse, algorithms with lower algorithmic complexity can be used.
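As a sketch of why structure helps: a tridiagonal system, one particularly simple kind of sparse matrix, can be solved in O(n) by the Thomas algorithm instead of O(n^3) dense elimination. (SolveSpace's constraint matrices are not tridiagonal; this only illustrates the principle of exploiting sparsity.)

```python
def thomas_solve(lower, diag, upper, b):
    """Solve a tridiagonal system in O(n) by forward elimination and
    back substitution (vs O(n^3) for dense Gaussian elimination)."""
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = upper[0] / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i] * c[i - 1]
        c[i] = (upper[i] / m) if i < n - 1 else 0.0
        d[i] = (b[i] - lower[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in reversed(range(n - 1)):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Example: -x[i-1] + 4*x[i] - x[i+1] = 1 for 1000 unknowns.
n = 1000
lower = [0.0] + [-1.0] * (n - 1)
upper = [-1.0] * (n - 1) + [0.0]
diag = [4.0] * n
b = [1.0] * n
x = thomas_solve(lower, diag, upper, b)
```

For general sparse matrices, the same idea appears as fill-reducing orderings and sparse direct or iterative solvers.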


An additional advantage of writing the equations in symbolic form is the possibility of preliminary symbolic optimization. One such operation, for example, is collecting like terms. But a serious performance gain can be achieved by solving some of the equations symbolically, by substitution. In SolveSpace, symbolic substitution handles constraints such as point coincidence and horizontality/verticality. This significantly reduces the number of equations and unknowns fed into Newton's method. You can learn more about how the symbolic expressions are organized in this video.
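A miniature of the substitution idea (a hypothetical sketch, not SolveSpace's actual data structures): instead of adding the equations p2.x - p1.x = 0 and p2.y - p1.y = 0 to the Newton system, a coincidence constraint maps one point's parameters onto the other's, removing two unknowns and two equations before the numeric solver ever runs.

```python
# param name -> parameter slot (four unknowns before substitution).
params = {"p1.x": 0, "p1.y": 1, "p2.x": 2, "p2.y": 3}

# A "p2 coincident with p1" constraint, solved symbolically:
substitution = {"p2.x": "p1.x", "p2.y": "p1.y"}

def resolve(name):
    # Follow substitution chains to the representative parameter.
    while name in substitution:
        name = substitution[name]
    return name

remaining = sorted({resolve(p) for p in params})
# remaining == ["p1.x", "p1.y"]: four unknowns collapsed to two,
# and the two coincidence equations disappear entirely.
```

On real sketches, where most constraints are coincidences and horizontal/vertical conditions, this pass shrinks the system dramatically.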


Conclusion


Although the topic of solving systems of nonlinear equations seems simple at first, it hides many pitfalls and fundamental problems. I believe there is still work to be done in this area, since, beyond geometric applications, solving systems of nonlinear equations is useful in many fields: from cost-optimization problems to artificial-intelligence systems.





Source: https://habr.com/ru/post/335962/

