
Symbolic regression

When solving a problem with machine learning methods, we usually choose the algorithm best suited to the problem, along with a way of tuning its parameters.

Let's consider a slightly different approach: instead of choosing an algorithm ourselves, we will write a program that can automatically generate algorithms for solving problems.


A stochastic method of designing programs based on a genetic algorithm is called genetic programming.
A few words about the genetic algorithm

A genetic algorithm is an optimization method based on the idea of natural selection as a means of achieving the best result. Its generalized formulation makes it applicable to a wide variety of problems. All you need is:

  1. a way to encode candidate solutions of the problem as chromosomes, which are able to mutate and crossbreed
  2. a fitness function, which calculates how close to optimal the solution encoded by a particular chromosome is

Given an initial population of chromosomes and a fitness function, the genetic algorithm can then find chromosomes that correspond to optimal solutions of the problem.

Of course, the genetic algorithm has certain drawbacks related to its stochastic nature, but from my own experience I can say that it finds acceptable solutions within reasonable time and resource constraints.
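
To make this concrete, here is a minimal sketch of a generic genetic algorithm loop in Java. All names and interfaces here are illustrative assumptions of mine, not the actual implementation discussed below:

    import java.util.*;
    import java.util.function.ToDoubleFunction;

    // A chromosome knows how to mutate and how to crossbreed with another one.
    interface Chromosome<C extends Chromosome<C>> {
        C mutate(Random rnd);
        C crossover(C other, Random rnd);
    }

    class GeneticAlgorithm {
        // Evolve the population for a fixed number of iterations;
        // lower fitness means a better solution here.
        static <C extends Chromosome<C>> C evolve(List<C> population,
                ToDoubleFunction<C> fitness, int iterations, Random rnd) {
            int size = population.size();
            for (int i = 0; i < iterations; i++) {
                List<C> next = new ArrayList<>(population);
                for (C c : population) {
                    next.add(c.mutate(rnd));
                    next.add(c.crossover(population.get(rnd.nextInt(size)), rnd));
                }
                next.sort(Comparator.comparingDouble(fitness));
                population = new ArrayList<>(next.subList(0, size)); // survival of the fittest
            }
            return population.get(0); // the best chromosome found
        }
    }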

Genetic programming and symbolic regression


I propose to look at the concept of genetic programming in more detail using the example of symbolic regression: the problem of finding a formula that describes a given relationship. More formally, it can be stated as the construction of a regression model in the form of a superposition of given functions.

A fairly convenient way to represent a program is a syntax tree. In our case, the role of "programs" is played by algebraic formulas: leaf nodes correspond to variables or numeric constants, and non-leaf nodes contain an operation that is applied to the child nodes.
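
For example, a node of such a tree could be represented as follows. This is a simplified sketch under my own naming assumptions, not the data structure of the actual implementation:

    import java.util.Map;

    // Leaf nodes hold a variable name or a numeric constant;
    // non-leaf nodes hold an operation applied to the child subtrees.
    enum Op { ADD, SUB, MUL, SIN }

    class Node {
        Op op;            // null for leaf nodes
        String variable;  // non-null for variable leaves, e.g. "x"
        double constant;  // used when the leaf is a numeric constant
        Node left, right; // child subtrees (right is unused for unary SIN)

        double eval(Map<String, Double> env) {
            if (variable != null) return env.get(variable);
            if (op == null) return constant;
            switch (op) {
                case ADD: return left.eval(env) + right.eval(env);
                case SUB: return left.eval(env) - right.eval(env);
                case MUL: return left.eval(env) * right.eval(env);
                case SIN: return Math.sin(left.eval(env));
                default:  throw new IllegalStateException();
            }
        }
    }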

Example syntax tree:


It should be noted that for each syntax tree there are infinitely many semantically equivalent trees, for example:


We will use syntactic trees as chromosomes.

Genetic operations on syntax trees

Crossover can be performed by exchanging randomly selected subtrees:
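
A sketch of such a subtree-exchange crossover, reusing the hypothetical Node class from above (the parents are assumed to be copies, since the operation modifies them in place):

    import java.util.*;

    class Crossover {
        // Exchange randomly selected subtrees between two trees.
        static void crossover(Node a, Node b, Random rnd) {
            Node na = randomNode(a, rnd);
            Node nb = randomNode(b, rnd);
            Node tmp = new Node();
            copyFields(na, tmp);   // swapping the nodes' fields effectively
            copyFields(nb, na);    // swaps the subtrees rooted at them
            copyFields(tmp, nb);
        }

        static Node randomNode(Node root, Random rnd) {
            List<Node> nodes = new ArrayList<>();
            collect(root, nodes);
            return nodes.get(rnd.nextInt(nodes.size()));
        }

        static void collect(Node n, List<Node> out) {
            if (n == null) return;
            out.add(n);
            collect(n.left, out);
            collect(n.right, out);
        }

        static void copyFields(Node from, Node to) {
            to.op = from.op;
            to.variable = from.variable;
            to.constant = from.constant;
            to.left = from.left;
            to.right = from.right;
        }
    }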


Many different kinds of mutation can be devised; in my implementation I settled on the following five (a sketch of one of them follows the list):

  1. Replacing a node's function with a randomly selected different one:


  2. Replacing a subtree with a randomly generated subtree:


  3. Deleting an intermediate (non-leaf) node:


  4. Creating a random node that becomes the new root of the tree:


  5. For nodes with non-commutative operations, swapping the child subtrees:
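
As an illustration, here is a sketch of mutation 2 (replacing a subtree with a randomly generated one), again in terms of the hypothetical Node class:

    import java.util.Random;

    class Mutation {
        // Mutation 2: overwrite a random node with a fresh random subtree.
        static void mutate(Node root, Random rnd) {
            Node target = Crossover.randomNode(root, rnd);
            Node fresh = randomTree(2, rnd);
            Crossover.copyFields(fresh, target);
        }

        // Generate a random tree of limited depth over {x} and constants.
        static Node randomTree(int maxDepth, Random rnd) {
            Node n = new Node();
            if (maxDepth == 0 || rnd.nextBoolean()) {   // make a leaf
                if (rnd.nextBoolean()) n.variable = "x";
                else n.constant = rnd.nextGaussian() * 10;
            } else {                                    // make an inner node
                Op[] ops = Op.values();
                n.op = ops[rnd.nextInt(ops.length)];
                n.left = randomTree(maxDepth - 1, rnd);
                n.right = randomTree(maxDepth - 1, rnd);
            }
            return n;
        }
    }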



The fitness function of a syntax tree measures how well the corresponding algebraic formula approximates the data; in fact, it is the sum of squared deviations of the generated function's values from the training data.
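
In code, such a fitness function could look like this (a sketch; training examples are assumed to be maps from variable names to values plus the expected output):

    import java.util.*;

    class Fitness {
        // Sum of squared deviations from the training data; lower is better.
        static double fitness(Node tree, List<Map<String, Double>> inputs, List<Double> targets) {
            double sum = 0;
            for (int i = 0; i < inputs.size(); i++) {
                double diff = tree.eval(inputs.get(i)) - targets.get(i);
                sum += diff * diff;
            }
            return sum;
        }
    }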

The initial population is a collection of randomly generated syntax trees.

Optimization

In principle, the described crossover and mutation operations are sufficient to find a solution. But, from my experience, reaching a more or less acceptable solution this way takes quite a long time. Therefore, I will describe several optimizations that speed up the search for solutions.

  1. Let's consider two formulas created in the process of finding solutions (the red crosses are training data):

    It is easy to see that the blue straight line is, on the whole, quite close to the training data, unlike the green parabola, and therefore it has a smaller sum of squared deviations on the training sample.

    But, on the other hand, it is clear to the naked eye that the parabola is exactly what we need.

    It's all about the coefficients. As you may have guessed, the coefficients of each tree are optimized. And to optimize the coefficients, of course, a genetic algorithm is used!

    In this case, the chromosome is an array of numbers:


  2. Since we work only with real numbers, it is convenient to operate with functions whose range of values is a subset of their domain, for example:

    +, -, sin(x), cos(x), ln(abs(x) + 0.00001), sqrt(abs(x)), etc.

    Thus, any tree produced by the genetic operations is guaranteed to be a syntax tree that evaluates without errors.

  3. There are cases in which a subtree contains only numeric constants in its leaf nodes. Such a subtree can be evaluated once, and the syntax tree simplified by replacing it with the result (see the sketch after this list):


  4. Another optimization is limiting the maximum depth of the tree (of course, this restricts the search space of solutions, but the need for it is dictated by practical considerations):


  5. To avoid a situation where all the trees in the population become similar to each other (that is, the genetic algorithm gets stuck in a local minimum), there is a non-zero probability of completely replacing one of the trees with a randomly generated one.
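
Here is a sketch of the constant-folding simplification from item 3, once more in terms of the hypothetical Node class: any subtree without variables is evaluated once and collapsed into a constant leaf.

    import java.util.Collections;

    class Simplifier {
        // Recursively collapse variable-free subtrees into constant leaves.
        static void fold(Node n) {
            if (n == null || n.op == null) return; // leaves need no folding
            fold(n.left);
            fold(n.right);
            if (!containsVariable(n)) {
                n.constant = n.eval(Collections.emptyMap()); // evaluate once
                n.op = null;
                n.left = null;
                n.right = null;
            }
        }

        static boolean containsVariable(Node n) {
            if (n == null) return false;
            return n.variable != null
                || containsVariable(n.left)
                || containsVariable(n.right);
        }
    }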


Enough theory, let's see how it works in practice.


Let's try to find a formula that satisfies the following data:

    x  | y  | z   | f(x, y, z)
    26 | 35 | 1   | 830
    8  | 24 | -11 | 130
    20 | 1  | 10  | 477
    33 | 11 | 2   | 1217
    37 | 16 | 7   | 1524


The desired formula: f(x, y, z) = x² + 3x + 2y + z + 5



After 111 iterations we get the formula:

    f(x, y, z) = ((((x * (3.0172293562767245 + x)) + y) - ((y * (-0.9842629202499209)) - 4.912061414835644)) + z)

After transformations it turns into a formula that almost coincides with the required one:

    f(x, y, z) ≈ x² + 3.017x + 1.984y + z + 4.912

Below is the dynamics of how the solution takes shape.



The entire log of the calculation results:

    Population | Sum of squared deviations | Best formula
    0   | 1669012.26766007   | f(x, y, z) = 544.6859000804278
    1   | 169681.53834698    | f(x, y, z) = (x * (28.961135980737247 + z))
    2   | 148288.46850717001 | f(x, y, z) = (x * (31.37500136808971 + z))
    3   | 132767.659539771   | f(x, y, z) = (x * (52.43484580560726 + (-16.667350145606648)))
    4   | 2985.09234959275   | f(x, y, z) = (x * (4.433746792798393 + x))
    5   | 2983.6495099344102 | f(x, y, z) = (x * (4.4641060233894825 + x))
    6   | 2983.6495099344102
    7   | 2983.3906931086399 | f(x, y, z) = (x * (4.454265913806434 + x))
    8   | 2983.3906931086399
    9   | 2983.3904005863701 | f(x, y, z) = (x * (4.454298469272653 + x))
    10  | 597.81897366597798 | f(x, y, z) = ((x * (3.844350383063788 + x)) + y)
    11  | 594.94169424631298 | f(x, y, z) = ((x * (3.8969889609817003 + x)) + y)
    12  | 594.94169424631298
    13  | 593.73658852479002 | f(x, y, z) = ((x * (3.882218261498 + x)) + y)
    14  | 465.83708126862598 | f(x, y, z) = (((x * (4.005216664324722 + x)) + y) - z)
    15  | 465.83708126862598
    16  | 465.83708126862598
    17  | 465.83708126862598
    18  | 465.83015734565402 | f(x, y, z) = (((x * (4.005911869833458 + x)) + y) - z)
    19  | 465.83015734565402
    20  | 465.83015734565402
    21  | 415.16018053718898 | f(x, y, z) = (((x * (3.5125714437530116 + x)) + y) - (-11.692166173605955))
    22  | 415.16018053718898
    23  | 395.52399773897002 | f(x, y, z) = (((x * (3.382514048684854 + x)) + y) - (-14.647402701410202))
    24  | 395.07066142434297 | f(x, y, z) = (((x * (3.3745415764367164 + x)) + y) - (-14.718709944856545))
    25  | 394.26327346841998 | f(x, y, z) = (((x * (3.3648511983617673 + x)) + y) - (-15.239602399729787))
    26  | 392.88638158772301 | f(x, y, z) = (((x * (3.337209019532033 + x)) + y) - (-15.802043204356028))
    27  | 392.32411284414297 | f(x, y, z) = (((x * (3.3064294470317237 + x)) + y) - (-16.587523294477112))
    28  | 392.32411284414297
    29  | 392.32411284414297
    30  | 201.31809082052899 | f(x, y, z) = ((((x * (3.1436791342095485 + x)) + y) - (-3.592869973372564)) + y)
    31  | 164.61977693577799 | f(x, y, z) = ((((x * (3.284782293612155 + x)) + y) - 0.2814995918777923) + y)
    32  | 149.279581721206   | f(x, y, z) = ((((x * (3.428687042834285 + x)) + y) - 3.8492278595932117) + y)
    33  | 149.278346415192   | f(x, y, z) = ((((x * (3.428687042834285 + x)) + y) - 3.8397689886934776) + y)
    34  | 149.278346415192
    35  | 148.94429071530399 | f(x, y, z) = ((((x * (3.4733961096640367 + x)) + y) - 4.871582132520638) + y)
    36  | 148.94429071530399
    37  | 148.92149057538799 | f(x, y, z) = ((((x * (3.4733961096640367 + x)) + y) - 4.927910435311452) + y)
    38  | 148.842647569717   | f(x, y, z) = ((((x * (3.4667603760518504 + x)) + y) - 4.761715096087028) + y)
    39  | 148.842126194348   | f(x, y, z) = ((((x * (3.4667603760518504 + x)) + y) - 4.766164660321495) + y)
    40  | 148.83482158482201 | f(x, y, z) = ((((x * (3.464357566836493 + x)) + y) - 4.761715096087028) + y)
    41  | 148.83482158482201
    42  | 148.83482158482201
    43  | 148.83482158482201
    44  | 148.83482158482201
    45  | 148.824714357071   | f(x, y, z) = ((((x * (3.464357566836493 + x)) + y) - 4.723957086860974) + y)
    46  | 148.824474980844   | f(x, y, z) = ((((x * (3.464357566836493 + x)) + y) - 4.719858152218609) + y)
    47  | 148.824474980844
    48  | 148.824474980844
    49  | 148.824474980844
    50  | 148.82441313535    | f(x, y, z) = ((((x * (3.464357566836493 + x)) + y) - 4.717481429398491) + y)
    51  | 148.82441154759999 | f(x, y, z) = ((((x * (3.464357566836493 + x)) + y) - 4.717364268937347) + y)
    52  | 148.82441154759999
    53  | 148.82441154759999
    54  | 148.82441154759999
    55  | 148.82441154759999
    56  | 148.82441154759999
    57  | 148.82441154759999
    58  | 148.824403242995   | f(x, y, z) = ((((x * (3.464357566836493 + x)) + y) - 4.715925249764239) + y)
    59  | 148.824403242995
    60  | 148.824403242995
    61  | 148.824403242995
    62  | 148.824403242995
    63  | 148.824403242995
    64  | 148.824403242995
    65  | 148.824403242995
    66  | 148.824403242995
    67  | 148.824403242995
    68  | 148.824403242995
    69  | 148.824403242995
    70  | 148.824403242995
    71  | 148.824403242995
    72  | 148.824403242995
    73  | 148.824403242995
    74  | 148.824403242995
    75  | 148.824403242995
    76  | 148.824403242995
    77  | 148.824403242995
    78  | 148.824403242995
    79  | 148.824403242995
    80  | 148.824403242995
    81  | 148.824403242995
    82  | 148.824403242995
    83  | 148.824403242995
    84  | 148.824403242995
    85  | 148.824403242995
    86  | 148.824403242995
    87  | 148.824403242995
    88  | 148.824403242995
    89  | 148.824403242995
    90  | 148.824403242995
    91  | 148.824403242995
    92  | 148.824403242995
    93  | 148.824403242995
    94  | 148.824403242995
    95  | 148.824403242995
    96  | 148.824403242995
    97  | 148.824403242995
    98  | 148.824403242995
    99  | 148.824403242995
    100 | 109.738448314378   | f(x, y, z) = ((((x * (3.6304822654527666 + x)) + y) - ((y * 0.26890083188968594) - (-4.125304601214528))) + y)
    101 | 109.738448314378
    102 | 86.7213316804214   | f(x, y, z) = ((((x * (3.454146360987013 + x)) + y) - ((y * 0.26890083188968594) - 0.31706243101113074)) + y)
    103 | 86.077603952275396 | f(x, y, z) = ((((x * (3.4532441079663054 + x)) + y) - ((y * 0.2821822285682245) - 0.5330637639727107)) + y)
    104 | 84.787024859776594 | f(x, y, z) = ((((x * (3.418583927986026 + x)) + y) - ((y * 0.30109799837668216) - 1.6913597460963947)) + y)
    105 | 84.787024859776594
    106 | 19.436528477129301 | f(x, y, z) = ((((x * (3.1298089197766688 + x)) + y) - ((y * (-1.1430183282239135)) - (-0.7601011336523038))) + z)
    107 | 9.2899337994931397 | f(x, y, z) = ((((x * (3.1298089197766688 + x)) + y) - ((y * (-0.9847117571207198)) - 2.0456442176615)) + z)
    108 | 9.2899337994931397
    109 | 8.5880221046539802 | f(x, y, z) = ((((x * (3.1002105039045555 + x)) + y) - ((y * (-0.9847117571207198)) - 2.0456442176615)) + z)
    110 | 8.5880221046539802
    111 | 0.38510898634568402 | f(x, y, z) = ((((x * (3.0172293562767245 + x)) + y) - ((y * (-0.9842629202499209)) - 4.912061414835644)) + z)



It is interesting to follow the stages of solution formation:


One more example

Sometimes the resulting formulas are not so obvious. For example, once, while trying to recover a certain formula, I got the following result:



But if you take advantage of the identity:



It can be shown that the resulting formula is correct:



If you want to experiment

Console utility

  1. You must have a Java runtime installed.
  2. Download the file symbolic_regression_1.0.jar from here.
  3. In the same directory where symbolic_regression_1.0.jar is located, create a file (for example, data.txt) with the following content:
    # allowed functions are: ADD SUB MUL DIV SQRT POW LN SIN COS
    # set which functions to use:
    ADD MUL SUB

    # looking for:
    f (x, y, z) -?

    # define training set:
    f (26, 35, 1) = 830
    f (8, 24, -11) = 130
    f (20, 1, 10) = 477
    f (33, 11, 2) = 1217
    f (37, 16, 7) = 1524

  4. Run a search for a solution:

    java -jar symbolic_regression_1.0.jar data.txt

    The solution is considered found when the sum of squared deviations is less than 10. If a solution has not been found yet, then after every 50 iterations you will be asked whether you want to continue.

For Java programmers

You can find the source code here: github.com/lagodiuk/genetic-programming

Instead of a conclusion

On the rather limited set of formulas that I managed to test, good results were obtained; there were also successful experiments in finding antiderivatives and derivatives.

Genetic programming can also be applied quite successfully to the automatic construction of neural networks (here is a small demo application in which I implemented this approach).

Another idea is to apply a similar approach (together with QSAR techniques) to automatically construct the structures of chemical compounds with desired properties.

Literature

On the web you can find a lot of material on the subject of genetic programming.
Separately, I want to note the following sources:
  1. Toby Segaran, Programming Collective Intelligence
  2. The site of the creator of the concept of genetic programming

Source: https://habr.com/ru/post/163195/

