
Source code of a neural network library in F# for the .NET Framework

Periodically I receive requests for the source code of the neural networks used in my work on sentiment analysis and text generation, as well as in articles on Habr. I have therefore decided to make it all publicly available, together with the library it relies on, despite the rather rough state of the code. In this article I will describe where to get it, what it can do, and a little about how to use it. The library is written in F#, but it can be used from any .NET language.

Implementation
I wrote the neural network library in F#. Why F#? In short, F# as a language has the advantages of the now-popular Python (simple syntax, the ability to run scripts interactively, cross-platform support) and is very well suited for rapid prototyping. But the essential difference, for me, is static strong typing. F# has automatic type inference: the compiler can determine the types of variables at compile time without explicit declarations. I used to use Python, and I can say that for me personally, switching to F# saved a lot of sleepless debugging nights. Python has its advantages, of course, but for me they did not outweigh the disadvantages.

There is one problem: while today you can choose Python neural network libraries for every taste and color, there are far fewer for F#. And at the time of development the situation was even worse: there were a few libraries for the .NET Framework, such as Encog, but new architectures could not be implemented in them without significant changes. So I wrote my own from scratch. This is, in any case, a very useful exercise for anyone who wants to take the subject seriously, because it gives a deeper understanding of the basic principles.

Features
To date:

Drawbacks:

I really have no time to bring everything into proper shape, to write and maintain beautiful, up-to-date documentation, and so on. There is, of course, a faint hope that someone will take an interest in this project and help.

What can it be useful for? As a simple neural network library for integrating functionality into .NET applications (I recently used it to bring a trained model from Keras into a .NET project). For experiments. For those who want to use F#. For educational purposes: all the code is implemented from scratch in F#, with nothing hidden away in other libraries. And for everyone who has asked me for the source code.

The tool is suitable for extracting terms from text, classifying texts, and also for generating text (although this last example is not included in the set, the necessary tools are in the library).

Where to get it?
https://github.com/Durham/NeuThink

How to start using it?
Here is an example implementing a simple neural network for the XOR function in F#:

open NeuThink.Neuron
open NeuThink.NeuronTraining

let () =
  let nn = new GeneralNetwork()
  nn.AddPerceptronLayer(4, 2, [|1|], true, 0)
  nn.AddPerceptronLayer(1, 4, [|-1|], false, 1)
  nn.FinalizeNet()
  let outputs = [|[|-1.0|]; [|-1.0|]; [|1.0|]; [|1.0|]|]
  let inputs = [|[|1.0; 1.0|]; [|0.0; 0.0|]; [|1.0; 0.0|]; [|0.0; 1.0|]|]
  MomentumSGD 100 nn (new NeuThink.DataSources.SimpleProvider(inputs)) (new NeuThink.DataSources.SimpleProvider(outputs)) 0.2 (Some([|0;0;0;0|])) None
  nn.SetInput([|1.0; 1.0|])
  System.Console.WriteLine(nn.Compute().[0])


This creates a neural network with one hidden layer of 4 neurons and one output layer of 1 neuron. In fact, for XOR, 2 neurons in the hidden layer are enough, but such a minimal network often gets stuck in a local minimum during training. Instead of 0 and 1, -1 and 1 are used for the XOR output, because by default the activation function in all layers is tanh, whose output range is from -1 to 1.

 nn.AddPerceptronLayer(4,2,[|1|],true,0) 

adds one fully connected layer (also called a Dense or MLP layer). 4 is the number of neurons in the layer, 2 is the number of inputs. [|1|] is an array of indices of the layers this layer feeds into (inconvenient, yes, but it works and lets you define an arbitrary graph). If the layer's output does not need to go anywhere, you specify -1 (why not an empty array? That is just how it turned out historically...). The next parameter indicates that the network's input data flows into this layer, and 0 specifies the processing order of the layer.
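Putting the parameters together, the two layer definitions from the XOR example can be annotated as follows. This is only a restatement of the calls shown above; the parameter meanings are taken from this article's description, and the actual NeuThink signatures may differ:

```fsharp
open NeuThink.Neuron

// assumed parameter order, per the text:
// (neurons, inputs, targetLayers, receivesNetworkInput, processingOrder)
let nn = new GeneralNetwork()

// hidden layer: 4 neurons, 2 inputs, feeds into layer 1,
// receives the network input, processed first (order 0)
nn.AddPerceptronLayer(4, 2, [|1|], true, 0)

// output layer: 1 neuron, 4 inputs, feeds nowhere (-1),
// does not receive the network input, processed second (order 1)
nn.AddPerceptronLayer(1, 4, [|-1|], false, 1)

// required after all layers are added
nn.FinalizeNet()
```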

After adding all the necessary layers, you need to call the FinalizeNet() method; it performs some service calculations.

The input data is organized as an array of float arrays; each inner array corresponds to one training example. The output is organized the same way. To train a network from an array, you need to create a data source for the neural network: a class that implements the IInputProvider interface (for inputs) or IOutputProvider (for outputs). This is done so that data can be generated dynamically or read from disk when it does not fit into memory. In the simple case there is no need for such complications, and we use the built-in SimpleProvider class, initializing it with data from an array.
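For in-memory data, wrapping the arrays in SimpleProvider looks like this. This sketch uses only the float[][] constructor shown in the XOR example above; anything beyond that usage is an assumption about the library:

```fsharp
open NeuThink.DataSources

// one training example per inner array: two inputs, one output each
let inputs  = [|[|1.0; 1.0|]; [|0.0; 0.0|]; [|1.0; 0.0|]; [|0.0; 1.0|]|]
let outputs = [|[|-1.0|]; [|-1.0|]; [|1.0|]; [|1.0|]|]

// wrap the arrays so the training function can consume them
let inputProvider  = new SimpleProvider(inputs)
let outputProvider = new SimpleProvider(outputs)
```

For dynamically generated or disk-backed data, you would instead implement IInputProvider/IOutputProvider yourself.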

Next, the training function of the network itself is called, with an initial learning rate of 0.2 and 100 iterations. The trained network can then be used to predict results on new data.

It is worth noting that F# has an interactive console, and you can work with the library from it:
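A minimal F# Interactive session might look like the following. The `#r` directive is standard F# Interactive syntax for referencing a compiled assembly; the DLL path here is a placeholder, and the library calls are the ones from the XOR example above:

```fsharp
// in an F# Interactive session (fsi.exe, or `dotnet fsi` on modern .NET):
#r "NeuThink.dll"   // adjust the path to the compiled library

open NeuThink.Neuron

let nn = new GeneralNetwork()
nn.AddPerceptronLayer(4, 2, [|1|], true, 0)
nn.AddPerceptronLayer(1, 4, [|-1|], false, 1)
nn.FinalizeNet()
// ... train as above, then query the network interactively:
nn.SetInput([|1.0; 1.0|])
nn.Compute().[0]
```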



That is all for now, in brief. If you have any questions or suggestions, please get in touch.

Source: https://habr.com/ru/post/276115/

