
Neural Networks: Lecture 1

Hello, community.

I have started a course on neural networks at university and want to share my notes with you. Writing them up will also help me absorb the material better, so everyone wins. Let's go.

Literature


1. Ben Kröse, Patrick van der Smagt, An Introduction to Neural Networks.
2. R. Callan, The Essence of Neural Networks.
3. Simon Haykin, Neural Networks: A Comprehensive Foundation.
4. Gupta, Jin, Homma, Static and Dynamic Neural Networks.

The first two books are good for an introduction. The first is the best, but it is in English; the second is presented a little less well, but it is available in Russian.

Biological foundations of neural networks

// this topic is left for independent study.
The theory of artificial neural networks appeared in the 1950s as an attempt to model the central nervous system (CNS) of higher mammals.

It turned out that artificial neural network models (hereafter ANN, or NN) are far too simple: modern neurophysiological models are an order of magnitude more complex. At the same time, ANN theory proved to be an excellent tool for solving purely mathematical problems, especially for classes of problems that are difficult to formalize.

By non-formalizable problems we will mean problems for which no formal statement can be given.
Such problems include, for example:


By difficult-to-formalize problems we will mean problems for which a formal statement exists, but a deterministic algorithm for finding an exact solution is either unknown or too expensive in terms of resources.

The higher the dimension of the problem, the better neural networks work and the worse classical mathematics does.

Concepts


1) The CNS of higher mammals consists of cells, neurons, connected to one another and to all sorts of receptor cells (visual, auditory, etc.). The total number of such cells is about 10^10.
The connections between neurons are made through contacts called synapses.

The strength of the connection between two neurons is proportional to the concentration of a substance called the neurotransmitter.

The activity of the brain of higher mammals consists in its neurons being in an excited state.

A neuron becomes active when the number of active neurons connected to it by synapses exceeds a certain threshold.
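The activation rule above can be sketched in a few lines. This is a minimal illustration, not from the lecture; the function name and threshold value are illustrative.

```python
def is_active(neighbor_states, threshold=2):
    """Return True if the number of active synaptic neighbors
    (encoded as 1s in neighbor_states) exceeds the threshold."""
    return sum(neighbor_states) > threshold

print(is_active([1, 0, 1, 1]))  # 3 active neighbors > 2 -> True
print(is_active([1, 0, 0, 0]))  # 1 active neighbor, not > 2 -> False
```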


2) The concept of learning.
In the process of learning, sharp external stimuli significantly correct the picture of the world recorded in the head.
Thus, to implement learning, we must have a set of vectors characterizing the picture of the world together with the possible reactions to it.

Such a set of vectors is called a training set.

The more adequate the picture of the world recorded in the neurons of the brain, the less the true response of the environment will differ from the predicted response.

From a neurophysiological point of view, learning can occur in two ways: the dying off of connections and a change in the strength of connections.
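The idea of a training set and of measuring how far predictions are from the true responses can be sketched as follows. The vectors, the toy predictor, and the error measure here are illustrative assumptions, not part of the lecture.

```python
# A training set: pairs (world-state vector, correct reaction).
training_set = [
    ([0.0, 1.0], 1.0),
    ([1.0, 0.0], 0.0),
    ([1.0, 1.0], 1.0),
]

def prediction_error(predict, samples):
    """Mean absolute difference between predicted and true responses."""
    return sum(abs(predict(x) - y) for x, y in samples) / len(samples)

# A crude predictor: respond with the second input component.
print(prediction_error(lambda x: x[1], training_set))  # 0.0 on this toy set
```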

The concept of an artificial neuron


A neuron is a mathematical object that has n inputs and 1 output.
The inputs carry numerical information, as does the output.

The input information is denoted Xi.
The inputs connect an artificial neuron (AN) with other neurons or with an external source of information and are called connections.

Each connection is associated with a number called a weight, denoted Wi (i is the number of the connection).


The information arriving at the inputs is processed by the neuron in 2 stages:
  1. At the first stage, the weighted sum is computed: S = W1X1 + W2X2 + ... + WnXn + O.
    O is the threshold.

    Usually, to unify the notation, the threshold is treated as the weight of a dummy connection, that is, a connection that always transmits 1.
    Then we can write:
    S = W0X0 + W1X1 + W2X2 + ... + WnXn, where X0 = 1, W0 = O.
    (the classic neuron)

    In a generalized artificial neuron, probabilistic quantities appear: for example, the threshold O may be a random variable, e.g. distributed as N(0, σ).

    An algebraic neuron is a neuron in which the sum S is non-linear.

  2. The output of the neuron, Y, is defined as some function of the weighted sum:
    y = f(s);
    The function f is called the activation function.

    Artificial neurons are in fact classified by their activation functions.
    Examples:
    • Step function
    • K-step
    • Sigmoid

      Most often it looks like: f(x) = 1 / (1 + exp(-x)).


    The move from threshold functions to sigmoid ones is motivated by the fact that with a step function, S = -eps gives an output of 0 while S = +eps gives 1.
    That is, an arbitrarily small difference in the input produces a drastic change in the output, and this is not good.

    All activation functions are divided into 2 groups:
    - symmetrical
    - asymmetrical
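The two-stage processing above (weighted sum with the threshold folded in as W0, then an activation function) can be sketched as follows. This is a minimal illustration; the specific weight values are assumptions for the example.

```python
import math

def weighted_sum(weights, inputs):
    """Stage 1: S = W0*X0 + W1*X1 + ... + Wn*Xn, with a constant
    input X0 = 1 prepended so that weights[0] plays the role of
    the threshold O."""
    xs = [1.0] + list(inputs)
    return sum(w * x for w, x in zip(weights, xs))

def step(s):
    """Step activation: jumps from 0 to 1 at S = 0."""
    return 1.0 if s >= 0 else 0.0

def sigmoid(s):
    """Sigmoid activation f(x) = 1 / (1 + exp(-x)): smooth near S = 0."""
    return 1.0 / (1.0 + math.exp(-s))

w = [-0.5, 1.0, 1.0]  # illustrative: threshold O = -0.5, then W1, W2
print(weighted_sum(w, [1.0, 0.0]))  # S = -0.5 + 1.0 = 0.5

eps = 0.01
# A tiny change in S flips the step output completely...
print(step(-eps), step(eps))        # 0.0 1.0
# ...while the sigmoid output barely moves:
print(sigmoid(-eps), sigmoid(eps))  # ~0.4975 ~0.5025
```

The last two lines show concretely why the lecture prefers the sigmoid: near S = 0 the step function is maximally sensitive to noise, while the sigmoid changes gradually.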



That's it, the first lecture is over.
I hope it was interesting.

See you again!

Source: https://habr.com/ru/post/39341/

