We present the next installment of the Tekhnosfera lectures. The course covers the use of neural-network algorithms across industries, with every method practiced on real problems. You will become familiar with both classical and recently proposed but already proven neural-network algorithms. Since the course is practice-oriented, you will gain experience implementing image classifiers, style-transfer systems, and image generation with GANs. You will learn to implement neural networks both from scratch and with the PyTorch library, build your own chatbot, teach a neural network to play a computer game, and generate human faces. You will also gain experience reading scientific papers and doing independent research.
List of lectures:
Neural networks. The basic building blocks of fully connected neural networks. The backpropagation algorithm.
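The basic block of a fully connected network and its backward pass can be written out by hand. Below is a minimal illustrative sketch (not course code; all names are ours) of one fully connected sigmoid layer with a manually derived backward pass, checked against a numerical gradient:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W, b):
    # Affine transform followed by a sigmoid nonlinearity.
    return sigmoid(x @ W + b)

def backward(x, W, b, grad_out):
    # Chain rule: d(sigmoid)/dz = s * (1 - s), then distribute to W, b, x.
    s = forward(x, W, b)
    dz = grad_out * s * (1.0 - s)
    dW = x.T @ dz
    db = dz.sum(axis=0)
    dx = dz @ W.T
    return dW, db, dx

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
b = rng.normal(size=(2,))

# Loss = sum of layer outputs, so the incoming gradient is all ones.
dW, db, dx = backward(x, W, b, np.ones((4, 2)))

# Finite-difference check on a single weight.
eps = 1e-6
Wp = W.copy()
Wp[0, 0] += eps
num = (forward(x, Wp, b).sum() - forward(x, W, b).sum()) / eps
```

The numerical gradient `num` should agree with the analytic `dW[0, 0]` to several decimal places, which is the standard sanity check when deriving backward passes by hand.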
The backpropagation algorithm for branching structures. Problems in training neural networks. Data preprocessing, augmentation, regularization. Stochastic gradient descent. Data preparation with PyTorch.
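Stochastic gradient descent itself fits in a few lines. Here is a hedged sketch (toy data and hyperparameters are ours) of minibatch SGD fitting a one-dimensional linear regression:

```python
import numpy as np

rng = np.random.default_rng(42)
# Toy data: y = 3*x + 1 plus a little noise.
X = rng.uniform(-1, 1, size=(256, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.01 * rng.normal(size=256)

w, b = 0.0, 0.0
lr, batch = 0.1, 32
for epoch in range(100):
    idx = rng.permutation(len(X))        # reshuffle each epoch
    for start in range(0, len(X), batch):
        j = idx[start:start + batch]
        err = w * X[j, 0] + b - y[j]
        # Gradients of the minibatch mean squared error.
        w -= lr * 2.0 * np.mean(err * X[j, 0])
        b -= lr * 2.0 * np.mean(err)
```

After a few epochs `w` and `b` settle near the true values 3 and 1; the per-minibatch noise in the updates is exactly the "stochastic" part of SGD.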
Computation graphs in PyTorch. Operations on tensors. Automatic differentiation. Fully connected networks. Branching architectures. Network behavior during training and inference: the volatile and requires_grad flags. Saving and loading models.
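The idea behind PyTorch's dynamic computation graph and automatic differentiation can be shown with a toy scalar implementation. This is emphatically not PyTorch's API, just a minimal reverse-mode autodiff sketch with a hypothetical `Var` class:

```python
class Var:
    """A scalar node in a dynamically built computation graph."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (parent node, local gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, grad=1.0):
        # Reverse-mode sweep: accumulate and push gradients to parents.
        self.grad += grad
        for parent, local in self.parents:
            parent.backward(grad * local)

x = Var(2.0)
y = Var(3.0)
z = x * y + x          # z = x*y + x, so dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
```

Calling `backward()` on the output walks the recorded graph in reverse, exactly as `loss.backward()` does in PyTorch for tensors with `requires_grad=True`.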
Convolution. Pooling. Lightweight neural networks. Examples of applications of convolutional networks. Interpretation of trained models.
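Convolution and pooling are simple enough to implement naively. A minimal sketch (numpy, loops for clarity rather than speed) of a "valid" 2-D convolution and max pooling:

```python
import numpy as np

def conv2d(img, kernel):
    # Valid cross-correlation (what deep-learning libraries call convolution).
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(img, size=2):
    # Non-overlapping max pooling with a square window.
    h, w = img.shape[0] // size, img.shape[1] // size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = img[i * size:(i + 1) * size,
                            j * size:(j + 1) * size].max()
    return out

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])    # tiny horizontal edge detector
feat = conv2d(img, edge)          # constant -1: neighbors differ by 1
pooled = max_pool(img)            # 2x2 map of block maxima
```

On the 4×4 ramp image the edge kernel produces a constant feature map, and pooling keeps the maximum of each 2×2 block, halving each spatial dimension.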
Weight initialization: He, Xavier. Regularization: Dropout, DropConnect. Normalization: batch normalization.
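The initialization and dropout recipes reduce to one-liners. A hedged sketch (numpy; function names are ours) of Xavier and He initialization plus inverted dropout:

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_init(fan_in, fan_out):
    # Glorot/Xavier: variance 2 / (fan_in + fan_out).
    std = np.sqrt(2.0 / (fan_in + fan_out))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def he_init(fan_in, fan_out):
    # He: variance 2 / fan_in, suited to ReLU activations.
    std = np.sqrt(2.0 / fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out))

def dropout(x, p=0.5, training=True):
    # Inverted dropout: rescale at train time so inference needs no change.
    if not training:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

W = he_init(512, 256)
h = dropout(np.ones((4, 256)), p=0.5)   # entries are 0 or 2, mean ~1
```

The "inverted" scaling keeps the expected activation unchanged, which is why the same forward code works at inference with `training=False`.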
Modern convolutional network architectures: Inception and ResNet. Transfer learning. Using neural networks for segmentation and localization.
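ResNet's central trick, the identity shortcut, is easy to sketch in isolation. A toy fully connected residual block (numpy; the convolutional version differs only in the transform):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    # Learn a residual F(x), then add the input back via the shortcut.
    out = relu(x @ W1)
    out = out @ W2
    return relu(out + x)     # identity shortcut

rng = np.random.default_rng(0)
dim = 8
x = rng.normal(size=(2, dim))
W1 = rng.normal(scale=0.1, size=(dim, dim))
W2 = rng.normal(scale=0.1, size=(dim, dim))

y = residual_block(x, W1, W2)
```

With small weights the block stays close to the identity mapping, which is exactly what makes very deep stacks of such blocks trainable.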
The optimization problem. SGD, Momentum, NAG, Adagrad, Adadelta, Adam.
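Two of the listed update rules, classical momentum and Adam, written out explicitly on a toy quadratic (hyperparameters are illustrative, not prescriptive):

```python
import numpy as np

def momentum_step(w, grad, v, lr=0.1, mu=0.9):
    # Classical momentum: accumulate a velocity, then move along it.
    v = mu * v - lr * grad
    return w + v, v

def adam_step(w, grad, m, s, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: momentum on the gradient plus a per-parameter scale estimate.
    m = b1 * m + (1 - b1) * grad
    s = b2 * s + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)       # bias correction for the zero init
    s_hat = s / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s

# Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3).
w_m, v = 0.0, 0.0
w_a, m, s = 0.0, 0.0, 0.0
for t in range(1, 201):
    w_m, v = momentum_step(w_m, 2 * (w_m - 3), v)
    w_a, m, s = adam_step(w_a, 2 * (w_a - 3), m, s, t)
```

Both trajectories head toward the minimum at 3; momentum overshoots and spirals in, while Adam's per-parameter scaling makes early steps roughly `lr`-sized regardless of gradient magnitude.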
The dimensionality-reduction problem. MDS, Isomap. Principal Component Analysis (PCA). Derivation of the principal components via the method of Lagrange multipliers. Autoencoders. Denoising and sparse autoencoders.
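PCA amounts to an eigendecomposition of the data covariance. A minimal sketch (numpy; the toy data is ours) recovering the dominant direction of an anisotropic cloud:

```python
import numpy as np

def pca(X, k):
    # Center the data, then take the top-k eigenvectors of the covariance.
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]
    return eigvecs[:, order], eigvals[order]

rng = np.random.default_rng(0)
# 2-D Gaussian cloud stretched along the x-axis (std 5 vs 0.5):
# the first principal component should be approximately (±1, 0).
X = rng.normal(size=(500, 2)) * np.array([5.0, 0.5])
components, variances = pca(X, 1)
```

The top eigenvalue estimates the variance along the principal direction (about 25 here), matching the Lagrange-multiplier derivation: principal components are the covariance eigenvectors, ordered by eigenvalue.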
Recurrent networks. Backpropagation through time. LSTM networks. GRU networks. Multilayer recurrent architectures. Modifications of dropout and batch normalization for recurrent networks.
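A single LSTM time step is just a handful of gated updates. A from-scratch forward-pass sketch (numpy; the weight layout with all four gates stacked is one common convention, not the only one):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b, hidden):
    # One LSTM step: input/forget/output gates plus a candidate cell state.
    z = x @ W + h @ U + b                    # all gate pre-activations at once
    i = sigmoid(z[:, 0 * hidden:1 * hidden])   # input gate
    f = sigmoid(z[:, 1 * hidden:2 * hidden])   # forget gate
    o = sigmoid(z[:, 2 * hidden:3 * hidden])   # output gate
    g = np.tanh(z[:, 3 * hidden:4 * hidden])   # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, hidden, batch = 3, 5, 2
W = rng.normal(scale=0.1, size=(n_in, 4 * hidden))
U = rng.normal(scale=0.1, size=(hidden, 4 * hidden))
b = np.zeros(4 * hidden)

h = np.zeros((batch, hidden))
c = np.zeros((batch, hidden))
for t in range(4):                           # unroll over a short sequence
    x = rng.normal(size=(batch, n_in))
    h, c = lstm_step(x, h, c, W, U, b, hidden)
```

Unrolling this loop over time and backpropagating through it is exactly backpropagation through time; the additive update of `c` is what lets gradients flow across many steps.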
Example tasks. Learning representations: Word2Vec. Speeding up the linear + softmax pair: hierarchical softmax, differentiated softmax. Sentence generation. The Seq2Seq model. Beam search for finding the best answer. Techniques for increasing the diversity of responses.
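Beam search is worth seeing in code. A toy sketch (pure Python; the "model" and its probabilities are invented for illustration) showing why keeping several hypotheses beats greedy decoding:

```python
import math

def next_probs(prefix):
    # Toy conditional model (hypothetical numbers): the locally worse
    # first token "b" leads to a much better continuation.
    if not prefix:
        return {"a": 0.6, "b": 0.4}
    if prefix[-1] == "a":
        return {"x": 0.5, "y": 0.5}
    return {"x": 0.95, "y": 0.05}

def beam_search(model, length, beam_width=2):
    beams = [((), 0.0)]                      # (sequence, log-probability)
    for _ in range(length):
        candidates = []
        for seq, score in beams:
            for token, p in model(seq).items():
                candidates.append((seq + (token,), score + math.log(p)))
        # Prune: keep only the top `beam_width` partial hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return list(beams[0][0])

best = beam_search(next_probs, length=2)
```

Greedy decoding commits to "a" (p = 0.6) and ends with total probability 0.3; the beam keeps "b" alive and finds the sequence ("b", "x") with probability 0.4 × 0.95 = 0.38.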
Generative and discriminative models. Nash equilibrium. Generative adversarial networks (GAN). Adversarial autoencoders (AAE). Domain adaptation; domain adaptation for transferring images between domains. Wasserstein GAN.
The variational autoencoder (VAE) model. Interpretation of trained models: Deep Dream. Style transfer: artistic style. Speeding up style transfer.
Basic concepts of reinforcement learning: agent, environment, policy, reward. Value function and Q-function. Bellman equations. The policy iteration algorithm.
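Policy iteration fits in a screenful on a tiny MDP. A sketch (the 4-state chain environment is ours) alternating Bellman policy evaluation with greedy improvement:

```python
# A 4-state chain: moving right from state 2 into the terminal state 3
# yields reward 1; all other transitions yield 0. Discount factor 0.9.
n_states, gamma, terminal = 4, 0.9, 3
actions = (-1, +1)                       # move left / move right

def step(s, a):
    if s == terminal:
        return s, 0.0
    s2 = max(0, min(n_states - 1, s + a))
    return s2, 1.0 if s2 == terminal else 0.0

policy = [-1] * n_states                 # start from "always go left"
V = [0.0] * n_states

for _ in range(10):                      # outer policy-iteration loop
    # Policy evaluation: iterate the Bellman expectation equation.
    for _ in range(100):
        for s in range(n_states):
            s2, r = step(s, policy[s])
            V[s] = 0.0 if s == terminal else r + gamma * V[s2]
    # Policy improvement: act greedily with respect to the evaluated V.
    for s in range(n_states):
        policy[s] = max(actions,
                        key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
```

The policy converges to "always go right", and the values become the discounted distances to the reward: V = [0.81, 0.9, 1.0, 0], i.e. gamma squared, gamma, and 1.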
The Q-learning algorithm. Model-based approaches. The DQN algorithm. AlphaGo.
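Unlike policy iteration, Q-learning needs no model of the environment, only sampled transitions. A tabular sketch on the same kind of toy chain (environment and hyperparameters are ours; DQN replaces the table with a neural network):

```python
import random

random.seed(0)
n_states, terminal = 4, 3
gamma, alpha, eps = 0.9, 0.5, 0.2
Q = [[0.0, 0.0] for _ in range(n_states)]    # Q[s][a]; a: 0 = left, 1 = right

def step(s, a):
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, 1.0 if s2 == terminal else 0.0

for _ in range(300):                          # episodes
    s = 0
    while s != terminal:
        # Epsilon-greedy behaviour policy: mostly exploit, sometimes explore.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2, r = step(s, a)
        # Off-policy update: bootstrap from the best next action.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
```

The learned Q-values for "go right" approach 1, 0.9, and 0.81 along the chain, matching the Bellman optimality equation even though the agent never sees the transition model.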
The playlist of all lectures is available at the link. Recall that current lectures and master classes on programming from our IT specialists in the Technopark, Technosphere, and Tekhnotrek projects continue to be published on the Tekhnostrim channel.
Other Technosphere courses on Habré:
Information on all our educational projects can be found in a recent article.
Source: https://habr.com/ru/post/344982/