
Technosphere Lectures: Neural Networks in Machine Learning


We present the next batch of Technosphere lectures. The course covers the use of neural network algorithms across various industries and puts every method studied into practice on real problems. You will get acquainted both with classical neural network algorithms and with recently proposed ones that have already proven themselves. Since the course is practice-oriented, you will gain experience implementing image classifiers, style transfer systems, and image generation with GANs. You will learn to implement neural networks both from scratch and on top of the PyTorch library, build your own chatbot, teach a neural network to play a computer game, and generate human faces. You will also gain experience reading scientific papers and doing independent research.


List of lectures:


  1. Fundamentals of neural networks.
  2. Details of training neural networks.
  3. Libraries for deep learning.
  4. Convolutional neural networks.
  5. Improving the convergence of neural networks.
  6. Deep network architectures.
  7. Optimization methods.
  8. Neural networks for dimensionality reduction.
  9. Recurrent networks.
  10. Natural language processing.
  11. Generative adversarial networks (GAN).
  12. Variational autoencoders and artistic style.
  13. Reinforcement learning 1.
  14. Reinforcement learning 2.

Lecture 1. Fundamentals of Neural Networks



Neural networks. The basic building blocks of fully connected neural networks. The backpropagation algorithm.
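
As a minimal sketch of the lecture's two topics (toy data, hypothetical layer sizes, not taken from the course materials), here is a forward and backward pass through a small fully connected network with backpropagation written out by hand:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # batch of 4 examples, 3 features (toy data)
y = rng.normal(size=(4, 1))          # regression targets
W1, b1 = rng.normal(size=(3, 5)) * 0.1, np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)) * 0.1, np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Forward pass: each layer's output is the next layer's input.
h = sigmoid(X @ W1 + b1)
pred = h @ W2 + b2
loss = ((pred - y) ** 2).mean()

# Backward pass: apply the chain rule layer by layer, from the loss to the weights.
d_pred = 2 * (pred - y) / len(y)
dW2 = h.T @ d_pred
db2 = d_pred.sum(axis=0)
d_h = d_pred @ W2.T
d_z1 = d_h * h * (1 - h)             # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
dW1 = X.T @ d_z1
db1 = d_z1.sum(axis=0)

# One gradient-descent step should reduce the loss.
lr = 0.1
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
new_loss = (((sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean()
```

The same chain-rule bookkeeping is what autograd frameworks automate in later lectures.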


Lecture 2. Details of Training Neural Networks



Backpropagation through branching structures. Common problems when training neural networks. Data preprocessing, augmentation, and regularization. Stochastic gradient descent. Data preparation with PyTorch.
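
A minimal sketch of data preparation with PyTorch (synthetic data and a hypothetical `ToyDataset` name, not from the course): a custom `Dataset` wrapped in a shuffling `DataLoader` that feeds mini-batches to stochastic gradient descent:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyDataset(Dataset):
    """Synthetic dataset: 3 features, binary label (assumption for the sketch)."""

    def __init__(self, n=100):
        self.x = torch.randn(n, 3)
        self.y = (self.x.sum(dim=1) > 0).long()

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

# shuffle=True gives the "stochastic" part of stochastic gradient descent.
loader = DataLoader(ToyDataset(), batch_size=16, shuffle=True)
xb, yb = next(iter(loader))          # one mini-batch: xb is (16, 3), yb is (16,)
```

In practice the `__getitem__` method is also where per-sample augmentation transforms are applied.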


Lecture 3. Libraries for deep learning



Computation graphs in PyTorch. Operations on tensors. Automatic differentiation. Fully connected networks. Branching architectures. Network behavior during training and prediction: the volatile and requires_grad flags. Saving and loading a model.
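
A minimal sketch of automatic differentiation and model (de)serialization in current PyTorch (note: the `volatile` flag mentioned in the lecture belongs to older PyTorch versions; its modern replacement, `torch.no_grad()`, is used here):

```python
import io
import torch

# Autograd: build a graph, then backpropagate through it.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()          # y = x0^2 + x1^2
y.backward()                # dy/dx = 2 * x
grad = x.grad               # tensor([4., 6.])

# Prediction mode: disable graph construction entirely.
with torch.no_grad():
    z = (x ** 2).sum()      # z.requires_grad is False

# Saving and loading: the recommended route is through the state_dict.
model = torch.nn.Linear(2, 1)
buf = io.BytesIO()          # an in-memory file; normally a path like "model.pt"
torch.save(model.state_dict(), buf)
buf.seek(0)
model.load_state_dict(torch.load(buf))
```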


Lecture 4. Convolutional neural networks



Convolution. Pooling. Lightweight neural networks. Examples of applications of convolutional networks. Interpretation of trained models.
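
A minimal sketch of the two core operations (random input, hypothetical channel counts): a convolution layer followed by max pooling, with the resulting tensor shapes traced through:

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
pool = nn.MaxPool2d(kernel_size=2)

x = torch.randn(1, 3, 32, 32)        # one 32x32 RGB image
feat = conv(x)                       # padding=1 preserves spatial size: (1, 8, 32, 32)
out = pool(feat)                     # 2x2 pooling halves each side: (1, 8, 16, 16)
```

Stacking such conv/pool pairs is what shrinks the spatial dimensions while growing the channel dimension in a typical convolutional network.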


Lecture 5. Improving the convergence of neural networks



Weight initialization: He, Xavier. Regularization: Dropout, DropConnect. Normalization: batch normalization.
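
A minimal sketch combining the three convergence aids from the lecture in one small network (toy layer sizes, assumptions for the sketch): He initialization for the ReLU layer, Xavier for the output layer, plus Dropout and batch normalization:

```python
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.BatchNorm1d(32),       # normalizes activations across the batch
    nn.Dropout(p=0.5),        # randomly zeroes half the units during training
    nn.Linear(32, 2),
)

# He (Kaiming) init suits ReLU layers; Xavier init suits tanh/sigmoid layers.
nn.init.kaiming_normal_(net[0].weight, nonlinearity="relu")
nn.init.xavier_normal_(net[4].weight)

net.eval()                    # Dropout and BatchNorm behave differently at inference
out = net(torch.randn(4, 10))
```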


Lecture 6. Deep Network Architectures



Modern convolutional network architectures. The Inception and ResNet networks. Transfer learning. Using neural networks for segmentation and localization.
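
A minimal sketch of the transfer learning recipe, with a toy two-layer module standing in for a pretrained ResNet or Inception backbone (the module and sizes are assumptions for the sketch): freeze the feature extractor and train only a new task-specific head:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # stand-in for a pretrained net
for p in backbone.parameters():
    p.requires_grad = False                               # freeze pretrained weights

head = nn.Linear(64, 10)                                  # new head for the new task
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)   # only the head is optimized

x = torch.randn(4, 128)
logits = head(backbone(x))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))
loss.backward()

# No gradients reach the frozen backbone; only the head receives updates.
frozen_with_grads = [n for n, p in backbone.named_parameters() if p.grad is not None]
```

With a real backbone the same pattern applies, e.g. replacing the final `fc` layer of a torchvision ResNet.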


Lecture 7. Optimization methods



The optimization problem. SGD, Momentum, NAG, Adagrad, Adadelta, Adam.
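
A minimal sketch of two of the update rules from the lecture, applied to the toy one-dimensional objective f(w) = w² (gradient 2w) so their mechanics are visible (the objective and hyperparameters are assumptions for the sketch):

```python
import numpy as np

def momentum_sgd(w, steps=100, lr=0.1, mu=0.9):
    v = 0.0
    for _ in range(steps):
        g = 2 * w
        v = mu * v - lr * g        # velocity accumulates past gradients
        w = w + v
    return w

def adam(w, steps=200, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = 2 * w
        m = b1 * m + (1 - b1) * g          # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment (variance) estimate
        m_hat = m / (1 - b1 ** t)          # bias correction for the zero init
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

w_momentum = momentum_sgd(5.0)
w_adam = adam(5.0)
# both iterates approach the minimum at w = 0
```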


Lecture 8. Neural Networks for Dimensionality Reduction



The dimensionality reduction problem. MDS, Isomap. Principal component analysis (PCA). Derivation of the principal components via the method of Lagrange multipliers. Autoencoders. Denoising and sparse autoencoders.
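
A minimal sketch of PCA on synthetic data: the Lagrange-multiplier derivation mentioned in the lecture yields that the principal components are the eigenvectors of the covariance matrix with the largest eigenvalues, which is exactly what the code computes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])  # correlated toy data

Xc = X - X.mean(axis=0)                    # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
components = eigvecs[:, ::-1]              # reorder: largest variance first

Z = Xc @ components[:, :1]                 # project onto the first principal component
explained = eigvals[::-1] / eigvals.sum()  # fraction of variance per component
```

The variance of the projection `Z` equals the largest eigenvalue, confirming that the first component captures the direction of maximal variance.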


Lecture 9. Recurrent Networks



Recurrent networks. Backpropagation through time. LSTM networks. GRU networks. Multilayer recurrent architectures. Modifications of dropout and batch normalization for recurrent networks.
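
A minimal sketch of a multilayer LSTM in PyTorch (toy dimensions, assumptions for the sketch), showing the tensor shapes that backpropagation through time operates over:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)

x = torch.randn(4, 7, 10)                # batch of 4 sequences, 7 steps, 10 features
out, (h_n, c_n) = lstm(x)
# out: (4, 7, 20) -- the top layer's hidden state at every time step
# h_n: (2, 4, 20) -- final hidden state of each of the 2 layers
# c_n: (2, 4, 20) -- final cell state of each of the 2 layers
```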


Lecture 10. Natural language processing



Example tasks. Learning word representations: Word2Vec. Speeding up the linear + softmax pair: hierarchical softmax, differentiated softmax. Sentence generation. The Seq2Seq model. Beam search for finding the best answer. Techniques for increasing the diversity of responses.
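
A minimal sketch of beam search over a toy "language model" (the per-step token probabilities are hypothetical and, unlike a real model, do not depend on the prefix): at each step only the `beam_width` highest-scoring partial sequences are kept:

```python
import math

def beam_search(step_probs, beam_width=2):
    beams = [([], 0.0)]                       # (token sequence, log-probability)
    for probs in step_probs:                  # probs: {token: p} at this step
        candidates = [
            (tokens + [tok], score + math.log(p))
            for tokens, score in beams
            for tok, p in probs.items()
        ]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]       # prune to the best partial sequences
    return beams[0][0]                        # highest-scoring complete sequence

steps = [{"a": 0.6, "b": 0.4}, {"a": 0.1, "b": 0.9}]
best = beam_search(steps)
# best == ["a", "b"]: 0.6 * 0.9 is the highest product of probabilities
```

Note that greedy decoding would make the same first choice but can miss globally better sequences that beam search recovers.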


Lecture 11. Generative Adversarial Networks (GAN)



Generative and discriminative models. Nash equilibrium. Generative adversarial networks (GAN). Adversarial autoencoders (AAE). The domain adaptation technique. Domain adaptation for transferring images between domains. Wasserstein GAN.
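
A minimal sketch of one GAN training step on one-dimensional toy "data" (the tiny networks and data distribution are assumptions for the sketch): the discriminator learns to separate real samples from generated ones, and the generator learns to fool it:

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))   # noise -> sample
D = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))   # sample -> logit
bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)

real = torch.randn(16, 1) + 3.0          # "real" data centered at 3
noise = torch.randn(16, 2)

# Discriminator step: push real toward label 1, fakes toward 0.
# detach() stops gradients from flowing into the generator here.
d_loss = bce(D(real), torch.ones(16, 1)) + bce(D(G(noise).detach()), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: make the discriminator output 1 on fakes.
g_loss = bce(D(G(noise)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Alternating these two steps is the minimax game whose Nash equilibrium the lecture discusses.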


Lecture 12. Variational Autoencoders and Artistic Style



The variational autoencoder (VAE) model. Interpretation of trained models: Deep Dream. Style transfer: artistic style. Speeding up style transfer.
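
A minimal sketch of the Gram matrix used by the artistic-style method (random activations stand in for real convolutional features; the normalization constant is an assumption of the sketch): style is represented by correlations between the channels of a feature map:

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activations of one conv layer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)    # flatten each channel into a vector
    return f @ f.T / (c * h * w)      # (channels, channels) channel correlations

feat = np.random.default_rng(0).normal(size=(4, 8, 8))
G = gram_matrix(feat)
# G is symmetric: channel correlations do not depend on order
```

The style loss then compares the Gram matrices of the generated image and the style image across several layers.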


Lecture 13. Reinforcement Learning 1



The basic concepts of reinforcement learning: agent, environment, policy, reward. The value function and the Q-function. The Bellman equations. The policy iteration algorithm.
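
A minimal sketch of policy iteration on a toy three-state chain MDP (the environment and rewards are assumptions for the sketch): action 0 moves left, action 1 moves right, and any transition into state 2 yields reward 1:

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
next_state = {(s, 0): max(s - 1, 0) for s in range(n_states)}
next_state.update({(s, 1): min(s + 1, n_states - 1) for s in range(n_states)})
reward = {(s, a): 1.0 if next_state[(s, a)] == 2 else 0.0
          for s in range(n_states) for a in range(n_actions)}

policy = np.zeros(n_states, dtype=int)        # start with "always go left"
for _ in range(10):
    # Policy evaluation: iterate the Bellman backup V <- r + gamma * V(next)
    # until V approximates the value of the current policy.
    V = np.zeros(n_states)
    for _ in range(100):
        V = np.array([reward[(s, policy[s])] + gamma * V[next_state[(s, policy[s])]]
                      for s in range(n_states)])
    # Policy improvement: act greedily with respect to V.
    policy = np.array([max(range(n_actions),
                           key=lambda a: reward[(s, a)] + gamma * V[next_state[(s, a)]])
                       for s in range(n_states)])
# the policy converges to "always go right": [1, 1, 1]
```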


Lecture 14. Reinforcement Learning 2



The Q-learning algorithm. Model-based approaches. The DQN algorithm. AlphaGo.
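
A minimal sketch of tabular Q-learning on a toy five-state chain (the environment and hyperparameters are assumptions for the sketch): actions 0/1 move left/right, and reaching the last state gives reward 1 and ends the episode. Because Q-learning is off-policy, even a purely random behavior policy suffices to learn the optimal Q-function:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

for _ in range(1000):                          # episodes
    s = 0
    for _ in range(20):                        # steps per episode
        a = int(rng.integers(n_actions))       # random exploration (off-policy)
        s2 = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if r == 1.0:                           # goal reached, episode over
            break

greedy = Q.argmax(axis=1)                      # greedy policy from the learned Q
# every non-terminal state should prefer "right" on the way to the goal
```

DQN replaces the table `Q` with a neural network and this exact update with a gradient step on the same bootstrapped target.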


The playlist with all the lectures is available at the link. As a reminder, up-to-date lectures and programming master classes from our IT specialists in the Technopark, Technosphere, and Tehnotrek projects continue to be published on the Tekhnostrim channel.


Other courses of Technosphere on Habré:



Information about all our educational projects can be found in a recent article.



Source: https://habr.com/ru/post/344982/

