
Samsung opens a free online course on neural networks for computer vision

Still not sure why ReLU beats the sigmoid, how Rprop differs from RMSprop, why signals should be normalized, and what a skip connection is? Why a neural network needs a computation graph, and what error it is that propagates backward? Do you have a computer vision project, or are you perhaps building an intergalactic robot to fight dirty dishes, one that has to decide whether a plate needs scrubbing or can simply soak?



We are launching the open course “Neural networks and computer vision”, aimed at those taking their first steps in this area.

This course opened on June 1 and is the first in a series of free online courses from Samsung on the Stepik platform. The Russian platform was chosen to give the Russian-speaking audience more opportunities. The courses will be devoted primarily to Machine Learning (ML), and not by accident: in May 2018 the Samsung Artificial Intelligence Center opened in Moscow, where such ML stars work as Viktor Lempitsky (the most cited Russian scientist in the Computer Science category), Dmitry Vetrov, Anton Konushin, and many others.


So, over 6 weeks of video lectures and practical assignments, at 3-5 hours a week, you will work out how to solve basic computer vision problems and acquire the theoretical grounding needed for further independent study of the field.



The course offers two modes: basic and advanced. In basic mode, it is enough to watch the lectures, answer the in-lecture quiz questions, and complete the seminar assignments. In advanced mode, you will also solve theoretical problems that require a fairly broad command of the mathematics taught in the first two years of a technical university.



The course systematically lays out the terminology and principles of building neural networks and covers typical tasks, optimization methods, loss functions, and the basic neural network architectures. The training culminates in solving an illustrative applied computer vision problem.



Course instructors



Mikhail Romanov



A graduate of MIPT and of the Yandex School of Data Analysis; received his PhD from the Technical University of Denmark.



Works at the Samsung AI Center in Moscow. Mikhail works on computer vision problems for robots and loves to teach; he has plenty of ideas and topics for future courses. One AI Bootcamp 2018 graduate, asked in the exit questionnaire to rate Mikhail as a teacher on a 5-point scale, wrote: “it’s a pity that there is no grade six!”





Igor Slinko



A graduate of MIPT and of the Yandex School of Data Analysis; works at the Samsung AI Center in Moscow. Igor likewise works on machine vision problems for robots, and he teaches Machine Learning at the Higher School of Economics. Both last year and this year he volunteered as a lecturer at the Deep Learning workshop of the Summer School social and educational project.





Course program



Neural network:



  1. Mathematical model of a neuron (see the sketch after this list)
  2. Boolean operations as neurons
  3. From a neuron to a neural network
  4. Seminar: Basics of working with PyTorch
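
To give a taste of the first module, here is a minimal sketch of the mathematical model of a neuron: a weighted sum of inputs plus a bias, passed through an activation. With hand-picked weights (our illustration, not course material) the same neuron computes the Boolean AND:

```python
import torch

def neuron(x, w, b):
    # Classic neuron model: weighted sum of inputs plus bias,
    # passed through a step activation (1 if positive, else 0).
    return (x @ w + b > 0).float()

# Hand-picked weights that make the neuron compute Boolean AND:
# it fires only when both inputs are 1.
w = torch.tensor([1.0, 1.0])
b = torch.tensor(-1.5)

inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
print(neuron(inputs, w, b))  # tensor([0., 0., 0., 1.])
```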


Building the first neural network:



  1. Recovering a dependency with a neural network (see the sketch below)
  2. Components of a neural network
  3. Theoretical problems: Dependency recovery
  4. The neural network training algorithm
  5. Theoretical problems: Computation graphs and backprop
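
As a hedged illustration of where this module ends up — a small network recovering a dependency, trained by backpropagating the error through the computation graph — here is a toy PyTorch sketch; the architecture and hyperparameters are our choices, not the course’s:

```python
import torch
import torch.nn as nn

# A tiny fully connected network recovering the dependency y = sin(x)
# from noisy samples.
torch.manual_seed(0)
x = torch.linspace(-3, 3, 200).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(2000):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # forward pass builds the computation graph
    loss.backward()                # backprop: the error flows back along the graph
    optimizer.step()               # update the weights

print(f"final MSE: {loss.item():.4f}")
```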


Problems solved using neural networks:



  1. Binary classification? Binary cross-entropy!
  2. Multi-class classification? Softmax!
  3. Localization, detection, super-resolution
  4. Theoretical problems: Loss functions (see the sketch below)
  5. Seminar: Building the first neural network
  6. Seminar: Classification in PyTorch
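
A short sketch of the pairing this module stresses (the numbers are made up for illustration): PyTorch’s BCEWithLogitsLoss for binary classification, and CrossEntropyLoss, which applies softmax internally, for the multi-class case:

```python
import torch
import torch.nn as nn

# Binary classification -> binary cross-entropy on raw scores (logits).
logits = torch.tensor([2.0, -1.0, 0.5])
targets = torch.tensor([1.0, 0.0, 1.0])
bce = nn.BCEWithLogitsLoss()(logits, targets)

# Multi-class classification -> softmax + cross-entropy.
# CrossEntropyLoss applies log-softmax internally, so it takes raw logits.
class_logits = torch.tensor([[1.5, 0.2, -0.3],
                             [0.1, 2.0, 0.4]])
class_targets = torch.tensor([0, 1])  # indices of the correct classes
ce = nn.CrossEntropyLoss()(class_logits, class_targets)

print(bce.item(), ce.item())
```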


Optimization methods:



  1. Plain gradient descent
  2. Modifications of gradient descent
  3. Theoretical problems: Understanding SGD with momentum (see the sketch below)
  4. Seminar: Implementing gradient descent with PyTorch tools
  5. Seminar: Classifying handwritten digits with a fully connected network
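
For the momentum topic, a minimal hand-rolled sketch on a one-dimensional quadratic (a toy of ours, not a course assignment): the velocity accumulates past gradients and the weight moves along it, which matches the update torch.optim.SGD performs when given momentum=0.9:

```python
import torch

# Hand-rolled SGD with momentum on the quadratic f(w) = (w - 3)^2.
w = torch.tensor(0.0, requires_grad=True)
velocity = torch.zeros_like(w)
lr, momentum = 0.1, 0.9

for _ in range(200):
    loss = (w - 3.0) ** 2
    loss.backward()
    with torch.no_grad():
        velocity = momentum * velocity + w.grad   # accumulate gradient history
        w -= lr * velocity                        # step along the velocity
        w.grad.zero_()

print(w.item())  # ~3.0, the minimum of the quadratic
```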


Convolutional networks:



  1. A cascade of convolutions
  2. Architecture history: LeNet (1998)
  3. Architecture history: AlexNet (2012) and VGG (2014)
  4. Architecture history: GoogLeNet and ResNet (2015)
  5. Seminar: Recognizing handwritten digits with a convolutional neural network (see the sketch below)
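
In the spirit of this module’s seminar, a LeNet-flavoured sketch for 28x28 grayscale digits; the exact layer sizes here are illustrative assumptions, not the course’s reference solution:

```python
import torch
import torch.nn as nn

# A small LeNet-style convolutional network for 28x28 grayscale digits.
class SmallConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 28x28 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 14x14
            nn.Conv2d(6, 16, kernel_size=5),            # -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                            # -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallConvNet()
dummy = torch.randn(1, 1, 28, 28)  # a fake MNIST-sized image
print(model(dummy).shape)          # torch.Size([1, 10])
```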


Regularization, normalization, maximum likelihood method:



  1. Regularization and neural networks
  2. Data normalization
  3. Seminar: Solving a classification problem on the CIFAR dataset
  4. The maximum likelihood method
  5. Seminar: Transfer learning using a Kaggle competition as an example (see the sketch below)
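
And a sketch of the transfer-learning recipe behind the final seminar, assuming a recent torchvision (older releases spell the pretrained flag differently); the two-class head and the hyperparameters are illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: reuse an ImageNet-pretrained backbone, train only a new head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False               # freeze the pretrained features

num_classes = 2                               # e.g. a binary Kaggle task (illustrative)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # fresh, trainable classifier

# weight_decay is L2 regularization -- one of this module's topics.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3, weight_decay=1e-4)
```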


Requirements for students



The course is designed for those taking their first steps in machine learning. What is required of you?



  1. Basic knowledge of mathematical statistics.
  2. Readiness to program in Python.
  3. If you want to take the course at the advanced level, good knowledge of mathematical analysis, linear algebra, probability theory, and statistics.


Challenge accepted? Then proceed to the course!

Source: https://habr.com/ru/post/454904/


