Still unsure why ReLU works better than a sigmoid, how Rprop differs from RMSprop, why signals should be normalized, and what a skip connection is? Why does a neural network need a computation graph, and what exactly is the error that propagates backward through it? Do you have a computer vision project in mind, or maybe you are building an intergalactic robot to fight dirty plates and want it to decide on its own whether to wash them or walk away?
We are launching the open course “Neural networks and computer vision”, aimed at those taking their first steps in this area. Course strengths:
- the authors of the course know what they are talking about: they are engineers at the Samsung Artificial Intelligence Center, Mikhail Romanov and Igor Slinko;
- the theory comes with problems to puzzle over, and the practice is in PyTorch;
- you proceed to practice right after mastering the minimum of theory;
- the best students will be invited for an interview at Samsung Research Russia!
The course opened on June 1 and is the first in a series of free online courses from Samsung on the Stepik platform. The Russian platform was chosen to give more opportunities to the Russian-speaking audience. The courses will primarily be devoted to Machine Learning (ML). This choice is no accident: in May 2018, the Samsung Artificial Intelligence Center opened in Moscow, where such ML stars work as Viktor Lempitsky (the most cited Russian scientist in the Computer Science category), Dmitry Vetrov, Anton Konushin, and many others.
So, in 6 weeks of video lectures and practical tasks, spending 3-5 hours a week, you will figure out how to solve basic computer vision tasks and acquire the theoretical grounding needed for further independent study of the field.
The course can be taken in two modes: basic and advanced. In the basic mode, it is enough to watch the lectures, answer the in-lecture questions, and complete the seminars. In the advanced mode, you will also solve theoretical problems that require fairly solid mathematics at the level of the first two years of a technical university.
The course systematically introduces the terminology and principles of building neural networks, then covers typical tasks, optimization methods, loss functions, and the basic neural network architectures. The training ends with the solution of an applied computer vision problem.
Course instructors
Mikhail Romanov
A graduate of MIPT and the Yandex School of Data Analysis. Received a PhD from the Technical University of Denmark.
Works at the Samsung AI Center in Moscow. Mikhail works on computer vision tasks for robots and loves to teach; he has many ideas and topics for further courses. One of the AI Bootcamp 2018 graduates, asked in the exit questionnaire to rate Mikhail as a teacher on a 5-point scale, wrote: “it’s a pity there is no grade six!”
Igor Slinko
A graduate of MIPT and the Yandex School of Data Analysis. Works at the Samsung AI Center in Moscow. Igor also works on machine vision tasks for robots and teaches Machine Learning at the Higher School of Economics. Last year and this year he volunteered as a lecturer at the Deep Learning workshop of the Summer School social and educational project.
Course program
Neural network:
- The mathematical model of a neuron
- Boolean operations as neurons
- From a neuron to a neural network
- Seminar: Basics of working in PyTorch (see the sketch after this list)
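To give a taste of what the PyTorch practice looks like, here is a minimal sketch of the mathematical model of a neuron (an illustration of ours, not a course assignment): a weighted sum of inputs plus a bias, passed through an activation function.

```python
import torch

# A single neuron: weighted sum of inputs plus bias, then an activation.
# The values here are made up for illustration.
x = torch.tensor([1.0, 2.0, 3.0])    # inputs
w = torch.tensor([0.5, -0.3, 0.8])   # weights
b = torch.tensor(0.1)                # bias

z = torch.dot(w, x) + b              # pre-activation
y = torch.sigmoid(z)                 # neuron output in (0, 1)
print(y)
```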
We build the first neural network:
- Recovering a dependency with a neural network
- Components of a neural network
- Theoretical problems: Dependency recovery
- The neural network training algorithm
- Theoretical problems: Computation graphs and backprop (see the sketch after this list)
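What do a computation graph and backprop look like in practice? A minimal PyTorch sketch (our illustration, not course material): the framework records the graph during the forward pass, and `backward()` sends the error back along it.

```python
import torch

# PyTorch builds the computation graph as you compute;
# backward() propagates the error (gradient) back along that graph.
w = torch.tensor(2.0, requires_grad=True)
x = torch.tensor(3.0)
y_true = torch.tensor(10.0)

y_pred = w * x                  # forward pass builds the graph
loss = (y_pred - y_true) ** 2   # squared error
loss.backward()                 # backprop: dloss/dw

print(w.grad)                   # 2 * (w*x - y_true) * x = 2 * (6 - 10) * 3 = -24
```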
Problems solved using neural networks:
- Binary classification? Binary cross-entropy!
- Multi-class classification? Softmax!
- Localization, detection, super-resolution
- Theoretical problems: Loss functions (see the sketch after this list)
- Seminar: Building the first neural network
- Seminar: Classification in PyTorch
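For the curious, here is roughly how the two loss functions mentioned above look in PyTorch (an illustrative sketch with made-up numbers; the course has its own assignments):

```python
import torch
import torch.nn.functional as F

# Binary classification: one logit per example, binary cross-entropy loss.
logits = torch.tensor([0.8, -1.2])           # raw network outputs
targets = torch.tensor([1.0, 0.0])           # ground-truth labels
bce = F.binary_cross_entropy_with_logits(logits, targets)

# Multi-class classification: softmax turns logits into probabilities;
# cross_entropy combines log-softmax and negative log-likelihood.
logits3 = torch.tensor([[2.0, 0.5, -1.0]])   # scores for 3 classes
target3 = torch.tensor([0])                  # correct class index
ce = F.cross_entropy(logits3, target3)

print(bce.item(), ce.item())
```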
Optimization methods:
- Plain gradient descent
- Modifications of gradient descent
- Theoretical problems: Understanding SGD with momentum
- Seminar: Implementing gradient descent with PyTorch tools (see the sketch after this list)
- Seminar: Handwritten digit classification with a fully connected network
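And here is what plain gradient descent might look like when implemented by hand with PyTorch tools (our sketch with an invented toy dependency y = 2x, not the course assignment):

```python
import torch

# Plain gradient descent on a one-parameter model y = w * x,
# implemented manually with PyTorch autograd.
x = torch.tensor([1.0, 2.0, 3.0])
y = torch.tensor([2.0, 4.0, 6.0])    # true dependency: y = 2x
w = torch.tensor(0.0, requires_grad=True)
lr = 0.05                            # learning rate

for step in range(100):
    loss = ((w * x - y) ** 2).mean() # mean squared error
    loss.backward()                  # compute dloss/dw
    with torch.no_grad():
        w -= lr * w.grad             # gradient descent step
    w.grad.zero_()                   # reset gradient for the next step

print(w.item())                      # converges to ~2.0
```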
Convolutional networks:
- A cascade of convolutions
- Architectural History: LeNet (1998)
- Architectural History: AlexNet (2012) and VGG (2014)
- Architectural History: GoogLeNet and ResNet (2015)
- Seminar: Handwritten digit recognition with a convolutional neural network (see the sketch after this list)
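As a flavor of the architecture material, a tiny LeNet-style convolutional network for 28x28 digit images (an illustrative sketch; the layer sizes are our own choice, not the course's):

```python
import torch
import torch.nn as nn

# A tiny LeNet-style convnet: a cascade of convolutions,
# then a fully connected classifier for 10 digit classes.
model = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 1x28x28 -> 6x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                            # -> 6x14x14
    nn.Conv2d(6, 16, kernel_size=5),            # -> 16x10x10
    nn.ReLU(),
    nn.MaxPool2d(2),                            # -> 16x5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 10),                  # 10 digit classes
)

x = torch.randn(1, 1, 28, 28)                   # a fake batch of one image
print(model(x).shape)                           # torch.Size([1, 10])
```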
Regularization, normalization, maximum likelihood method:
- Regularization and neural networks
- Data normalization
- Seminar: Solving a classification problem on the CIFAR dataset
- Maximum Likelihood Method
- Seminar: Transfer learning using a Kaggle competition as an example (see the sketch after this list)
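Transfer learning in a nutshell, as a hedged sketch (assuming a recent torchvision; the two-class head is an arbitrary example, not the course task): take an ImageNet-pretrained network, freeze its features, and retrain only the final classifier.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning sketch: ImageNet-pretrained ResNet-18,
# frozen features, a freshly trained classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                    # freeze pretrained weights

model.fc = nn.Linear(model.fc.in_features, 2)  # new head; 2 classes is arbitrary
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```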
Requirements for students
The course is designed for students taking their first steps in machine learning. What is required from you?
- Have a basic knowledge of mathematical statistics.
- Be ready to program in Python.
- If you want to take the course at the advanced level, you will need a good knowledge of mathematical analysis, linear algebra, probability theory, and statistics.
Challenge accepted? Then proceed to the course!