
Top 10 trends in artificial intelligence (AI) technology in 2018

Hello everyone!

Students of the first run of the "Big Data Developer" course have entered the home stretch: the final month has begun, in which the survivors will tackle a real-world graduation project. Accordingly, enrollment has opened for a new run of this rather demanding course. With that in mind, let's look at an interesting article on current trends in AI, which are closely tied to Big Data, ML, and related fields.

Go.
Artificial intelligence is under close scrutiny from heads of government and business leaders as a key aid to decision-making. But what is happening in the laboratories, where the discoveries of academic and corporate researchers will set the course of AI development for years to come? Our own research team at PwC's AI Accelerator has focused on the leading developments that both business leaders and technologists should follow closely. Here they are, and here is why they matter.



1. Deep learning theory: demystifying how neural networks work

What it is: deep neural networks, which mimic the human brain, have demonstrated their ability to "learn" from image, audio, and text data. Yet even though they have been in use for more than a decade, there is still a lot we don't know about deep learning, including how neural networks train or why they work so well. That may change thanks to a new theory that applies the information bottleneck principle to deep learning. In essence, the theory suggests that after an initial fitting phase, a deep neural network "forgets" and compresses noisy data (that is, data sets containing a lot of additional meaningless information) while preserving information about what the data represents.
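
To make this concrete, here is a rough sketch of the kind of measurement used in information bottleneck analyses: estimating the mutual information between a discretized hidden-layer representation T, the inputs X, and the labels Y, and watching how I(T;X) shrinks while I(T;Y) is preserved during training. The simple binning approach and the helper names (discrete_mi, layer_information) are illustrative assumptions, not taken from the theory papers themselves.

```python
import numpy as np

def discrete_mi(a, b):
    """Estimate mutual information I(A;B) in bits from two lists of discrete ids."""
    n = len(a)
    joint, pa, pb = {}, {}, {}
    for x, y in zip(a, b):
        joint[(x, y)] = joint.get((x, y), 0) + 1
        pa[x] = pa.get(x, 0) + 1
        pb[y] = pb.get(y, 0) + 1
    return sum(c / n * np.log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in joint.items())

def layer_information(hidden, input_ids, labels, bins=30):
    """Discretize one layer's activations into bins, then measure I(T;X) and I(T;Y)."""
    edges = np.linspace(hidden.min(), hidden.max(), bins)
    t_ids = [tuple(row) for row in np.digitize(hidden, edges)]
    return discrete_mi(t_ids, list(input_ids)), discrete_mi(t_ids, list(labels))

# Example: 200 samples, 10 hidden units, 3 classes (random stand-in data).
hidden = np.random.randn(200, 10)
i_tx, i_ty = layer_information(hidden, input_ids=range(200),
                               labels=np.random.randint(0, 3, 200))
print(i_tx, i_ty)  # track these two numbers per layer over training epochs
```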

Why this is important: a precise understanding of how deep learning works enables its wider development and use. For example, it can make optimal choices of network design and architecture more apparent, while providing greater transparency for systems requiring high reliability or for regulated applications. Expect to see more results from exploring this theory as it is applied to other types of deep neural networks and to network design in general.

2. Capsule networks: mimicking the brain's processing of visual information

What it is: capsule networks, a new type of deep neural network, process visual information in much the same way the brain does, which means they can preserve hierarchical relationships. This contrasts sharply with convolutional neural networks, one of the most widely used types of neural network, which fail to take into account important spatial hierarchies between simple and complex objects, leading to misclassification and high error rates.
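
As a small illustration, below is a minimal sketch (assuming NumPy) of the "squash" nonlinearity used in capsule networks: it maps each capsule's output vector to a length between 0 and 1, interpreted as the probability that the entity it represents is present, while preserving the vector's direction, which encodes pose.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity: shrink a capsule's output vector so its
    length lies in (0, 1) (probability that the entity is present) while
    keeping its direction, which encodes pose (position, orientation, scale)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

# A batch of 3 capsules, each an 8-dimensional pose vector.
capsules = np.random.randn(3, 8)
print(np.linalg.norm(squash(capsules), axis=-1))  # lengths all fall in (0, 1)
```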

Why this is important: for typical identification tasks, capsule networks promise better accuracy by cutting errors by as much as 50%. They also don't need as much data to train their models. Expect to see capsule networks used widely across many problem domains and deep neural network architectures.

3. Deep reinforcement learning: interacting with the environment to solve business problems

What it is: a type of neural network learning in which the network learns by interacting with an environment through observations, actions, and rewards. Deep reinforcement learning (DRL) has been used to learn gaming strategies, for example in Atari games and Go, including the well-known AlphaGo program that defeated a human champion.
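
For intuition, here is a minimal reinforcement learning sketch: tabular Q-learning on a toy corridor environment rather than a deep network, but it shows the same observation-action-reward loop that DRL builds on. The environment and hyperparameters are invented purely for illustration.

```python
import random

# A toy 1-D corridor: start at cell 0, reward +1 for reaching cell 4.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy should move right (+1) toward the reward.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```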

Why this is important: DRL is the most general-purpose of all learning techniques, so it can be used in most business applications. It requires less data than other techniques to train its models. Even more remarkably, it can be trained via simulation, which eliminates the need for labeled data entirely. Given these advantages, expect to see more business applications that combine DRL with agent-based simulation in the coming year.

4. Generative adversarial networks: pairing neural networks to stimulate learning and lighten the computational load

What it is: a generative adversarial network (GAN) is a type of unsupervised learning system implemented as two competing neural networks. One network, the generator, creates fake data that looks just like the real data set. The second, the discriminator, ingests both authentic and generated data and tries to tell them apart. Over time, each network improves, enabling the pair to learn the entire distribution of the data set.
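
A minimal sketch of that two-network game, assuming PyTorch (any deep learning framework would do): a generator learns to mimic samples from a simple 1-D Gaussian while a discriminator learns to tell its fakes from the real samples.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(3, 1).
real_data = lambda n: torch.randn(n, 1) + 3.0
noise = lambda n: torch.randn(n, 1)

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and generated samples 0.
    real, fake = real_data(64), G(noise(64)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(noise(64))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(noise(1000)).mean().item())  # should drift toward 3.0 as the generator improves
```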

Why this is important: GANs open up deep learning to a wider range of unsupervised tasks in which labeled data doesn't exist or is too expensive to obtain. They also reduce the workload needed to implement a deep neural network, since the burden is shared between the two networks. Expect to see more business applications, such as cyber attack detection, using GANs.

5. Lean and augmented data learning: tackling the labeled data problem

What it is: a major challenge in machine learning (and deep learning in particular) is the availability of large volumes of labeled data to train a system. Two broad techniques can help address this: (1) synthesizing new data and (2) transferring a model trained for one task or domain to another. Techniques such as transfer learning (transferring knowledge gained in one task or domain to another) or one-shot learning ("extreme" transfer learning that works with only one relevant example, or even none) make up the family of "lean data" learning techniques. Similarly, synthesizing new data through simulation or interpolation yields more data, augmenting the existing data to improve learning.
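
As a sketch of transfer learning, the snippet below (assuming PyTorch/torchvision; the exact pretrained-weights flag varies between torchvision versions) reuses a network pretrained on ImageNet, freezes it, and trains only a small new head for a hypothetical 5-class task, so far less labeled data is needed.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a network pretrained on ImageNet and retrain only a small new
# "head" for our own (hypothetical) 5-class task.
base = models.resnet18(pretrained=True)      # weight-loading flag differs across versions
for p in base.parameters():
    p.requires_grad = False                  # freeze the transferred knowledge
base.fc = nn.Linear(base.fc.in_features, 5)  # only this new layer will be trained

optimizer = torch.optim.Adam(base.fc.parameters(), lr=1e-3)
# Training then proceeds as usual (forward pass, cross-entropy loss, optimizer step),
# but far less labeled data is needed because only the head is fit from scratch.
```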

Why this is important: with these techniques we can tackle a wider variety of problems, especially those with little labeled input data. Expect to see more variants of lean and augmented data learning, as well as different types of learning applied to a broad range of business problems.

6. Probabilistic programming: languages to facilitate model development

What it is: a high-level programming language that makes it easy to describe a probabilistic model and then "solve" that model automatically. Probabilistic programming languages make it possible to reuse model libraries, support interactive modeling and formal verification, and provide the abstraction needed for generic, efficient inference across broad classes of models.
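
Here is a minimal sketch in PyMC3-style syntax (the exact API differs between probabilistic programming languages and versions): the model is only declared, and the library runs inference automatically.

```python
import pymc3 as pm

# Did each of ten customers convert? We want a belief about the conversion rate.
conversions = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]

with pm.Model():
    rate = pm.Beta("rate", alpha=1, beta=1)             # the uncertain quantity
    pm.Bernoulli("obs", p=rate, observed=conversions)   # how the data was generated
    trace = pm.sample(2000, tune=1000)                   # inference happens automatically

print(trace["rate"].mean())  # posterior mean of the conversion rate
```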

Why this is important: probabilistic programming languages can account for the uncertain and incomplete information that is so common in business. We will see wider adoption of these languages and expect them to be applied to deep learning as well.

7. Hybrid learning models: combining approaches to model uncertainty

What it is: different types of deep neural networks, such as GANs and DRL, have shown great promise in terms of performance and broad applicability across data types. However, deep learning models do not model uncertainty the way Bayesian or probabilistic approaches do. Hybrid learning models combine the two approaches to leverage the strengths of each. Some examples of hybrid models are Bayesian deep learning, Bayesian GANs, and Bayesian conditional GANs.
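
One lightweight way to get uncertainty out of a deep model, shown in the sketch below (assuming PyTorch), is Monte Carlo dropout: keep dropout active at prediction time and treat the spread across repeated stochastic forward passes as an approximate Bayesian uncertainty estimate. It is just one of several hybrid techniques, used here only as an illustration.

```python
import torch
import torch.nn as nn

# A small regression network with dropout, used to estimate predictive uncertainty.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 1))

def predict_with_uncertainty(x, n_samples=100):
    model.train()  # keep dropout stochastic even when predicting
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # prediction and its uncertainty

mean, std = predict_with_uncertainty(torch.randn(1, 10))
print(mean.item(), std.item())  # a large std flags an unreliable prediction
```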

Why this is important: hybrid learning models make it possible to broaden the range of business problems addressed, to include deep learning with uncertainty. This can help us achieve better performance and explainability of models, which in turn can encourage wider adoption. Expect more deep learning methods to gain Bayesian equivalents, and expect probabilistic programming languages to start incorporating deep learning.

8. Automatic machine learning (AutoML): creating models without programming

What it is: developing machine learning models requires a laborious, expert-driven workflow that includes data preparation, feature selection, model or technique selection, training, and tuning. AutoML aims to automate this workflow using a range of statistical and deep learning techniques.
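
A tiny taste of the idea, assuming scikit-learn: instead of hand-tuning a model, a search procedure picks its hyperparameters automatically. Full AutoML systems go much further and also automate data preparation, feature engineering, and model selection.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Automated hyperparameter search over a random forest on a toy dataset.
X, y = load_iris(return_X_y=True)
search = RandomizedSearchCV(
    RandomForestClassifier(),
    param_distributions={"n_estimators": range(10, 200),
                         "max_depth": [None, 3, 5, 10],
                         "max_features": ["sqrt", "log2", None]},
    n_iter=20, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)  # the chosen configuration and its score
```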

Why this is important: AutoML is part of what is seen as the democratization of AI tools, enabling business users to develop machine learning models without deep programming expertise. It will also shorten the time data scientists spend creating models. Expect to see more commercial AutoML packages and AutoML integrated into larger machine learning platforms.

9. Digital twins: virtual copies beyond industrial applications

What it is: a digital twin is a virtual model used to facilitate detailed analysis and monitoring of physical or even psychological systems. The digital twin concept originated in the industrial world, where it has been widely used to analyze and monitor things such as wind farms and industrial systems. Now, using agent-based modeling (computational models that simulate the actions and interactions of autonomous agents) and system dynamics (a computational approach to modeling how behavior unfolds over time), digital twins are being applied to non-physical objects and processes, including predicting customer behavior.
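
Below is a toy agent-based sketch of the idea in plain Python: each agent is a simplified "virtual copy" of a customer whose monthly churn decision depends on satisfaction, so running the simulation predicts aggregate behavior before anything is changed in the real system. The model and its parameters are invented purely for illustration.

```python
import random

class Customer:
    """A simplified virtual copy of one customer."""
    def __init__(self):
        self.satisfaction = random.uniform(0.3, 1.0)
        self.active = True

    def step(self, price_increase):
        if not self.active:
            return
        self.satisfaction -= price_increase * random.uniform(0.0, 0.2)
        if random.random() > self.satisfaction:  # low satisfaction -> more churn
            self.active = False

# The "twin" of a 10,000-customer base; simulate a year of monthly 5% price increases.
twin = [Customer() for _ in range(10_000)]
for month in range(12):
    for customer in twin:
        customer.step(price_increase=0.05)
    print(month, sum(c.active for c in twin))  # predicted retained customers per month
```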

Why this is important: digital twins can drive the development and broader adoption of the Internet of Things (IoT) by providing predictive diagnostics and support for IoT systems. Going forward, expect to see greater use of digital twins, both for physical systems and for modeling consumer choices.

10. Explainable AI: opening up the black box

What it is: today there are many machine learning algorithms in use that sense, think, and act across a wide variety of applications. Yet many of these algorithms are considered "black boxes," offering little insight into how they reached their results. Explainable AI is a movement toward developing machine learning techniques that produce more explainable models while maintaining predictive accuracy.
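
One simple, model-agnostic explainability technique is permutation importance, sketched below with scikit-learn: it measures how much a trained "black box" model's score drops when each input feature is shuffled, revealing which features drive its predictions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a "black box" model, then ask which features its predictions depend on.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(data.feature_names[i], round(result.importances_mean[i], 3))  # top 5 drivers
```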

Why this is important: explainable, provable, and transparent AI will be critical for establishing trust in the technology and will encourage broader adoption of machine learning techniques. Businesses will adopt explainable AI as a requirement or best practice before embarking on large-scale AI deployments, while governments may make explainable AI a regulatory requirement in the future.

THE END

As always, we welcome comments and questions here, or, for example, you can discuss all this with Xenia in an open lesson.

Source: https://habr.com/ru/post/350614/

