
Deep Learning: Comparing Frameworks for Symbolic Deep Learning

We present a translation of a series of articles on deep learning. This first part covers choosing among the open-source frameworks for symbolic deep learning: MXNET, TensorFlow, and Theano. The author compares the advantages and disadvantages of each in detail. In the following parts, you will learn about fine-tuning deep convolutional networks, as well as combining a deep convolutional neural network with a recurrent neural network.



The "Deep Learning" series of articles


1. Comparison of frameworks for symbolic deep learning.
2. Transfer learning and fine-tuning of deep convolutional neural networks.
3. Combining a deep convolutional neural network with a recurrent neural network.

Note: the rest of the narration is in the author's own voice.

Symbolic frameworks


Symbolic computing frameworks (MXNET, TensorFlow, Theano) are built around symbolic graphs of vector operations, such as matrix addition/multiplication or convolution. A layer is simply a set of such operations. Because everything is divided into small composite components (operations), users can create new, complex layer types without resorting to low-level languages (as Caffe requires).
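To make the idea concrete, here is a minimal, framework-agnostic sketch of a symbolic graph in plain Python. All names (`Node`, `variable`, `evaluate`) are invented for illustration and are not the API of MXNET, TensorFlow, or Theano; real frameworks additionally track shapes and types and compile the graph for speed.

```python
# Toy sketch of a symbolic graph: operations are nodes, built first and
# evaluated later. Names are invented; this is not any framework's real API.

class Node:
    """A node in a symbolic graph: an operation plus its inputs."""
    def __init__(self, op, inputs, name=None):
        self.op, self.inputs, self.name = op, inputs, name

    # Operator overloading lets users compose graphs like ordinary math.
    def __add__(self, other):
        return Node('add', [self, other])

    def __mul__(self, other):
        return Node('mul', [self, other])

def variable(name):
    """A symbolic placeholder; it has no value until evaluation time."""
    return Node('var', [], name)

def evaluate(node, env):
    """Walk the graph and compute a value, given bindings for variables."""
    if node.op == 'var':
        return env[node.name]
    args = [evaluate(i, env) for i in node.inputs]
    if node.op == 'add':
        return args[0] + args[1]
    if node.op == 'mul':
        return args[0] * args[1]
    raise ValueError('unknown op: %s' % node.op)

# A "layer" is just a composition of such operations: y = w * x + b
x, w, b = variable('x'), variable('w'), variable('b')
y = w * x + b
print(evaluate(y, {'x': 3.0, 'w': 2.0, 'b': 1.0}))  # 7.0
```

The key point is the separation between building the graph (cheap, declarative) and evaluating it (where a real framework substitutes optimized, possibly GPU-backed, kernels).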

I have experience using different frameworks for symbolic computing. As it turned out, each has advantages and disadvantages in its design and current implementation, and none of them fully meets all requirements. Still, at the moment I prefer Theano.

Next, we compare the listed frameworks for symbolic computing.

| Characteristic | Theano | TensorFlow | MXNET |
|---|---|---|---|
| Author | University of Montreal | Google Brain Team | Distributed (Deep) Machine Learning Community |
| Software license | BSD license | Apache 2.0 | Apache 2.0 |
| Open source | Yes | Yes | Yes |
| Platform | Cross-platform | Linux, Mac OS X; Windows support planned | Ubuntu, OS X, Windows, AWS, Android, iOS, JavaScript |
| Programming language | Python | C++, Python | C++, Python, Julia, Matlab, R, Scala |
| Interface | Python | C/C++, Python | C++, Python, Julia, Matlab, JavaScript, R, Scala |
| CUDA support | Yes | Yes | Yes |
| Automatic differentiation | Yes | Yes | Yes |
| Pre-trained models available | Via the model zoo in Lasagne | No | Yes |
| Recurrent networks | Yes | Yes | Yes |
| Convolutional networks | Yes | Yes | Yes |
| Restricted Boltzmann machines / deep belief networks | Yes | Yes | Yes |
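The "automatic differentiation" row deserves a word: all three frameworks differentiate the symbolic graph for you. Below is a toy reverse-mode sketch in plain Python of that idea; the `Var` class and `backward` function are invented for illustration, not any framework's real API.

```python
# Toy reverse-mode automatic differentiation over scalar + and *.
# Real symbolic frameworks derive gradients from the graph before compiling.

class Var:
    def __init__(self, value):
        self.value, self.grad = value, 0.0
        self._backward = lambda: None
        self._parents = ()

    def __add__(self, other):
        out = Var(self.value + other.value)
        def _backward():  # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward, out._parents = _backward, (self, other)
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def _backward():  # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        out._backward, out._parents = _backward, (self, other)
        return out

def backward(out):
    """Topologically order the graph, then propagate gradients output-to-input."""
    order, seen = [], set()
    def visit(v):
        if id(v) in seen:
            return
        seen.add(id(v))
        for p in v._parents:
            visit(p)
        order.append(v)
    visit(out)
    out.grad = 1.0
    for v in reversed(order):
        v._backward()

x, w = Var(3.0), Var(2.0)
y = w * x + Var(1.0)   # y = w*x + b
backward(y)
print(x.grad, w.grad)  # dy/dx = 2.0, dy/dw = 3.0
```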

Comparison of symbolic and non-symbolic frameworks


Non-symbolic frameworks


Benefits:


Disadvantages:


Symbolic frameworks


Benefits:


Disadvantages:


Adding new operations


In all of these frameworks, adding operations while maintaining acceptable performance is not easy.
| Theano / MXNET | TensorFlow |
|---|---|
| Operations can be added in Python, with support for inline C operators. | Forward pass in C++, symbolic gradient in Python. |

Code reuse


Training deep networks takes a lot of time. Caffe therefore released several pre-trained models (the model zoo) that can be used as starting points for transfer learning or fine-tuning deep networks on domain-specific data or custom images.
| Theano | TensorFlow | MXNET |
|---|---|---|
| Lasagne is a high-level platform built on Theano; it makes it easy to use pre-trained Caffe models. | No support for pre-trained models. | MXNET provides the caffe_converter tool for converting pre-trained Caffe models to the MXNET format. |

Low Level Tensor Operators


Efficiently implemented low-level operators can serve as composite building blocks when creating new models, without the effort of writing new operators.
| Theano | TensorFlow | MXNET |
|---|---|---|
| Many simple operations | Quite good | Very few |

Flow control operators


Flow control operators enhance the expressiveness and versatility of a symbolic system.
| Theano | TensorFlow | MXNET |
|---|---|---|
| Supported | Experimental | Not supported |
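As a rough illustration of what such operators buy you, here is a plain-Python sketch of a scan-style loop over a sequence with carried state. The name `scan` echoes Theano's operator of the same idea, but the implementation below is only a toy, not Theano's API (which builds the loop into the symbolic graph itself).

```python
# Toy scan: apply a step function over a sequence, carrying state forward.
# In a symbolic framework the loop itself lives inside the compiled graph.

def scan(step, sequence, initial):
    state, outputs = initial, []
    for item in sequence:
        state = step(item, state)   # step(current_input, previous_state)
        outputs.append(state)
    return outputs

# Cumulative sum expressed as a scan-style loop
print(scan(lambda x, acc: acc + x, [1, 2, 3, 4], 0))  # [1, 3, 6, 10]
```

This pattern is exactly what recurrent networks need: the same step applied at every timestep, with hidden state threaded through.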

High-level support


| Theano | TensorFlow | MXNET |
|---|---|---|
| A "pure" symbolic computing framework. High-level platforms can be built on top as needed; successful examples include Keras, Lasagne, and Blocks. | Well designed for training neural networks, yet not focused exclusively on them, which is very good: graph collections, queues, and image augmentation can serve as composite components for high-level shells. | In addition to the symbolic part, MXNET also provides all the components needed for image classification, from data loading to building models, with methods to launch training. |
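To show what "high-level support" means in practice, here is a toy Sequential-style container in plain Python, loosely inspired by the Keras/Lasagne style. The class and the two "layers" below are invented for illustration; they are not the real Keras or Lasagne APIs, which build symbolic graphs rather than call Python functions directly.

```python
# Toy Sequential container: the high-level API is just an ordered list of
# layers composed in sequence, while a symbolic backend does the real math.

class Sequential:
    def __init__(self):
        self.layers = []

    def add(self, layer):
        self.layers.append(layer)

    def __call__(self, x):
        # Feed the output of each layer into the next one.
        for layer in self.layers:
            x = layer(x)
        return x

# Two toy "layers" operating on plain lists of floats
scale = lambda v: [2.0 * u for u in v]          # multiply every element by 2
relu = lambda v: [max(0.0, u) for u in v]       # clamp negatives to zero

model = Sequential()
model.add(scale)
model.add(relu)
print(model([-1.0, 2.0]))  # [0.0, 4.0]
```

The value of a high-level shell is exactly this kind of composition: the user thinks in layers, and the framework handles the underlying operations.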

Performance


Single-GPU Performance Measurement


In my tests, I measured the performance of the LeNet model on the MNIST dataset in a single-GPU configuration (NVIDIA Quadro K1200 GPU).
| Theano | TensorFlow | MXNET |
|---|---|---|
| Good | Average | Excellent |

Memory


The amount of GPU memory is limited, so fitting large models can be problematic.
| Theano | TensorFlow | MXNET |
|---|---|---|
| Good | Average | Excellent |

Single-GPU speed


Theano takes a very long time to compile graphs, especially for complex models. TensorFlow is a little slower still.
| Theano / MXNET | TensorFlow |
|---|---|
| Comparable to cuDNNv4 | Approximately twice as slow |

Support parallel and distributed computing


| Theano | TensorFlow | MXNET |
|---|---|---|
| Experimental multi-GPU support | Multi-GPU | Distributed |

Conclusion


Theano (with the high-level Lasagne and Keras solutions) is an excellent choice for deep learning models. With Lasagne/Keras it is very easy to create new networks and modify existing ones. I prefer Python, so I choose Lasagne/Keras for their highly developed Python interfaces. However, these solutions do not support R. The transfer-learning and fine-tuning capabilities of Lasagne/Keras show how easy it is to modify existing networks and adapt them to domain-specific user data.

Comparing the frameworks, the best overall choice is MXNET (better performance, efficient memory use). In addition, it has excellent R support; in fact, it is the only platform that supports all of these functions in R. Transfer learning and fine-tuning of networks are possible in MXNET, but are rather difficult to perform (compared with Lasagne/Keras). That makes it harder both to modify existing trained networks and to adapt them to domain-specific user data.

If you spot an inaccuracy in the translation, please let us know in a private message.

Source: https://habr.com/ru/post/313318/

