Caffe2 - Introduction



In recent years, Deep Learning has become a major trend in Machine Learning. It has been successfully applied to solve previously unsolvable problems in Vision, Speech Recognition and Natural Language Processing (NLP). There are many more domains in which Deep Learning is being applied and has demonstrated its usefulness.

Caffe (Convolutional Architecture for Fast Feature Embedding) is a deep learning framework developed at the Berkeley Vision and Learning Center (BVLC). The Caffe project was created by Yangqing Jia during his Ph.D. at the University of California, Berkeley. Caffe provides a simple way to experiment with deep learning. It is written in C++ and provides bindings for Python and MATLAB.

It supports many different types of deep learning architectures such as CNN (Convolutional Neural Network), LSTM (Long Short Term Memory) and FC (Fully Connected) networks. It supports GPU acceleration and is thus ideally suited for production environments involving deep neural networks. It also supports acceleration libraries such as the NVIDIA CUDA Deep Neural Network library (cuDNN) for GPUs and the Intel Math Kernel Library (Intel MKL) for CPUs.
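If you want to quickly check whether your Caffe2 build supports GPU acceleration, the Python bindings expose a simple flag for this. The snippet below is a minimal sketch and assumes a standard Caffe2 installation; the attribute names may vary slightly between versions.

# A minimal sketch: ask the Caffe2 Python bindings whether this build
# was compiled with GPU support. Assumes Caffe2 is already installed.
from caffe2.python import workspace

if workspace.has_gpu_support:
    print("Caffe2 was built with GPU support;",
          workspace.NumCudaDevices(), "CUDA device(s) found")
else:
    print("Running Caffe2 in CPU-only mode")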

In April 2017, the U.S.-based social networking company Facebook announced Caffe2, which adds support for RNNs (Recurrent Neural Networks); in March 2018, Caffe2 was merged into PyTorch. Caffe2 creators and community members have built models for solving various problems and made them available to the public as pre-trained models. Caffe2 makes it easy to use these pre-trained models and to build one's own networks for making predictions on a dataset.
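For example, a pre-trained Caffe2 model is usually distributed as two protobuf files: an init net holding the trained weights and a predict net describing the network graph. The sketch below shows how such a model could be loaded and run; the file names and the 1x3x224x224 input shape are placeholders for illustration, not part of any particular model.

# A minimal sketch of running a Caffe2 pre-trained model.
# "init_net.pb" and "predict_net.pb" are placeholder file names.
import numpy as np
from caffe2.python import workspace

with open("init_net.pb", "rb") as f:
    init_net = f.read()        # serialized weights
with open("predict_net.pb", "rb") as f:
    predict_net = f.read()     # serialized network definition

# Build a predictor from the two serialized nets and run it on dummy input.
p = workspace.Predictor(init_net, predict_net)
img = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = p.run([img])
print("Output shape:", results[0].shape)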

Before we go into the details of Caffe2, let us understand the difference between machine learning and deep learning. This is important for understanding how models are created and used in Caffe2.

Machine Learning v/s Deep Learning

In any machine learning algorithm, be it a traditional one or a deep learning one, the selection of features in the dataset plays an extremely important role in achieving the desired prediction accuracy. In traditional machine learning techniques, feature selection is generally done by human inspection, judgement and deep domain knowledge. Sometimes, you may also rely on a few well-tested algorithms to aid feature selection, as the sketch below illustrates.
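As a concrete illustration of such an algorithmic aid, the following sketch uses scikit-learn, which is not part of Caffe2 and is shown here purely for comparison, to keep the two statistically strongest features of a small dataset.

# A minimal sketch of algorithm-assisted feature selection in traditional ML.
# scikit-learn is used only for illustration; it is not part of Caffe2.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
selector = SelectKBest(score_func=f_classif, k=2)   # keep the 2 best features
X_selected = selector.fit_transform(X, y)

print("Original feature count:", X.shape[1])
print("Selected feature count:", X_selected.shape[1])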

The traditional machine learning flow is depicted in the figure below −

[Figure: Traditional machine learning flow]

In deep learning, feature selection is automatic and is a part of the deep learning algorithm itself. This is depicted in the figure below −

[Figure: Deep learning flow]

In deep learning algorithms, feature engineering is done automatically. Generally, feature engineering is time-consuming and requires sound domain expertise. To achieve automatic feature extraction, deep learning algorithms typically require huge amounts of data, so if you have only thousands or tens of thousands of data points, a deep learning technique may fail to give you satisfactory results.

With larger datasets, deep learning algorithms produce better results than traditional ML algorithms, with the additional advantage of little or no feature engineering.




