An Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from the analogy of biological neural networks. Neural networks are one kind of model for machine learning. In the mid-1980s and early 1990s, many significant architectural advancements were made in neural networks. In this section, you will learn more about Deep Learning, an approach of AI.
Deep learning emerged as a genuine contender in the field after a decade of explosive computational growth. Deep learning is a specific kind of machine learning whose algorithms are inspired by the structure and function of the human brain.
Deep learning is the most powerful machine learning technique these days. It is so powerful because deep learning models learn the best way to represent the problem while learning how to solve it. A comparison of deep learning and machine learning is given below -
The first point of difference is based upon the performance of DL and ML when the scale of data increases. When the data is large, deep learning algorithms perform very well.
Deep learning algorithms need high-end machines to work perfectly. On the other hand, machine learning algorithms can work on low-end machines as well.
Deep learning algorithms can extract high-level features and attempt to learn from them as well. In contrast, a domain expert is required to identify most of the features used by machine learning algorithms.
Execution time depends upon the number of parameters used in an algorithm. Deep learning models have many more parameters than machine learning algorithms. Hence, the execution time of DL algorithms, especially the training time, is much longer than that of ML algorithms. However, the testing time of DL algorithms is often less than that of ML algorithms.
Deep learning solves the problem end-to-end, while machine learning uses the traditional approach of breaking the problem down into parts.
Convolutional neural networks are similar to ordinary neural networks, as they are also made up of neurons that have learnable weights and biases. Ordinary neural networks ignore the structure of the input data: all the data is converted into a 1-D array before being fed into the network. This process suits regular data, but if the data contains images, it can be cumbersome.
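A short NumPy sketch of the flattening step described above (the 28 × 28 image size is a hypothetical example, not part of the original text):

```python
import numpy as np

# A hypothetical 28 x 28 grayscale image
image = np.arange(28 * 28).reshape(28, 28)

# An ordinary neural network flattens it into a 1-D array,
# discarding the spatial arrangement of the pixels
flat = image.reshape(-1)
print(flat.shape)   # (784,)
```

Once flattened, the network can no longer tell which pixels were neighbours, which is exactly the information CNNs preserve.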
CNNs solve this issue easily. They take the 2D structure of images into account when processing them, which permits them to extract properties specific to images. In this way, the primary goal of CNNs is to go from the raw image data in the input layer to the correct class in the output layer. The only difference between ordinary NNs and CNNs is in the treatment of input data and in the kind of layers.
Architecturally, ordinary neural networks receive an input and transform it through a series of hidden layers. Every layer is connected to the next with the assistance of neurons. The main disadvantage of ordinary neural networks is that they don't scale well to full images.
The architecture of CNNs has neurons arranged in 3 dimensions: width, height and depth. Each neuron in the current layer is connected to a small patch of the output from the previous layer. It is like overlaying an N × N filter on the input image. A CNN uses M such filters to be sure of capturing all the details. These M filters are feature extractors which extract features like edges, corners, etc.
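The patch-and-filter idea above can be sketched in plain NumPy; the 3 × 3 vertical-edge filter here is an illustrative assumption, not part of the original text:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide an N x N kernel over the image (valid padding, stride 1)."""
    n = kernel.shape[0]
    h, w = image.shape
    out = np.zeros((h - n + 1, w - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + n, j:j + n]      # small patch of the input
            out[i, j] = np.sum(patch * kernel)   # one neuron's response
    return out

image = np.random.rand(8, 8)
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])  # a simple vertical-edge extractor
feature_map = convolve2d(image, edge_filter)
print(feature_map.shape)   # (6, 6)
```

Applying M different kernels in this way produces M feature maps, one per extracted feature.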
The following layers are used to construct CNNs -
Input Layer - It takes the raw image data as it is.
Convolutional Layer - This layer is the core building block of CNNs that does most of the computation. This layer computes the convolutions between the neurons and the different patches in the input.
Rectified Linear Unit Layer - It applies an activation function to the output of the previous layer. It adds non-linearity to the network so that it can generalize well to any type of function.
Pooling Layer - Pooling helps us to keep only the significant parts as we progress in the network. Pooling layer operates independently on each depth slice of the input and resizes it spatially. It utilizes the MAX function.
Fully Connected Layer/Output Layer - This layer computes the output scores in the last layer. The resulting output is of size 1 × 1 × L, where L is the number of classes in the training dataset.
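The rectifier and MAX-pooling operations described in the list above can be sketched in plain NumPy (the feature-map values are hypothetical, for illustration only):

```python
import numpy as np

def relu(x):
    # ReLU zeroes out negative values, adding non-linearity
    return np.maximum(0, x)

def max_pool(feature_map, size=2):
    """Max pooling: keep only the MAX of each size x size window."""
    h, w = feature_map.shape
    windows = feature_map[:h - h % size, :w - w % size]
    windows = windows.reshape(h // size, size, w // size, size)
    return windows.max(axis=(1, 3))

fm = np.array([[ 1, -3,  2,  4],
               [ 5,  6, -1,  0],
               [ 1,  2,  8,  7],
               [ 0, -1,  3,  4]])
activated = relu(fm)          # negatives become 0
pooled = max_pool(activated)  # 4 x 4 map shrinks to 2 x 2
print(pooled)
# [[6 4]
#  [2 8]]
```

Notice how pooling halves each spatial dimension while keeping the strongest responses, which is what "keeping only the significant parts" means in practice.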
You can use Keras, a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK or Theano. It is compatible with Python 2.7-3.6. You can learn more about it from https://keras.io/.
Use the following command to install Keras -
pip install keras
In a conda environment, you can use the following command -
conda install -c conda-forge keras
In this chapter, you will learn how to build a linear regressor using artificial neural networks. You can use KerasRegressor to achieve this. In this example, we are using the Boston house price dataset, which has 13 numerical properties for houses in Boston. The Python code for the same is shown here -
Import all the necessary packages as shown -
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
Now, load our dataset, which is saved in the local directory.
dataframe = pandas.read_csv("/Users/admin/data.csv", delim_whitespace = True, header = None)
dataset = dataframe.values
Now, divide the data into input and output variables i.e. X and Y -
X = dataset[:,0:13]
Y = dataset[:,13]
Since we use a baseline neural network, define the model inside a function so that it can later be passed to KerasRegressor -

Now, create the model as follows -

def baseline_model():
    model_regressor = Sequential()
    model_regressor.add(Dense(13, input_dim = 13, kernel_initializer = 'normal', activation = 'relu'))
    model_regressor.add(Dense(1, kernel_initializer = 'normal'))

Next, compile the model and return it -

    model_regressor.compile(loss = 'mean_squared_error', optimizer = 'adam')
    return model_regressor
Now, fix the random seed for reproducibility as follows -
seed = 7
numpy.random.seed(seed)
The Keras wrapper object for use in scikit-learn as a regression estimator is called KerasRegressor. In this chapter, we shall evaluate this model with a standardized dataset.
estimator = KerasRegressor(build_fn = baseline_model, epochs = 100, batch_size = 5, verbose = 0)
kfold = KFold(n_splits = 10, shuffle = True, random_state = seed)
baseline_result = cross_val_score(estimator, X, Y, cv = kfold)
print("Baseline: %.2f (%.2f) MSE" % (baseline_result.mean(), baseline_result.std()))
The output of the code shown above is an estimate of the model's performance on unseen data. It is the mean squared error, including the mean and standard deviation across all 10 folds of the cross-validation evaluation.
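Since the evaluation above is described as using a standardized dataset, here is a brief NumPy sketch of what standardization does (the data is synthetic, for illustration only):

```python
import numpy as np

# Standardization rescales each feature to zero mean and unit variance,
# which typically helps neural network training converge.
X = np.array([[1.0, 200.0],
              [2.0, 300.0],
              [3.0, 400.0]])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_std.mean(axis=0))  # approximately [0. 0.]
print(X_std.std(axis=0))   # [1. 1.]
```

Without this step, features on large scales (like the second column) would dominate the gradient updates.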
Convolutional Neural Networks (CNNs) solve an image classification problem, that is, which class the input image belongs to. You can use the Keras deep learning library. Note that we are using the training and testing datasets of images of cats and dogs from the following link: https://www.kaggle.com/c/dogs-vs-cats/data.
Import the important Keras libraries and packages as shown -
The following package, called Sequential, will initialize the neural network as a sequential network.
from keras.models import Sequential
The following package, called Conv2D, is used to perform the convolution operation, the first step of CNN.
from keras.layers import Conv2D
The following package, called MaxPooling2D, is used to perform the pooling operation, the second step of CNN.
from keras.layers import MaxPooling2D
The following package, called Flatten, is used to convert all the resultant 2-D arrays into a single long continuous linear vector.
from keras.layers import Flatten
The following package, called Dense, is used to perform the full connection of the neural network, the fourth step of CNN.
from keras.layers import Dense
Now, create an object of the Sequential class.
S_classifier = Sequential()
Now, the next step is to code the convolution part.
S_classifier.add(Conv2D(32, (3, 3), input_shape = (64, 64, 3), activation = 'relu'))
Here, relu is the rectifier activation function.
Now, the next step of CNN is the pooling operation on the feature maps that result from the convolution part.
S_classifier.add(MaxPooling2D(pool_size = (2, 2)))
Now, convert all the pooled images into a continuous vector by using Flatten -

S_classifier.add(Flatten())
Next, create a fully connected layer.
S_classifier.add(Dense(units = 128, activation = 'relu'))
Here, 128 is the number of hidden units. It is a common practice to define the number of hidden units as a power of 2.
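As a rough illustration of why the hidden-unit count matters for model size, the trainable parameters of a fully connected layer can be counted as inputs × units + units. The flattened input size below assumes the 64 × 64 × 3 input, 3 × 3 convolution with 32 filters and 2 × 2 pooling used earlier:

```python
def dense_params(n_inputs, n_units):
    # weights (n_inputs x n_units) plus one bias per unit
    return n_inputs * n_units + n_units

# Assumed flattened size: (64-3+1)/1 = 62 -> pooled to 31, with 32 filters
flattened = 31 * 31 * 32          # 30752 inputs
print(dense_params(flattened, 128))  # 3936384
print(dense_params(flattened, 256))  # doubling the units roughly doubles this
```

Doubling the hidden units roughly doubles the layer's parameters, so the choice trades capacity against training cost.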
Now, initialize the output layer as follows -
S_classifier.add(Dense(units = 1, activation = 'sigmoid'))
Now, compile the CNN we have built -
S_classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
Here, the optimizer parameter chooses the stochastic gradient descent algorithm, the loss parameter picks the loss function, and the metrics parameter chooses the performance metric.
Now, perform image augmentation and then fit the images to the neural network -
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory("/Users/admin/training_set", target_size = (64, 64), batch_size = 32, class_mode = 'binary')
test_set = test_datagen.flow_from_directory('test_set', target_size = (64, 64), batch_size = 32, class_mode = 'binary')
Now, fit the data to the model we have created -
S_classifier.fit_generator(training_set, steps_per_epoch = 8000, epochs = 25, validation_data = test_set, validation_steps = 2000)
Here, steps_per_epoch is the number of training images.
Now that the model has been trained, we can use it for prediction as follows -
import numpy as np
from keras.preprocessing import image

test_image = image.load_img('dataset/single_prediction/cat_or_dog_1.jpg', target_size = (64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = S_classifier.predict(test_image)
training_set.class_indices
if result == 1:
    prediction = 'dog'
else:
    prediction = 'cat'