MSDS458 Research Assignment 02 - Part 1¶

More Technical: Throughout the notebook, these boxes provide more technical detail and extra references about what you are seeing. They contain helpful tips, but you can safely skip them the first time you run through the code.

The CIFAR-10 dataset (Canadian Institute For Advanced Research) is a collection of images that are commonly used to train machine learning and computer vision algorithms. It is one of the most widely used datasets for machine learning research. The CIFAR-10 dataset contains 60,000 32x32 color images in 10 different classes. The 10 different classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. There are 6,000 images of each class.

The CIFAR-10 dataset
https://www.cs.toronto.edu/~kriz/cifar.html

Imports¶

In [44]:
import numpy as np
import pandas as pd
from packaging import version

from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error as MSE
from sklearn.model_selection import train_test_split

import matplotlib.pyplot as plt
import seaborn as sns

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import models, layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, BatchNormalization, Dropout, Flatten, Dense
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
from tensorflow.keras.preprocessing import image
from tensorflow.keras.utils import to_categorical
In [45]:
%matplotlib inline
np.set_printoptions(precision=3, suppress=True)

Verify TensorFlow Version and Keras Version¶

In [46]:
print("This notebook requires TensorFlow 2.0 or above")
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2
This notebook requires TensorFlow 2.0 or above
TensorFlow version:  2.10.0
In [47]:
print("Keras version: ", keras.__version__)
Keras version:  2.10.0

Mount Google Drive to Colab Environment¶

In [48]:
# from google.colab import drive
# drive.mount('/content/gdrive')

EDA Functions¶

In [49]:
def get_three_classes(x, y):
    def indices_of(class_id):
        indices, _ = np.where(y == float(class_id))
        return indices

    indices = np.concatenate([indices_of(0), indices_of(1), indices_of(2)], axis=0)
    
    x = x[indices]
    y = y[indices]
    
    count = x.shape[0]
    indices = np.random.choice(range(count), count, replace=False)
    
    x = x[indices]
    y = y[indices]
    
    y = tf.keras.utils.to_categorical(y)
    
    return x, y
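The `to_categorical` call at the end of `get_three_classes` one-hot encodes the integer labels. As a rough sketch of what that transformation does (this is a NumPy-only illustration, not the Keras implementation), each integer label becomes a row vector with a single 1 at its class index:

```python
import numpy as np

def one_hot(labels, num_classes):
    # Each integer label selects one row of the identity matrix,
    # mirroring what tf.keras.utils.to_categorical produces.
    return np.eye(num_classes)[np.asarray(labels).ravel()]

out = one_hot([0, 2, 1], 3)
print(out)
```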
In [50]:
def show_random_examples(x, y, p):
    indices = np.random.choice(range(x.shape[0]), 10, replace=False)
    
    x = x[indices]
    y = y[indices]
    p = p[indices]
    
    plt.figure(figsize=(10, 5))
    for i in range(10):
        plt.subplot(2, 5, i + 1)
        plt.imshow(x[i])
        plt.xticks([])
        plt.yticks([])
        col = 'green' if np.argmax(y[i]) == np.argmax(p[i]) else 'red'
        plt.xlabel(class_names_preview[np.argmax(p[i])], color=col)
    plt.show()

Research Assignment Reporting Functions¶

In [51]:
def plot_history(history):
  losses = history.history['loss']
  accs = history.history['accuracy']
  val_losses = history.history['val_loss']
  val_accs = history.history['val_accuracy']
  epochs = len(losses)

  plt.figure(figsize=(16, 4))
  for i, metrics in enumerate(zip([losses, accs], [val_losses, val_accs], ['Loss', 'Accuracy'])):
    plt.subplot(1, 2, i + 1)
    plt.plot(range(epochs), metrics[0], label='Training {}'.format(metrics[2]))
    plt.plot(range(epochs), metrics[1], label='Validation {}'.format(metrics[2]))
    plt.legend()
  plt.show()
In [52]:
def print_validation_report(y_test, predictions):
    print("Classification Report")
    print(classification_report(y_test, predictions))
    print('Accuracy Score: {}'.format(accuracy_score(y_test, predictions)))
    print('Root Mean Square Error: {}'.format(np.sqrt(MSE(y_test, predictions)))) 
In [53]:
def plot_confusion_matrix(y_true, y_pred):
    mtx = confusion_matrix(y_true, y_pred)
    fig, ax = plt.subplots(figsize=(8,8))
    sns.heatmap(mtx, annot=True, fmt='d', linewidths=.75, cbar=False,
                ax=ax, cmap='Blues', linecolor='white')
    plt.ylabel('true label')
    plt.xlabel('predicted label')

Loading cifar10 Dataset¶

The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.

The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.

In [54]:
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
  • Tuple of NumPy arrays: (x_train, y_train), (x_test, y_test).
  • x_train, x_test: uint8 arrays of color image data with shape (num_samples, 32, 32, 3).
  • y_train, y_test: uint8 arrays of category labels (integers in range 0-9) with shape (num_samples, 1).

EDA Training and Test Datasets¶

  • Imported 50000 examples for training and 10000 examples for test
  • Imported 50000 labels for training and 10000 labels for test
In [55]:
print('train_images:\t{}'.format(x_train.shape))
print('train_labels:\t{}'.format(y_train.shape))
print('test_images:\t\t{}'.format(x_test.shape))
print('test_labels:\t\t{}'.format(y_test.shape))
train_images:	(50000, 32, 32, 3)
train_labels:	(50000, 1)
test_images:		(10000, 32, 32, 3)
test_labels:		(10000, 1)

Review Labels¶

In [56]:
print("First ten labels training dataset:\n {}\n".format(y_train[0:10]))
print("These are numeric labels; they need to be converted to class names for display")
First ten labels training dataset:
 [[6]
 [9]
 [9]
 [4]
 [1]
 [1]
 [2]
 [7]
 [8]
 [3]]

These are numeric labels; they need to be converted to class names for display
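Converting a numeric label to its class name is a simple list lookup. A minimal sketch, assuming the standard CIFAR-10 label ordering:

```python
# CIFAR-10 class names indexed by the numeric label (standard ordering).
cifar10_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
                 'dog', 'frog', 'horse', 'ship', 'truck']

first_labels = [6, 9, 9, 4, 1]  # the first five labels printed above
print([cifar10_names[i] for i in first_labels])
```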

Plot Subset of Examples¶

In [57]:
(train_images, train_labels),(test_images, test_labels)= tf.keras.datasets.cifar10.load_data()
In [58]:
# Note: only one dataset can be previewed at a time; calling get_three_classes
# twice would overwrite the first result, so we preview the test set here.
x_preview, y_preview = get_three_classes(test_images, test_labels)
In [59]:
class_names_preview = ['aeroplane', 'car', 'bird']

show_random_examples(x_preview, y_preview, y_preview)

Preprocessing Data for Model Development¶

The labels are an array of integers, ranging from 0 to 9. These correspond to the class of object the image represents:

Label  Class
0      airplane
1      automobile
2      bird
3      cat
4      deer
5      dog
6      frog
7      horse
8      ship
9      truck
In [60]:
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

Create Validation Data Set¶

In [61]:
x_train_split, x_valid_split, y_train_split, y_valid_split = train_test_split(x_train
                                                                              ,y_train
                                                                              ,test_size=.1
                                                                              ,random_state=42
                                                                              ,shuffle=True)

Confirm Datasets {Train, Validation, Test}¶

In [62]:
print(x_train_split.shape, x_valid_split.shape, x_test.shape)
(45000, 32, 32, 3) (5000, 32, 32, 3) (10000, 32, 32, 3)

Rescale Examples {Train, Validation, Test}¶

The images are 32x32x3 NumPy arrays, with pixel values ranging from 0 to 255

  1. Each element in each example is a per-channel pixel intensity
  2. Pixel values range from 0 to 255
  3. 0 = minimum intensity (black when all three channels are 0)
  4. 255 = maximum intensity (white when all three channels are 255)
In [63]:
x_train_norm = x_train_split/255
x_valid_norm = x_valid_split/255
x_test_norm = x_test/255
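Dividing a uint8 array by 255 promotes it to float64 and rescales every pixel into [0, 1]. A quick sketch on a toy array confirms the effect:

```python
import numpy as np

pixels = np.array([[0, 128, 255]], dtype=np.uint8)  # toy uint8 pixel row
scaled = pixels / 255  # true division promotes uint8 to float64

print(scaled.dtype, scaled.min(), scaled.max())
```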

Create the Model¶

Build CNN Model¶

We use the Sequential class defined in Keras to create our model. The first nine layers (three blocks of Conv2D, MaxPool2D, and Dropout) handle feature learning; the final Flatten and Dense layers handle classification.

In [64]:
model = Sequential([
  Conv2D(filters=128, kernel_size=(3, 3), strides=(1, 1), activation=tf.nn.relu,input_shape=x_train_norm.shape[1:]),
  MaxPool2D((2, 2),strides=2),
  Dropout(0.3),
  Conv2D(filters=256, kernel_size=(3, 3), strides=(1, 1), activation=tf.nn.relu),
  MaxPool2D((2, 2),strides=2),
  Dropout(0.3),
  Conv2D(filters=512, kernel_size=(3, 3), strides=(1, 1), activation=tf.nn.relu),
  MaxPool2D((2, 2),strides=2),
  Dropout(0.3),
  Flatten(),
  Dense(units=10, activation=tf.nn.softmax)       
])
In [65]:
model.summary()
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_3 (Conv2D)           (None, 30, 30, 128)       3584      
                                                                 
 max_pooling2d_3 (MaxPooling  (None, 15, 15, 128)      0         
 2D)                                                             
                                                                 
 dropout_3 (Dropout)         (None, 15, 15, 128)       0         
                                                                 
 conv2d_4 (Conv2D)           (None, 13, 13, 256)       295168    
                                                                 
 max_pooling2d_4 (MaxPooling  (None, 6, 6, 256)        0         
 2D)                                                             
                                                                 
 dropout_4 (Dropout)         (None, 6, 6, 256)         0         
                                                                 
 conv2d_5 (Conv2D)           (None, 4, 4, 512)         1180160   
                                                                 
 max_pooling2d_5 (MaxPooling  (None, 2, 2, 512)        0         
 2D)                                                             
                                                                 
 dropout_5 (Dropout)         (None, 2, 2, 512)         0         
                                                                 
 flatten_1 (Flatten)         (None, 2048)              0         
                                                                 
 dense_1 (Dense)             (None, 10)                20490     
                                                                 
=================================================================
Total params: 1,499,402
Trainable params: 1,499,402
Non-trainable params: 0
_________________________________________________________________
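The output shapes and parameter counts in the summary above follow from standard formulas: a 'valid'-padded 3x3 convolution shrinks each spatial dimension by 2, each 2x2 max-pool halves it (floor division), and a Conv2D layer has (kernel_h * kernel_w * in_channels + 1) * filters parameters. A small arithmetic sketch reproducing the summary's numbers:

```python
def conv_out(size, kernel=3, stride=1):
    # 'valid' padding: floor((size - kernel) / stride) + 1
    return (size - kernel) // stride + 1

def conv_params(in_ch, out_ch, kernel=3):
    # kernel*kernel*in_channels weights per filter, plus one bias per filter
    return (kernel * kernel * in_ch + 1) * out_ch

size = 32
size = conv_out(size)   # Conv2D(128): 32 -> 30
size //= 2              # MaxPool2D:   30 -> 15
size = conv_out(size)   # Conv2D(256): 15 -> 13
size //= 2              # MaxPool2D:   13 -> 6
size = conv_out(size)   # Conv2D(512):  6 -> 4
size //= 2              # MaxPool2D:    4 -> 2

print(size, size * size * 512, conv_params(3, 128))
```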
In [66]:
keras.utils.plot_model(model, "CIFAR10.png", show_shapes=True) 
You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) for plot_model to work.

Compiling the model¶

In addition to setting up our model architecture, we need to specify the algorithm the model should use to optimize the weights and biases from the given data. We will use Adam, an adaptive variant of stochastic gradient descent.

We also need to define a loss function. Think of this function as the difference between the predicted outputs and the actual outputs given in the dataset. This loss needs to be minimized in order to achieve higher model accuracy; minimizing it during training is precisely what the optimization algorithm does. For our multi-class classification problem with integer labels, sparse categorical cross-entropy is commonly used.

Finally, we will track accuracy as a metric while the model trains.

tf.keras.losses.SparseCategoricalCrossentropy
https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy
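For a single example, sparse categorical cross-entropy is just the negative log of the predicted probability assigned to the true class. A NumPy sketch of that definition (an illustration, not the Keras implementation):

```python
import numpy as np

def sparse_cce(probs, true_class):
    # Negative log-likelihood of the true class under the predicted distribution.
    return -np.log(probs[true_class])

probs = np.array([0.1, 0.7, 0.2])  # hypothetical softmax output for one image
loss = sparse_cce(probs, 1)        # true class is index 1
print(round(float(loss), 4))
```

A confident correct prediction (probability near 1 on the true class) drives this loss toward 0.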
In [67]:
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])

Training the model¶

Module: tf.keras.callbacks
tf.keras.callbacks.EarlyStopping
https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping
tf.keras.callbacks.ModelCheckpoint
https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint
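EarlyStopping with `monitor='val_accuracy', patience=3` halts training once validation accuracy has failed to improve for three consecutive epochs. A simplified sketch of that logic (it ignores `min_delta` and `restore_best_weights`):

```python
def early_stop_epoch(val_metric, patience=3):
    # Returns the 0-indexed epoch at which training would stop.
    best, wait = float('-inf'), 0
    for epoch, value in enumerate(val_metric):
        if value > best:
            best, wait = value, 0   # new best: reset the patience counter
        else:
            wait += 1               # no improvement this epoch
            if wait >= patience:
                return epoch
    return len(val_metric) - 1      # ran to completion without stopping

# Toy validation-accuracy curve: peaks at epoch 2, then stalls.
print(early_stop_epoch([0.50, 0.60, 0.70, 0.65, 0.66, 0.64]))
```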
In [68]:
history = model.fit(x_train_norm
                    ,y_train_split
                    ,epochs=200
                    ,batch_size=64
                    ,validation_data=(x_valid_norm, y_valid_split)
                    ,callbacks=[
                     tf.keras.callbacks.ModelCheckpoint("CNN_model.h5",save_best_only=True,save_weights_only=False) 
                     ,tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=3),
                    ]                                                                                                           
                   )
Epoch 1/200
704/704 [==============================] - 167s 236ms/step - loss: 1.5553 - accuracy: 0.4327 - val_loss: 1.2150 - val_accuracy: 0.5766
Epoch 2/200
704/704 [==============================] - 218s 310ms/step - loss: 1.1624 - accuracy: 0.5928 - val_loss: 1.0584 - val_accuracy: 0.6292
Epoch 3/200
704/704 [==============================] - 227s 322ms/step - loss: 1.0064 - accuracy: 0.6488 - val_loss: 0.9096 - val_accuracy: 0.6834
Epoch 4/200
704/704 [==============================] - 432s 614ms/step - loss: 0.9092 - accuracy: 0.6849 - val_loss: 0.8376 - val_accuracy: 0.7074
Epoch 5/200
704/704 [==============================] - 583s 829ms/step - loss: 0.8299 - accuracy: 0.7110 - val_loss: 0.8463 - val_accuracy: 0.7098
Epoch 6/200
704/704 [==============================] - 419s 595ms/step - loss: 0.7784 - accuracy: 0.7275 - val_loss: 0.7523 - val_accuracy: 0.7348
Epoch 7/200
704/704 [==============================] - 237s 337ms/step - loss: 0.7298 - accuracy: 0.7466 - val_loss: 0.7203 - val_accuracy: 0.7472
Epoch 8/200
704/704 [==============================] - 212s 301ms/step - loss: 0.6851 - accuracy: 0.7607 - val_loss: 0.6880 - val_accuracy: 0.7646
Epoch 9/200
704/704 [==============================] - 228s 323ms/step - loss: 0.6516 - accuracy: 0.7717 - val_loss: 0.6832 - val_accuracy: 0.7672
Epoch 10/200
704/704 [==============================] - 227s 323ms/step - loss: 0.6264 - accuracy: 0.7813 - val_loss: 0.6669 - val_accuracy: 0.7648
Epoch 11/200
704/704 [==============================] - 227s 322ms/step - loss: 0.5970 - accuracy: 0.7892 - val_loss: 0.6558 - val_accuracy: 0.7760
Epoch 12/200
704/704 [==============================] - 254s 361ms/step - loss: 0.5806 - accuracy: 0.7949 - val_loss: 0.6484 - val_accuracy: 0.7764
Epoch 13/200
704/704 [==============================] - 381s 542ms/step - loss: 0.5593 - accuracy: 0.8018 - val_loss: 0.6774 - val_accuracy: 0.7636
Epoch 14/200
704/704 [==============================] - 284s 404ms/step - loss: 0.5397 - accuracy: 0.8108 - val_loss: 0.6539 - val_accuracy: 0.7780
Epoch 15/200
704/704 [==============================] - 315s 448ms/step - loss: 0.5253 - accuracy: 0.8153 - val_loss: 0.6789 - val_accuracy: 0.7738
Epoch 16/200
704/704 [==============================] - 336s 477ms/step - loss: 0.5099 - accuracy: 0.8203 - val_loss: 0.6637 - val_accuracy: 0.7694
Epoch 17/200
704/704 [==============================] - 340s 483ms/step - loss: 0.4970 - accuracy: 0.8247 - val_loss: 0.6596 - val_accuracy: 0.7790
Epoch 18/200
704/704 [==============================] - 339s 481ms/step - loss: 0.4787 - accuracy: 0.8327 - val_loss: 0.6311 - val_accuracy: 0.7904
Epoch 19/200
704/704 [==============================] - 340s 483ms/step - loss: 0.4694 - accuracy: 0.8340 - val_loss: 0.6548 - val_accuracy: 0.7796
Epoch 20/200
704/704 [==============================] - 283s 402ms/step - loss: 0.4588 - accuracy: 0.8390 - val_loss: 0.6519 - val_accuracy: 0.7852
Epoch 21/200
704/704 [==============================] - 226s 321ms/step - loss: 0.4500 - accuracy: 0.8413 - val_loss: 0.6420 - val_accuracy: 0.7840

Evaluate the model¶

To ensure the model has not simply memorized the training data, we should evaluate its performance on the test set. This is easy to do: we simply call the evaluate method on our model.

In [69]:
model = tf.keras.models.load_model("CNN_model.h5")
print(f"Test acc: {model.evaluate(x_test_norm, y_test)[1]:.3f}")
313/313 [==============================] - 29s 92ms/step - loss: 1.0382 - accuracy: 0.6430
Test acc: 0.643

Predictions¶

In [70]:
preds = model.predict(x_test_norm)
print('shape of preds: ', preds.shape)
313/313 [==============================] - 29s 93ms/step
shape of preds:  (10000, 10)

Plotting Performance Metrics¶

We use Matplotlib to create two side-by-side plots: training and validation loss, and training and validation accuracy, for each training epoch.

In [71]:
history_dict = history.history
history_dict.keys()
Out[71]:
dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
In [72]:
history_df=pd.DataFrame(history_dict)
history_df.tail().round(3)
Out[72]:
loss accuracy val_loss val_accuracy
16 0.497 0.825 0.660 0.779
17 0.479 0.833 0.631 0.790
18 0.469 0.834 0.655 0.780
19 0.459 0.839 0.652 0.785
20 0.450 0.841 0.642 0.784

Plot Training Metrics (Loss and Accuracy)¶

In [73]:
plot_history(history)

Confusion matrices¶

Using sklearn.metrics, we compute the confusion matrix, then visualize it and see what it tells us.

In [74]:
pred1= model.predict(x_test_norm)
pred1=np.argmax(pred1, axis=1)
313/313 [==============================] - 31s 98ms/step
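`model.predict` returns one row of class probabilities per image; `np.argmax` along axis 1 collapses each row to the index of the most probable class. A toy illustration:

```python
import numpy as np

probs = np.array([[0.1, 0.8, 0.1],   # toy prediction rows
                  [0.6, 0.3, 0.1]])
labels = np.argmax(probs, axis=1)    # most probable class per row
print(labels)
```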
In [75]:
print_validation_report(y_test, pred1)
Classification Report
              precision    recall  f1-score   support

           0       0.77      0.66      0.71      1000
           1       0.80      0.81      0.80      1000
           2       0.51      0.53      0.52      1000
           3       0.48      0.42      0.45      1000
           4       0.51      0.66      0.58      1000
           5       0.66      0.42      0.51      1000
           6       0.53      0.86      0.66      1000
           7       0.71      0.67      0.69      1000
           8       0.80      0.76      0.78      1000
           9       0.85      0.64      0.73      1000

    accuracy                           0.64     10000
   macro avg       0.66      0.64      0.64     10000
weighted avg       0.66      0.64      0.64     10000

Accuracy Score: 0.643
Root Mean Square Error: 2.363070037049262
In [76]:
plot_confusion_matrix(y_test,pred1)

Load HDF5 Model Format¶

tf.keras.models.load_model
https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model
In [77]:
model = tf.keras.models.load_model('CNN_model.h5')
In [78]:
preds = model.predict(x_test_norm)
313/313 [==============================] - 30s 95ms/step
In [79]:
preds.shape
Out[79]:
(10000, 10)

Predictions¶

In [80]:
cm = sns.light_palette((260, 75, 60), input="husl", as_cmap=True)
In [81]:
df = pd.DataFrame(preds[0:20], columns = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'])
df.style.format("{:.2%}").background_gradient(cmap=cm)
Out[81]:
  airplane automobile bird cat deer dog frog horse ship truck
0 0.67% 0.14% 10.86% 53.14% 1.59% 9.39% 17.82% 0.46% 5.33% 0.60%
1 7.08% 37.81% 0.04% 0.01% 0.00% 0.00% 0.00% 0.00% 54.27% 0.79%
2 11.98% 45.43% 1.56% 0.99% 0.70% 0.20% 0.35% 0.32% 29.08% 9.40%
3 44.73% 14.77% 6.12% 1.96% 0.93% 0.05% 0.36% 0.06% 29.83% 1.19%
4 0.01% 0.01% 2.94% 1.62% 23.69% 0.40% 71.28% 0.04% 0.01% 0.00%
5 0.02% 0.02% 1.54% 2.99% 1.58% 0.76% 92.82% 0.18% 0.05% 0.04%
6 4.35% 68.28% 5.02% 10.98% 0.14% 6.16% 0.53% 0.26% 0.14% 4.14%
7 0.59% 0.06% 25.85% 1.64% 14.65% 0.53% 56.36% 0.18% 0.08% 0.05%
8 0.06% 0.03% 8.57% 39.85% 13.53% 16.30% 18.92% 2.65% 0.04% 0.06%
9 3.51% 80.55% 0.42% 0.12% 0.29% 0.04% 0.12% 0.09% 0.80% 14.05%
10 18.13% 0.18% 10.94% 5.66% 36.58% 4.59% 1.30% 2.54% 19.42% 0.66%
11 0.10% 2.13% 0.02% 0.04% 0.02% 0.05% 0.01% 0.18% 0.54% 96.92%
12 0.06% 0.30% 4.68% 6.24% 26.45% 15.37% 38.77% 7.78% 0.21% 0.14%
13 0.02% 0.04% 0.02% 0.05% 0.53% 0.75% 0.01% 98.53% 0.00% 0.05%
14 0.91% 3.90% 0.67% 0.55% 0.08% 0.09% 0.09% 0.46% 0.53% 92.73%
15 3.19% 0.79% 17.07% 7.98% 7.11% 0.89% 58.24% 0.23% 4.21% 0.28%
16 0.04% 0.19% 8.92% 33.41% 0.75% 45.63% 3.88% 6.85% 0.12% 0.21%
17 0.43% 0.29% 14.14% 17.25% 26.46% 9.33% 7.88% 23.54% 0.19% 0.49%
18 9.54% 46.31% 0.08% 0.21% 0.26% 0.02% 0.19% 0.09% 21.56% 21.73%
19 0.01% 0.03% 1.37% 0.92% 22.61% 0.70% 72.88% 1.47% 0.00% 0.01%
In [82]:
(_,_), (test_images, test_labels) = tf.keras.datasets.cifar10.load_data()

img = test_images[2000]
img_tensor = image.img_to_array(img)
img_tensor = np.expand_dims(img_tensor, axis=0)

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

plt.imshow(img, cmap='viridis')
plt.axis('off')
plt.show()
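The `np.expand_dims` call above exists because `predict` expects a batch axis: a single image of shape (32, 32, 3) must become a batch of one with shape (1, 32, 32, 3). A minimal sketch:

```python
import numpy as np

img = np.zeros((32, 32, 3), dtype=np.float32)  # a single CIFAR-10-sized image
batch = np.expand_dims(img, axis=0)            # add a batch axis at position 0

print(img.shape, '->', batch.shape)
```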
In [83]:
# Extracts the outputs of the top 8 layers:
layer_outputs = [layer.output for layer in model.layers[:8]]
# Creates a model that will return these outputs, given the model input:
activation_model = models.Model(inputs=model.input, outputs=layer_outputs)
In [84]:
activations = activation_model.predict(img_tensor)
len(activations)
1/1 [==============================] - 0s 201ms/step
Out[84]:
8
In [85]:
layer_names = []
for layer in model.layers:
    layer_names.append(layer.name)
    
layer_names
Out[85]:
['conv2d',
 'max_pooling2d',
 'dropout',
 'conv2d_1',
 'max_pooling2d_1',
 'dropout_1',
 'conv2d_2',
 'max_pooling2d_2',
 'dropout_2',
 'flatten',
 'dense']
In [86]:
# These are the names of the layers, so we can include them in our plots
layer_names = []
for layer in model.layers[:3]:
    layer_names.append(layer.name)

images_per_row = 16

# Now let's display our feature maps
for layer_name, layer_activation in zip(layer_names, activations):
    # This is the number of features in the feature map
    n_features = layer_activation.shape[-1]

    # The feature map has shape (1, size, size, n_features)
    size = layer_activation.shape[1]

    # We will tile the activation channels in this matrix
    n_cols = n_features // images_per_row
    display_grid = np.zeros((size * n_cols, images_per_row * size))

    # We'll tile each filter into this big horizontal grid
    for col in range(n_cols):
        for row in range(images_per_row):
            channel_image = layer_activation[0,
                                             :, :,
                                             col * images_per_row + row]
            # Post-process the feature to make it visually palatable
            channel_image -= channel_image.mean()
            channel_image /= channel_image.std()
            channel_image *= 64
            channel_image += 128
            channel_image = np.clip(channel_image, 0, 255).astype('uint8')
            display_grid[col * size : (col + 1) * size,
                         row * size : (row + 1) * size] = channel_image

    # Display the grid
    scale = 1. / size
    plt.figure(figsize=(scale * display_grid.shape[1],
                        scale * display_grid.shape[0]))
    plt.title(layer_name)
    plt.grid(False)
    plt.imshow(display_grid, aspect='auto', cmap='viridis')
    
plt.show();
/var/folders/zz/wfk650fx7lx3mmyd6_y3yxgr0000gr/T/ipykernel_32477/1872159762.py:28: RuntimeWarning: invalid value encountered in true_divide
  channel_image /= channel_image.std()
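The RuntimeWarning above comes from feature-map channels whose activations are constant, so `channel_image.std()` is 0 and the division produces NaNs. A hedged sketch of a guarded version of the same post-processing, adding a small epsilon to the denominator:

```python
import numpy as np

def normalize_channel(channel, eps=1e-8):
    # Center and scale; eps guards against all-constant (std = 0) channels.
    channel = channel - channel.mean()
    channel = channel / (channel.std() + eps)
    channel = channel * 64 + 128
    return np.clip(channel, 0, 255).astype('uint8')

dead = normalize_channel(np.zeros((4, 4)))  # constant channel: no warning, no NaN
print(dead.min(), dead.max())
```

Dead channels render as uniform mid-gray (value 128) instead of raising a warning.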