Python convolutional neural network exercise

USE Jupyter Notebook

NB: Please only do Tasks 1-2

We will combine what we’ve learned about convolution, max-pooling and feed-forward layers, to build a ConvNet classifier for images.
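As a quick reminder of how convolution and max-pooling transform spatial sizes, here is a small sketch (plain Python, not part of the exercise code) of the standard output-size formula, out = floor((n + 2p − k) / s) + 1:

```python
def out_size(n, k, s=1, p=0):
    """Spatial output size of a conv/pool layer.
    n = input size, k = kernel/pool size, s = stride, p = padding."""
    return (n + 2 * p - k) // s + 1

# A 3x3 convolution with 'same' padding (p=1, s=1) preserves a 32x32 input:
print(out_size(32, k=3, s=1, p=1))   # 32

# A 3x3 max-pool with stride 3 shrinks it to roughly a third:
print(out_size(32, k=3, s=3, p=0))   # 10
```

This is the arithmetic behind the shapes you will see when the model below is built.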

Given Code:

in[]from __future__ import absolute_import, division, print_function

# Prerequisites
!pip install pydot_ng
!pip install graphviz
!apt install graphviz > /dev/null

# import statements
import tensorflow as tf
import tensorflow.contrib.eager as tfe
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
%matplotlib inline

# Enable the interactive TensorFlow interface, which is easier to understand as a beginner.
try:
    tf.enable_eager_execution()
    print('Running in Eager mode.')
except ValueError:
    print('Already running in Eager mode.')

in[]cifar = tf.keras.datasets.cifar10
(train_images, train_labels), (test_images, test_labels) = cifar.load_data()
cifar_labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

in[]# Take the last 10000 images from the training set to form a validation set
train_labels = train_labels.squeeze()
validation_images = train_images[-10000:, :, :]
validation_labels = train_labels[-10000:]
train_images = train_images[:-10000, :, :]
train_labels = train_labels[:-10000]
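A quick way to sanity-check a slice-based split like this, illustrated on a small NumPy stand-in rather than the CIFAR arrays:

```python
import numpy as np

data = np.arange(50).reshape(50, 1)   # stand-in for 50 'images'
holdout = 10

val = data[-holdout:]       # last `holdout` rows
train = data[:-holdout]     # everything before them

# The two pieces are disjoint and together cover the original array
assert len(train) + len(val) == len(data)
print(train[-1, 0], val[0, 0])   # 39 40 — the split point is exactly where expected
```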

in[]print('train_images.shape = {}, data-type = {}'.format(train_images.shape, train_images.dtype))
print('train_labels.shape = {}, data-type = {}'.format(train_labels.shape, train_labels.dtype))

print('validation_images.shape = {}, data-type = {}'.format(validation_images.shape, validation_images.dtype))
print('validation_labels.shape = {}, data-type = {}'.format(validation_labels.shape, validation_labels.dtype))

# Plot a few training images with their labels
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(train_images[i])
    plt.xlabel(cifar_labels[train_labels[i]])

in[]# Define the convolutional part of the model architecture using Keras layers.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=48, kernel_size=(3, 3), activation=tf.nn.relu, input_shape=(32, 32, 3), padding='same'),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3)),
    tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), activation=tf.nn.relu, padding='same'),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3)),
    tf.keras.layers.Conv2D(filters=192, kernel_size=(3, 3), activation=tf.nn.relu, padding='same'),
    tf.keras.layers.Conv2D(filters=192, kernel_size=(3, 3), activation=tf.nn.relu, padding='same'),
    tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), activation=tf.nn.relu, padding='same'),
    tf.keras.layers.MaxPooling2D(pool_size=(3, 3)),
])

in[]model.add(tf.keras.layers.Flatten()) # Flatten "squeezes" a 3-D volume down into a single vector.
model.add(tf.keras.layers.Dense(1024, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(1024, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
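To see why Flatten receives such a small volume, the spatial sizes can be traced by hand (a plain-Python sketch; the numbers follow the layers above, with 'same' convolutions preserving size and each 3x3 max-pool using its default stride of 3):

```python
def pool_out(n, k=3, s=3):
    # 'valid' pooling: floor((n - k) / s) + 1
    return (n - k) // s + 1

n = 32            # input is 32x32x3
n = pool_out(n)   # conv(48, same) keeps 32x32, pool -> 10x10x48
n = pool_out(n)   # conv(128, same) keeps 10x10, pool -> 3x3x128
n = pool_out(n)   # three more 'same' convs keep 3x3, pool -> 1x1x128
print(n * n * 128)  # size of the flattened vector: 128
```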

in[]tf.keras.utils.plot_model(model, to_file="small_lenet.png", show_shapes=True, show_layer_names=True)

in[]batch_size = 128
num_epochs = 10  # The number of epochs (full passes through the data) to train for

# Compiling the model adds a loss function, optimiser and metrics to track during training
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# The fit function allows you to fit the compiled model to some training data
model.fit(x=train_images.astype(np.float32),
          y=train_labels.astype(np.float32),
          batch_size=batch_size,
          epochs=num_epochs,
          validation_data=(validation_images, validation_labels.astype(np.float32)))

print('Training complete')
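Since the labels are integer class IDs, the natural loss here is sparse categorical cross-entropy. What that quantity actually measures can be sketched in NumPy (an illustration, not the Keras implementation):

```python
import numpy as np

def sparse_categorical_crossentropy(probs, labels):
    """Mean negative log-probability assigned to the true class.
    probs: (batch, num_classes) softmax outputs; labels: (batch,) int class IDs."""
    batch = np.arange(len(labels))
    return -np.mean(np.log(probs[batch, labels]))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
print(sparse_categorical_crossentropy(probs, labels))  # ~0.29: confident correct predictions give low loss
```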

in[]metric_values = model.evaluate(x=test_images, y=test_labels)

print('Final TEST performance')
for metric_value, metric_name in zip(metric_values, model.metrics_names):
    print('{}: {}'.format(metric_name, metric_value))

in[]img_indices = np.random.randint(0, len(test_images), size=[25])
sample_test_images = test_images[img_indices]
sample_test_labels = [cifar_labels[i] for i in test_labels[img_indices].squeeze()]

predictions = model.predict(sample_test_images)
max_prediction = np.argmax(predictions, axis=1)
prediction_probs = np.max(predictions, axis=1)

plt.figure(figsize=(10, 10))
for i, (img, prediction, prob, true_label) in enumerate(
        zip(sample_test_images, max_prediction, prediction_probs, sample_test_labels)):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.imshow(img)
    plt.xlabel('{} ({:0.3f})'.format(cifar_labels[prediction], prob))
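What the predict / argmax / max combination above is doing, shown on a toy prediction array (NumPy only):

```python
import numpy as np

# Two fake softmax outputs over 3 classes
predictions = np.array([[0.1, 0.7, 0.2],
                        [0.6, 0.3, 0.1]])

classes = np.argmax(predictions, axis=1)   # index of the most probable class per row
confidences = np.max(predictions, axis=1)  # the probability of that class

print(classes)       # [1 0]
print(confidences)   # [0.7 0.6]
```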


TensorFlow documentation: search for 'tf.keras.layers.BatchNormalization'.

Research paper: search on the web: ''
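For a rough idea of what BatchNormalization computes, here is a NumPy sketch of the training-time transform only (the real Keras layer also learns gamma/beta and tracks moving averages for inference):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalise each feature over the batch, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 10.0],
              [3.0, 30.0],
              [5.0, 50.0]])
y = batch_norm(x)
print(y.mean(axis=0))  # ~[0, 0]: each feature is now zero-mean
print(y.std(axis=0))   # ~[1, 1]: and (nearly) unit variance
```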

Your Tasks

1. Experiment with the network architecture: try changing the numbers, types and sizes of the layers, and the sizes of the filters.
