
Demystifying Deep Learning: Part 8

Convolutional Neural Networks

September 04, 2018

10 min read


Introduction

In this blog post, we’ll look at our first specialised neural network architecture - the convolutional neural network. There is a Jupyter notebook accompanying the blog post containing the code for classifying handwritten digits using a CNN written from scratch.

Motivation for a specialised architecture

Recall from our feedforward neural network post that the weights for layer $l$ were stored in an $n_l \times n_{l-1}$ matrix. Therefore, the number of parameters in the first layer of the network is determined by $n_0$, the number of features in layer 0 (the input layer).

Images are stored as a large multidimensional array of pixels, whose dimensions are (width x height) for greyscale images and (width x height x 3) for RGB images. So consider a small 256x256 RGB image - this gives 256x256x3 = 196608 pixel values! Each one is an input feature, which means that even if we only have 128 units in the first layer, the number of parameters in our weight matrix is 128*196608 ≈ 25 million weights!

Given that most images are much higher resolution than 256x256, the number of input features increases dramatically - for a 1024x1024 RGB image the number of pixels increases by a factor of 16.

This poses a scalability issue - our model has far too many weights! This requires a huge amount of memory and computational power to train. How then do we train a neural network on an image?

Intuition

The key insight is as follows: If we are trying to classify an object in an image, the position of that object in the image does not matter.

So the features we should be learning ought to be independent of position.

This simplifying assumption is at the heart of the convolutional neural network (CNN).

Convolution layer

Interpretation 1: Weight sharing between equivalent neurons

Equivalent neuron representation

The main difference between the convolution layer and the standard layer we used in a feedforward network is weight sharing.

Each neuron in the layer looks at a square patch of the image (its receptive field), looking at all of the input channels (e.g. the RGB channels for the original input image). The neurons take their input from the pixels in that patch and share weights regardless of the position of their patch in the image, so they are equivalent.

Interpretation 2: A sliding filter over the image

Sliding filter representation

Another interpretation of the convolution layer is, rather than a bank of equivalent neurons, a single neuron, called the filter (or kernel), that scans over the image patch by patch.

Much like the standard neuron we have seen in a feedforward network, the filter weights its inputs (the pixels in that patch) and then applies an activation function to it, to output a single value for that patch.

To compute the weighted input, the filter computes an element-wise product between the pixels in the patch and the corresponding value in its weight matrix - then sums up these to produce an output.

Intuitively, the filter can be seen as a feature detector which produces a high activation when it detects that feature.

So for a single patch starting at position $(i,j)$ in the image, the equation is (where $*$ is an element-wise product):

Z = \sum_a \sum_b \sum_c X_{i+a,j+b,c} * W_{a,b,c} + b

We can take all the activations from each patch across the image to form a 2D feature map / activation map.
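To make the sliding-filter picture concrete, here is a minimal (unvectorised) sketch of sliding a single filter over one multi-channel image with a stride of 1 and no padding - the function name and shapes are illustrative, not the notebook's implementation:

import numpy as np

def single_filter_conv(x, w, b=0.0):
    """Slide one F x F x C filter over an image of shape (H, W, C): stride 1, no padding."""
    H, W, C = x.shape
    F = w.shape[0]  # assume a square filter
    fmap = np.zeros((H - F + 1, W - F + 1))
    for i in range(H - F + 1):
        for j in range(W - F + 1):
            patch = x[i:i+F, j:j+F, :]  # the receptive field at (i, j), across all input channels
            fmap[i, j] = np.sum(patch * w) + b  # element-wise product then sum, as in the equation above
    return fmap

Applying an activation function (e.g. ReLU) to fmap would then give the activation map for this filter.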

The name convolution layer comes from the mathematical “sliding” convolution operation that we perform to get our output. As an aside, technically the convolution operation flips the weight matrix horizontally and vertically before applying the sliding operation, and is applied to a 2D slice of the image, not to all of the input channels. So the equation for a 2D mathematical convolution is (note the indices $-a, -b$ rather than $a, b$, indicating the flipped weights):

\sum_a \sum_b X_{i-a,j-b} * W_{a,b}

Again, this is a case where the deep learning terminology differs slightly from the maths; however, for our purposes we will refer to the first (deep learning) equation as a convolution.

Hyperparameters

Now that we have discussed the convolution operation, let’s talk about the nuances and “tuning knobs” at our disposal in a convolution layer.

Number of filters:

First, rather than having just one filter, we can have multiple filters, each with their own weights and bias, and then take all the activation maps and stack them along the depth dimension.

We thus store the input and output in a 4D array with dimensions $(m, w, h, n_c)$, where $n_c$ is the number of channels / activation maps.

So the equation for position $(i,j)$ in the $k^{th}$ activation map in the convolution layer is:

Z^{(m_i)}_{i,j,k} = \sum_a \sum_b \sum_c X^{(m_i)}_{i+a,j+b,c} * W_{a,b,c,k} + b_k

Note we use $m_i$ here to denote the $m_i^{th}$ training example (not using $i$ due to a clash of notation). We also assume a stride of 1 (see the Stride section below).

Filter Size:

We can also alter the size of the receptive field of the neuron. The convention is that the receptive field is square. Typical filter sizes we use are:

  • 3x3
  • 5x5
  • 7x7 (less common - only really used in the early days of CNNs)
  • 1x1

1x1 convolutions

The last filter size (1x1) may seem a little surprising. It appears counter-intuitive because there is no spatial information, since you are only looking at one pixel in the image. However, remember we weight and sum across input channels - so we can weight the relative significance of each activation map at that point.

The 1x1 convolution is also a cheap way to add depth and non-linearities to the network, since in essence you just have a vector of feature activations and you are doing a matrix multiplication across input channels, rather like a generalised version of logistic regression (although using ReLU rather than sigmoid as the activation function).

1x1 convolutions also have another purpose - depth reduction. Since we stack our feature maps along the depth axis, if we have $k$ filters in the previous convolution layer, the depth of the input to this layer will be $k$. If $k$ is large, say 256 or 512, then the input becomes rather large. By using fewer filters in this 1x1 convolution layer, the output will have a much smaller depth as a result.
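As a sketch of the depth-reduction idea (with made-up shapes): at each pixel a 1x1 convolution is just a matrix multiplication across the channels, so stacking 64 such filters shrinks a depth-256 input to depth 64 while leaving the width and height untouched.

import numpy as np

x = np.random.randn(2, 32, 32, 256)   # activations with depth 256, stored as (m, w, h, n_c)
w = np.random.randn(256, 64) * 0.01   # one weight per (input channel, filter) pair: 64 1x1 filters
b = np.zeros(64)

z = x @ w + b            # applied independently at every pixel position
a = np.maximum(z, 0)     # ReLU
print(a.shape)           # (2, 32, 32, 64) - same width/height, much smaller depth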

Stride

The stride of the filter is the number of pixels the filter slides along between scanning one patch and scanning the next. In nearly all cases we set a stride of 1 pixel, i.e. scan the adjacent patch. The advantage of a larger stride is that it reduces the width and height of the output.

However, in practice a stride of 2 pixels or more is seen as too aggressive: the reduction in dimensions comes at the expense of losing a lot of information.

For our purposes, when we write equations for the convolution layer, we will assume a stride of 1 pixel.

Padding

Notice how when we scan the image, the pixels in the middle are scanned over multiple times, whereas the pixels at the edge are not scanned over as many times, so we could lose information around the edges of the input.

To ensure we scan these edges an equal number of times as the pixels in the image, we can zero-pad the border of the image - i.e. add a border of zeros around the image. We use zeros to ensure we don’t add any extra information (since the elementwise product over the padded part will be zero).

Let’s calculate the expected output width $N$ of the activation map, given an input width $W$ and a filter size of $F$. (The calculation for the height is identical, assuming a square image.)

The filter starts off at the left-hand side of the image, computing one activation. There are $W-1$ pixels left in which to place the top-left weight of the filter. However, since the furthest the filter can go is when its right edge reaches the right edge of the image, we cannot place the top-left weight in the last $F-1$ pixels. So there are another potential $(W-1)-(F-1) = W-F$ patches to scan.

If we add a padding of $P$ pixels to the border of the image, then the effective width is now $W+2P$ since we pad each side. So there are $W-F+2P$ patches to scan, excluding the first one it scans.

If we now consider the effect of a stride of $S$ pixels, we skip pixels and so only scan 1 out of every $S$ potential patches. So there are now $\lfloor{\frac{W-F+2P}{S}} \rfloor$ patches the filter can scan over, excluding the first one it scans.

So the output width is: $N = 1 + \lfloor{\frac{W-F+2P}{S}} \rfloor$.

This leads us to the values of the hyperparameter $P$. One option is to not have padding at all - we call this setting a valid convolution.

Given the more complicated expression for the output width, it is hard to keep track of the dimensionality of the input. So, rather than letting the convolution layer reduce the dimensionality, we zero-pad to maintain it (for a stride of 1, set $P=\frac{F-1}{2}$) - we call this a same convolution.
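As a quick check of the formula, here is a small helper (purely illustrative) that computes the output width for valid and same convolutions:

def conv_output_size(W, F, S=1, P=0):
    """Output width N = 1 + floor((W - F + 2P) / S)."""
    return 1 + (W - F + 2 * P) // S

print(conv_output_size(32, 3))                   # valid convolution: 30
print(conv_output_size(32, 3, P=(3 - 1) // 2))   # same convolution: 32
print(conv_output_size(32, 3, S=2))              # stride 2: 15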

Code:

ndimage.convolve() takes two n-dimensional arrays as input and performs an n-dimensional convolution on them.

Since we want to perform a 2d convolution, we pass in 2d slices of the input, and sum across the depth of the input. Since in a mathematical convolution, the filter is flipped, we flip the filter beforehand to cancel out the reflection done by ndimage.convolve.

One subtlety is that ndimage.convolve centres the kernel on each pixel and pads the border, producing an output the same size as its input. We don’t want this, since we want to control the padding ourselves, so we discard the edges of the output with the slice [f//2:-(f//2), f//2:-(f//2)], where f is the filter size.

import numpy as np
from scipy import ndimage

def conv_forward(x, w, b, padding="same"):
    f = w.shape[0]  # size of filter
    if padding == "same":
        pad = (f-1)//2
    else:  # padding is "valid" - i.e. no zero padding
        pad = 0
    n = (x.shape[1] - f + 2*pad) + 1  # output width/height
    y = np.zeros((x.shape[0], n, n, w.shape[3]))  # output array
    # pad input
    x_padded = np.pad(x, ((0,0),(pad,pad),(pad,pad),(0,0)), 'constant', constant_values=0)
    # flip filter to cancel out the reflection performed by ndimage.convolve
    w = np.flip(w, 0)
    w = np.flip(w, 1)
    for m_i in range(x.shape[0]):  # each of the training examples
        for k in range(w.shape[3]):  # each of the filters
            for d in range(x.shape[3]):  # each channel (2D slice) of the input
                y[m_i,:,:,k] += ndimage.convolve(x_padded[m_i,:,:,d], w[:,:,d,k])[f//2:-(f//2), f//2:-(f//2)]  # sum across depth
    y = y + b  # add bias (this broadcasts across the image)
    return y
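As a quick sanity check (with illustrative shapes, not those used in the notebook), a "same" convolution should preserve the width and height, while the depth of the output becomes the number of filters:

x = np.random.randn(4, 28, 28, 3)        # 4 images, 28x28 pixels, 3 channels
w = np.random.randn(3, 3, 3, 8) * 0.01   # 8 filters of size 3x3, spanning the 3 input channels
b = np.zeros(8)
y = conv_forward(x, w, b, padding="same")
print(y.shape)                           # (4, 28, 28, 8)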

ReLU layer

This layer simply takes the convolution layer output and applies the ReLU activation function to it. Having the ReLU operation as a separate layer makes our backpropagation calculations easier, since we can split it into chunks.

We use ReLU because the gradient is either 0 or 1 - if 1 it means the gradient doesn’t diminish as we propagate it through the ReLU. If we were to use sigmoid or tanh, the gradient diminishes (since σ(x)=σ(x)(1σ(x))\sigma'(x) = \sigma(x)(1-\sigma(x)) , the max value of σ(x)\sigma'(x) is 0.25). This is the vanishing gradient problem: the gradient gets smaller as we go back through the network and thus the earlier layers’ weights aren’t updated significantly. We’ll look at this problem more in the context of recurrent neural networks later in the series.
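A quick numerical illustration of this point (a toy check, not from the notebook): the largest gradient the sigmoid can pass back is 0.25, whereas ReLU passes the gradient through unchanged wherever its input is positive.

import numpy as np

z = np.linspace(-10, 10, 1001)
sigmoid = 1 / (1 + np.exp(-z))
print(np.max(sigmoid * (1 - sigmoid)))   # ~0.25 - the sigmoid gradient shrinks the signal at every layer
print(np.max((z > 0).astype(float)))     # 1.0  - the ReLU gradient is 1 for all positive inputs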

In a typical CNN we tend to refer to a convolution layer with the underlying assumption that a ReLU layer follows, rather than explicitly refer to CONV-RELU layers.

Code:

def relu(x, deriv=False):
    if deriv:  # this is for the backward pass
        return (x > 0)
    return np.multiply(x, x > 0)  # so this sets x <= 0 to 0

Pooling layer

Max Pool

The pooling layer is another new layer that we use specifically to reduce the width/height dimensions - and thus reduce the number of parameters in subsequent layers.

Like the convolution layer, we have a filter size, a stride and potentially padding, but unlike the convolution layer there are no learnable parameters - instead we apply a fixed function to each patch. NB: unlike in the convolution layer, we consider each 2D activation map separately, so we preserve the depth dimension.

The pooling layer intuitively stems from the idea of translational invariance - we don’t care about the precise pixel position of a feature in the patch - it doesn’t matter where in the patch it is so long as it is present somewhere.

With the pooling layer, we almost always set $F=2$, $S=2$ with no padding - this has the effect of halving the input width and height dimensions.

There are two functions we could apply to the patch in pooling:

  • Max-pooling - we take the maximum pixel value for that patch.
  • Average-pooling - we average the pixel values in that patch.

In practice we tend to use max-pooling more often than average-pooling as empirically it works better on most computer vision tasks.

Code:

To compute the max/mean of each 2x2 patch we use a neat trick - we reshape the array then take the max/mean along the new axes - this is more efficient than scanning through the image with a for loop. We keep track of the max value in a mask, to route gradients through in the backward pass.

def pool_forward(x, mode="max"):
    # split the image into 2x2 patches: shape (m, h//2, 2, w//2, 2, n_c)
    x_patches = x.reshape(x.shape[0], x.shape[1]//2, 2, x.shape[2]//2, 2, x.shape[3])
    if mode == "max":
        out = x_patches.max(axis=2).max(axis=3)
        # mask marks the max of each patch, to route gradients through in the backward pass
        mask = np.isclose(x, np.repeat(np.repeat(out, 2, axis=1), 2, axis=2)).astype(int)
    elif mode == "average":
        out = x_patches.mean(axis=2).mean(axis=3)  # average over both within-patch axes
        mask = np.ones_like(x)*0.25  # each pixel contributes equally to a 2x2 average
    return out, mask
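As a small usage example (illustrative values): a 4x4 single-channel input is halved to 2x2, with each output being the maximum of its 2x2 patch.

x = np.arange(16, dtype=float).reshape(1, 4, 4, 1)   # one 4x4 single-channel "image" with values 0..15
out, mask = pool_forward(x, mode="max")
print(out.shape)         # (1, 2, 2, 1) - width and height halved, depth preserved
print(out[0, :, :, 0])   # [[ 5.  7.] [13. 15.]] - the max of each 2x2 patch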

Fully-Connected Layer

This is just the standard feedforward neural network layer we have seen. The layer is called fully-connected because each neuron is connected to all the input features. Contrast this with a convolution layer, where (using the equivalent neuron representation) each neuron is only connected to the input pixels in the patch it is looking at.

Again, we typically use ReLU for the fully-connected layers’ activation functions.

Code:

def fc_forward(x, w, b):
    return relu(w.dot(x) + b)

Softmax Layer

This is a fully-connected layer with the softmax activation function, and is the output layer of a neural network for multiclass classification (where we predict which one of multiple classes an input belongs to, not just whether it is of a given class or not, as in binary classification).

The softmax function is a generalisation of the sigmoid function, in that it too assigns a probability to each of the neurons (one neuron for each class).

Given the weighted input of neuron $i$, $z_i$, the softmax function is as follows:

softmax(z_i) = \frac{e^{z_i}}{\sum_j e^{z_j}}

i.e. we exponentiate the weighted input (think of this as a non-linear scaling), then divide by the sum of the exponentiated outputs of all the neurons in the layer to normalise it. This ensures our probabilities sum to 1.

The sigmoid function can be seen as the specific case when we have two units in the output layer - one of which is used for the prediction and the other is a “control” unit with $z=0$ (to normalise the output). So the output for the prediction neuron is:

softmax(z) = \frac{e^{z}}{e^{z}+e^{0}} = \frac{e^{z}}{e^{z}+1} = \frac{1}{1+e^{-z}} = \sigma(z)
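A quick numerical check of this equivalence (toy values):

import numpy as np

z = 1.7
pair = np.array([z, 0.0])                        # the prediction unit and the "control" unit with z = 0
softmax_pair = np.exp(pair) / np.exp(pair).sum()
sigmoid_z = 1 / (1 + np.exp(-z))
print(softmax_pair[0], sigmoid_z)                # both ~0.8455 - softmax over (z, 0) recovers sigmoid(z)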

Code:

def softmax_forward(x, w, b):
    z = w.dot(x) + b
    z -= np.max(z, axis=0, keepdims=True)  # subtract the max to prevent overflow
    a = np.exp(z)
    a = a / np.sum(a, axis=0, keepdims=True)
    return a + 1e-8  # add 1e-8 to ensure no 0 values - since log 0 is undefined

A typical CNN architecture

A typical CNN will consist of a few CONV-RELU layers, followed by a POOL layer, and then repeat this a few times, finally having one or two fully-connected layers, followed by the output (softmax) layer. See the above image for an example.

As we go through the network the number of filters typically increases, and like a neural network the features learnt by the filters are hierarchical. The first layer’s filters learn simple features like edges, and subsequent layers build upon this to detect shapes and more complex and abstract objects. We will visualise these in the next blog post.

Since the pooling reduces the width/height and the number of filters increases, we start with a wide, shallow image and end up with a narrow but deep volume of features learnt by the convolution layers. We can then flatten this into a 1D feature vector and pass it to the fully-connected layer.

So rather than passing raw pixels to the network as input, we have in essence constructed a dense encoding of the input, which is intuitively why CNNs vastly outperform feedforward neural networks. A convolution layer only has $F*F*n_C + 1$ parameters per filter (the +1 being the bias), rather than the $W*H*n_C$ weights needed by each neuron of a fully-connected layer operating on the raw image.
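To put some (illustrative) numbers on that comparison, for a 3x3 filter over a 3-channel 256x256 image:

F, n_C, W, H = 3, 3, 256, 256
conv_params_per_filter = F * F * n_C + 1   # 28 parameters, shared across every position in the image
fc_params_per_neuron = W * H * n_C         # 196608 weights for a single fully-connected neuron on raw pixels
print(conv_params_per_filter, fc_params_per_neuron)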

Conclusion

In this post we’ve looked at the motivation behind a CNN and the different specialist layers found in the CNN.

We can code up the CNN here, and train it on a motivating example - the classification of handwritten digits (MNIST dataset). This dataset is used as a benchmark for computer vision models.

In the next post we will look at the backpropagation algorithm for a convolutional neural network, and also gain some intuition about the internals of a CNN by visualising the activation maps.


If you liked this post, please consider sharing it with your network. If you have any questions, tweet away and I’ll answer :) I also tweet when new posts drop!

PS: I also share helpful tips and links as I'm learning - so you get them well before they make their way into a post!
