
Demystifying Deep Learning: Part 3

Learning Through Gradient Descent

August 03, 2018

5 min read


Having covered linear and logistic regression, we will now look at how these algorithms “learn”.

Loss Functions:

For a machine learning algorithm to learn, it needs some way of evaluating its performance. The loss function (aka cost function) quantifies the errors in its predictions - lower error is better. Thus, a way for the algorithm to improve is to minimise this loss function, which, if chosen correctly, will also result in the model learning: the algorithm tweaks its weights and biases to get the value of the loss function as low as possible. When choosing a loss function, it helps to think about the problem we are trying to solve:

Intuition: For linear regression, we want to plot a straight line of best fit - i.e. one that is as close to as many of the points as possible. So one good idea is to minimise the mean distance between the straight line and the points. In $n$ dimensions, we consider the distance between the point and the hyperplane - i.e. we take the difference between the prediction (the projection of the value onto the plane) and the actual value. We then square it (to ensure the value is always positive), then do this for all predictions and take the mean.

Maths: We denote the loss function as $J(W,b)$. Note the factor of $\frac{1}{2}$ is for convenience.

$$J(W,b) = \frac{1}{2m} \sum_{i=1}^{m} (\hat{y}^{(i)} - y^{(i)})^2$$

Code: As before we are continuing with the same motivating examples of housing prices for linear regression and breast cancer for logistic regression. You can access the full code in the accompanying Jupyter notebook.

import numpy as np

def MSE_loss(Y, Y_pred):
    # Y and Y_pred are 1 x m row vectors of targets and predictions
    m = Y.shape[1]
    return (1.0/(2*m))*np.sum(np.square(Y_pred-Y))
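As a quick sanity check, here is a hypothetical usage with toy 1 x m arrays (not taken from the housing dataset), reusing MSE_loss and the numpy import from above:

Y = np.array([[200.0, 150.0, 310.0]])       # toy target values
Y_pred = np.array([[210.0, 140.0, 300.0]])  # toy predictions

print(MSE_loss(Y, Y_pred))  # each squared error is 100, so the loss is 300/(2*3) = 50.0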

Next, moving on to logistic regression:

Intuition: The aim with logistic regression is to maximise the probability of getting the correct label across all the examples.

Maths: Here, it is easier to present the intuition interspersed with the maths:

For a single example, we want:

$$\underset{W,b}{\operatorname{argmax}} \; P(y|x;W,b)$$

This translates to finding $W, b$ that maximise the probability of getting the correct label $y$ given the value of $x$, parameterised by $W, b$. Since each training example is independent of each other, to get the total probability we multiply the probabilities together:

$$\underset{W,b}{\operatorname{argmax}} \prod_{i=1}^{m} P(y^{(i)}|x^{(i)};W,b)$$

This could lead to underflow when multiplying many tiny probabilities together, so instead we take the log and maximise that. Since $\log{(xy)} = \log{x} + \log{y}$, we now have:

$$\underset{W,b}{\operatorname{argmax}} \sum_{i=1}^{m} \log{P(y^{(i)}|x^{(i)};W,b)}$$

If we recall our interpretation of the output of logistic regression, we have that:

$$P(y^{(i)}|x^{(i)};W,b) = \begin{cases} \hat{y}^{(i)} &\text{if } y^{(i)}=1 \\ 1 - \hat{y}^{(i)} &\text{if } y^{(i)}=0 \end{cases}$$

Putting all these pieces together, it is time to look at the loss function for logistic regression:

$$J(W,b) = \frac{-1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log(\hat{y}^{(i)}) + (1-y^{(i)}) \log(1-\hat{y}^{(i)}) \right]$$

Breaking it down:

  • First, we have a negative sign in front of the entire expression: this is because we want to maximise the log probability, but we can only minimise the loss function.

  • We divide the sum by $m$ - this corresponds to the “mean” probability, and since $m$ is the same for all $W, b$ it doesn’t affect our objective.

  • There are two terms in the summation: the first is $\log{P(y^{(i)}=1|x^{(i)};W,b)}$ and the second is $\log{P(y^{(i)}=0|x^{(i)};W,b)}$. These combine to give $\log{P(y^{(i)}|x^{(i)};W,b)}$, since when $y=0$ only the second term contributes, and likewise when $y=1$ only the first term contributes - the $y^{(i)}$ and $1 - y^{(i)}$ factors set the other term to zero respectively.

Code:

def log_loss(Y, Y_pred):
    # Y holds the true labels (0 or 1), Y_pred the predicted probabilities
    m = Y.shape[1]
    return (-1.0/m)*np.sum(Y*np.log(Y_pred) + (1-Y)*np.log(1-Y_pred))
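As a rough sketch of how this behaves (toy numbers, not from the breast cancer dataset, reusing log_loss and numpy from above): confident correct predictions give a small loss, while confident wrong predictions are heavily penalised:

Y = np.array([[1.0, 0.0, 1.0]])                   # true labels
confident_right = np.array([[0.95, 0.05, 0.9]])   # predicted probabilities
confident_wrong = np.array([[0.05, 0.95, 0.1]])

print(log_loss(Y, confident_right))  # ~0.07
print(log_loss(Y, confident_wrong))  # ~2.76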

So now that we have defined our loss function, let’s look at how we minimise it:

Gradient Descent:

Intuition:

Think of the loss function as a surface with hills and valleys, and the current value of the loss for the algorithm as a point on this surface. We want to minimise it, so we want to keep taking steps down the slope of the surface into the valley.

The slope of the surface is the gradient at that point, and so we want to keep taking steps in the direction down the gradient to the minimum, where the gradient is 0. This is the gradient descent algorithm.

Maths:

The gradient descent update equation is as follows:

$$W = W - \alpha \frac{\partial J}{\partial W}$$

$$b = b - \alpha \frac{\partial J}{\partial b}$$

$\alpha$ is the learning rate hyperparameter - this controls the size of the step we take each iteration. If $\alpha$ is too large we may overshoot the minimum and diverge, whereas if $\alpha$ is too small it will take too long to converge to the minimum.
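To make the effect of $\alpha$ concrete, here is a minimal sketch (a toy 1D example I've added, not from the post's datasets, with a hypothetical helper named descend) running gradient descent on $J(w) = w^2$, whose gradient is $2w$:

def descend(alpha, steps=20, w=1.0):
    # repeatedly apply the update w = w - alpha * dJ/dw, where dJ/dw = 2w
    for _ in range(steps):
        w = w - alpha*2*w
    return w

print(descend(0.01))  # ~0.67: alpha too small, still far from the minimum at 0
print(descend(0.1))   # ~0.01: converging nicely towards 0
print(descend(1.1))   # ~38:   alpha too large, the steps overshoot and diverge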

Now it’s time to compute $\frac{\partial{J}}{\partial W}$ and $\frac{\partial{J}}{\partial b}$.

When thinking about computing partial derivatives, it helps to have an intuitive understanding, to make sense of the maths expression that results, especially when we later try to take partial derivatives with respect to each value in a matrix.

Some ideas worth bearing in mind as we go into this calculation and more complicated expressions later on:

  • Partial Derivative Intuition: Think of $\frac{\partial{y}}{\partial{x}}$ loosely as quantifying how much $y$ would change if you gave the value of $x$ a little “nudge” at that point (see the small numerical sketch after this list).

  • Breaking down computations - we can use the chain rule to aid us: rather than trying to compute the derivative in one fell swoop, we break the computation up into smaller intermediate steps.

  • Computing the chain rule - when thinking about which intermediate values to include in our chain rule expression, think about the immediate outputs of equations involving $x$ - which other values get directly affected when I slightly nudge $x$?
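To make the “nudge” picture concrete, here is a tiny numerical sketch (my own toy example, not from the post): we break $y = (3x+1)^2$ into an intermediate step $u = 3x + 1$, nudge $x$ slightly, and check that the change in $y$ matches the chain rule product $\frac{dy}{du} \cdot \frac{du}{dx} = 2u \cdot 3$:

x, eps = 2.0, 1e-6
u = 3*x + 1                        # intermediate step: u = 7
y = u**2                           # y = 49

y_nudged = (3*(x + eps) + 1)**2    # recompute y after nudging x
numerical = (y_nudged - y) / eps   # ~42.0
analytic = 2*u * 3                 # chain rule: dy/du * du/dx = 42
print(numerical, analytic)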

Let’s apply these ideas in the context of linear and logistic regression - the key here is not so much the equations themselves, but building up this intuition, since most of the more advanced networks boil down to this same reasoning:

First, linear regression - we’ll break it down into two steps:

$$\hat{y}^{(i)} = \sum_{j=1}^{n} W_j x^{(i)}_j + b$$

$$J(W,b) = \frac{1}{2m} \sum_{i=1}^{m} (\hat{y}^{(i)} - y^{(i)})^2$$

Let’s consider one particular prediction $\hat{y}^{(i)}$. Intuitively, nudging it will only affect the squared error for that example $i$, and indeed, taking the partial derivative:

$$\frac{\partial{J}}{\partial{\hat{y}^{(i)}}} = \frac{1}{m}(\hat{y}^{(i)} - y^{(i)})$$

(note the factor of $\frac{1}{2}$ in $J$ has cancelled to leave $\frac{1}{m}$)

For this $i^{th}$ example let’s consider the effect of nudging a particular weight $W_j$ - intuitively, since it is multiplied by $x^{(i)}_j$ in the calculation, the corresponding amount $\hat{y}^{(i)}$ will move is $x^{(i)}_j$ times that nudge. And since the bias $b$ isn’t multiplied by anything, the nudge should be of the same magnitude. Indeed the maths reflects this:

$$\frac{\partial{\hat{y}^{(i)}}}{\partial{W_j}} = x^{(i)}_j$$

$$\frac{\partial{\hat{y}^{(i)}}}{\partial{b}} = 1$$

Now that we’ve looked at the partial derivative intuition, let’s compute the chain rule. If we nudge $W_j$ or $b$, intuitively we will affect all the predictions, so we need to sum across the examples $i$. So we have that:

$$\frac{\partial{J}}{\partial{W_j}} = \sum_{i=1}^{m}\frac{\partial{J}}{\partial{\hat{y}^{(i)}}} \cdot \frac{\partial{\hat{y}^{(i)}}}{\partial{W_j}} = \frac{1}{m} \sum_{i=1}^{m}(\hat{y}^{(i)} - y^{(i)}) \, x^{(i)}_j$$

and:

$$\frac{\partial{J}}{\partial{b}} = \sum_{i=1}^{m}\frac{\partial{J}}{\partial{\hat{y}^{(i)}}} \cdot \frac{\partial{\hat{y}^{(i)}}}{\partial{b}} = \frac{1}{m} \sum_{i=1}^{m}(\hat{y}^{(i)} - y^{(i)})$$
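A good habit (my own suggestion, not part of the original derivation) is to sanity-check these formulas with a finite-difference “nudge” on tiny random data, assuming numpy is imported as np, X stores examples as columns (n x m) and W, Y are row vectors:

import numpy as np

np.random.seed(0)
n, m = 3, 5
X = np.random.randn(n, m)   # features as rows, examples as columns
Y = np.random.randn(1, m)
W = np.random.randn(1, n)
b = 0.5

def loss(W, b):
    Y_pred = np.dot(W, X) + b
    return (1.0/(2*m))*np.sum(np.square(Y_pred - Y))

# analytic gradient for the first weight W_0, from the formula above
Y_pred = np.dot(W, X) + b
analytic = (1.0/m)*np.sum((Y_pred - Y)*X[0, :])

# finite-difference estimate: nudge W_0 and measure the change in J
eps = 1e-6
W_nudged = W.copy()
W_nudged[0, 0] += eps
numerical = (loss(W_nudged, b) - loss(W, b)) / eps

print(analytic, numerical)  # the two should agree to several decimal places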

Now we can move on to the case of logistic regression - we’ll introduce an intermediate step $z^{(i)}$:

$$J(W,b) = \frac{-1}{m} \sum_{i=1}^{m} \left[ y^{(i)}\log\hat{y}^{(i)} + (1-y^{(i)})\log(1-\hat{y}^{(i)}) \right]$$

$$\hat{y}^{(i)} = \sigma(z^{(i)}) = \frac{1}{1+e^{-z^{(i)}}}$$

$$z^{(i)} = \sum_{j=1}^{n} W_j x^{(i)}_j + b$$

Again, nudging $\hat{y}^{(i)}$ will only affect the error for that example $i$:

$$\frac{\partial{J}}{\partial{\hat{y}^{(i)}}} = \frac{-1}{m}\left(\frac{y^{(i)}}{\hat{y}^{(i)}} - \frac{1- y^{(i)}}{1- \hat{y}^{(i)}}\right)$$

The derivative of $\hat{y}^{(i)}$ with respect to $z^{(i)}$ can be rearranged:

$$\frac{d\hat{y}^{(i)}}{dz^{(i)}} = \frac{e^{-z^{(i)}}}{(1+e^{-z^{(i)}})^2} = \frac{1}{1+e^{-z^{(i)}}}\left(\frac{e^{-z^{(i)}}+1}{1+e^{-z^{(i)}}} - \frac{1}{1+e^{-z^{(i)}}}\right) = \sigma(z^{(i)})(1-\sigma(z^{(i)})) = \hat{y}^{(i)}(1-\hat{y}^{(i)})$$

This is a neat result to remember: $\sigma'(x) = \sigma(x)(1-\sigma(x))$.
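We can quickly convince ourselves of this identity numerically (a small check I've added, not in the original post):

import numpy as np

def sigmoid(x):
    return 1.0/(1.0 + np.exp(-x))

x, eps = 0.3, 1e-6
numerical = (sigmoid(x + eps) - sigmoid(x))/eps   # finite-difference estimate
analytic = sigmoid(x)*(1 - sigmoid(x))            # the identity above
print(numerical, analytic)                        # both ~0.2445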

Great - after that algebraic rearrangement we have a nice result, so let’s compute the chain rule. Note that a nudge in $z^{(i)}$ only affects $\hat{y}^{(i)}$, so:

$$\frac{\partial{J}}{\partial{z^{(i)}}} = \frac{\partial{J}}{\partial{\hat{y}^{(i)}}} \cdot \frac{d\hat{y}^{(i)}}{dz^{(i)}} = \frac{-1}{m}\left(\frac{y^{(i)}}{\hat{y}^{(i)}} - \frac{1- y^{(i)}}{1- \hat{y}^{(i)}}\right)\hat{y}^{(i)}(1-\hat{y}^{(i)})$$

So we have, multiplying out:

$$\frac{\partial{J}}{\partial{z^{(i)}}} = \frac{-1}{m}\left[ y^{(i)}(1- \hat{y}^{(i)}) - \hat{y}^{(i)}(1- y^{(i)}) \right] = \frac{1}{m}(\hat{y}^{(i)} - y^{(i)})$$

If we compare with linear regression, it turns out that the equations for the partial derivatives with respect to $W_j$ and $b$ are the same, since the equation for $z^{(i)}$ in logistic regression is the same as that for $\hat{y}^{(i)}$ in linear regression.

We just have one final step to finish things off! Just like in the last post, we can rewrite the equation for $\frac{\partial{J}}{\partial{W}}$ as a matrix multiplication: note that $W$ is a 1 x n matrix, $X$ is an n x m matrix and $Y$ is a 1 x m matrix. So:

$$\frac{\partial{J}}{\partial{W_{1j}}} = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y} - Y)_{1i} \, X_{ji} = \frac{1}{m}\sum_{i=1}^{m}(\hat{Y} - Y)_{1i} \, X^T_{ij}$$

$$\frac{\partial{J}}{\partial{W}} = \frac{1}{m}(\hat{Y} - Y) \cdot X^T$$

The equation for $\frac{\partial{J}}{\partial{b}}$ is the same, except that we use subscripts for the matrix, $Y_{1i}$, instead of $y^{(i)}$, and likewise for $\hat{Y}$.

So by breaking it down into many small steps, we have our final equations (for both linear and logistic regression):

$$\frac{\partial{J}}{\partial{W}} = \frac{1}{m}(\hat{Y} - Y) \cdot X^T$$

$$\frac{\partial{J}}{\partial{b}} = \frac{1}{m} \sum_{i=1}^{m}(\hat{Y}_{1i} - Y_{1i})$$

Code:

def grads(X, Y, Y_pred):
    # X is n x m; Y and Y_pred are 1 x m
    m = Y.shape[1]
    dW = (1.0/m)*np.dot(Y_pred-Y, X.T)
    db = (1.0/m)*np.sum(Y_pred-Y, axis=1, keepdims=True)
    return dW, db

dW, db = grads(X, Y, Y_pred)

# gradient descent update
W = W - alpha*dW
b = b - alpha*db
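To tie the pieces together, here is a minimal training-loop sketch for linear regression - my own summary with a hypothetical helper named train_linear, assuming X is n x m, Y is 1 x m, and reusing the MSE_loss and grads functions defined above:

def train_linear(X, Y, alpha=0.1, num_iters=1000):
    n = X.shape[0]
    W = np.zeros((1, n))                 # initialise parameters
    b = np.zeros((1, 1))
    for i in range(num_iters):
        Y_pred = np.dot(W, X) + b        # forward pass
        dW, db = grads(X, Y, Y_pred)     # gradients of the loss
        W = W - alpha*dW                 # gradient descent update
        b = b - alpha*db
        if i % 100 == 0:
            print("Iteration {}: loss = {:.4f}".format(i, MSE_loss(Y, Y_pred)))
    return W, b

For logistic regression, the only changes would be applying the sigmoid in the forward pass and swapping MSE_loss for log_loss - the grads function stays the same, as the derivation above showed.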

Conclusion:

With the last two posts, you have now coded up your first two machine learning algorithms and trained them! Machine learning can be applied to diverse datasets - the motivating examples you trained your algorithms on - housing prices and breast cancer classification - are two great examples of that!

If you made it through the maths, you’ve understood the fundamentals of most learning algorithms - we’ll use these ideas as building blocks for neural networks next.
