Basics of Neural Networks: Backpropagation Through Multi-variable Functions
- Authors
  - Qi Wang
Introduction
One function that neural networks commonly use to evaluate their performance on a training dataset is the loss function. For binary classification, this function may look like:

$$L = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right]$$

where $N$ is the number of training samples, $y_i$ is the correct class of sample $i$, and $\hat{y}_i$ is the predicted probability that sample $i$ belongs to class 1.
Let me quickly explain this function for anyone who doesn't know it. In binary classification, there are two classes: 0 and 1. Because $y_i$ is either 0 or 1, only half of the expression inside the summation actually matters for each sample. The further away the predicted value $\hat{y}_i$ (which is a probability) is from the correct class, the closer the component inside the log gets to zero, which means the log value itself goes to negative infinity. There is a negative sign in front to ensure that the loss is always non-negative. Thus, the more accurate the model is, the lower the loss value. We divide the summation by the sample size $N$ to normalize it.
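To make this concrete, here is a minimal sketch of the loss above in plain Python (the names `y_true` and `y_pred` are my own, not from any library):

```python
import math

def binary_cross_entropy(y_true, y_pred):
    """Average binary cross-entropy over N samples.

    y_true: list of correct classes (0 or 1)
    y_pred: list of predicted probabilities in (0, 1)
    """
    total = 0.0
    for y, p in zip(y_true, y_pred):
        # Only one of the two terms is nonzero for each sample,
        # since y is either 0 or 1.
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# A confident, correct prediction gives a small loss...
print(binary_cross_entropy([1, 0], [0.9, 0.1]))  # ~0.105
# ...while a confident, wrong one gives a large loss.
print(binary_cross_entropy([1, 0], [0.1, 0.9]))  # ~2.303
```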
Now, intuitively, we want to minimize this value because that would mean we are accurately predicting data in our training set. But how do we do that?
A Perceptron? What's that??
Image Source
The diagram shown above is a perceptron, or one unit in a layer of a multi-layer neural network. Each perceptron has a number of inputs $x_1, x_2, \ldots, x_n$. Each input $x_i$ is multiplied by a weight parameter $w_i$, the products are summed together, and lastly a bias parameter $b$ is added:

$$z = \sum_{i=1}^{n} w_i x_i + b$$
The result $z$ is then squished by an activation function; for this example, we will be using the sigmoid activation function:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$
The output of the perceptron (what the sigmoid function returns) is then passed into the next layer of the neural network, until it reaches the final perceptron that outputs the answer to the machine learning task. After predicting, the answers across all of the training samples are evaluated with the loss function, which tells us how much our model can still be improved.
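A single perceptron's forward pass fits in a few lines of Python. This is just a sketch of the computation described above; the example inputs and weights are arbitrary:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def perceptron(inputs, weights, bias):
    # Weighted sum of the inputs, plus the bias...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...squished into (0, 1) by the sigmoid activation.
    return sigmoid(z)

print(perceptron([1.0, 2.0], [0.5, -0.25], 0.1))  # sigmoid(0.1) ≈ 0.525
```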
Please note that there is a wide variety of activation functions: ReLU, Leaky ReLU, tanh, etc. For more information about these, feel free to read this wiki post!
Updating Parameters
As seen in the perceptron model, the neural network determines its answer based on the parameters $w_i$ and $b$. So how would the neural network update these parameters to improve its performance?
Let's think back to some calculus. The derivative of a curve at a point $x$ tells us how much the value of $f(x)$ changes when $x$ changes slightly. For example, given this function:

$$f(x) = x^2$$

We get:

$$f'(x) = 2x$$

Therefore, $f'(1) = 2$ tells us that increasing $x$ by a small value at $x = 1$ also increases the value of $f(x)$.
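You can sanity-check this numerically with a finite difference: nudge $x$ a little and watch how $f(x)$ moves. The step size `h = 1e-6` is just an arbitrary small value:

```python
def f(x):
    return x ** 2

h = 1e-6
x = 1.0
# (f(x + h) - f(x)) / h approximates f'(x); here it should be close to 2.
print((f(x + h) - f(x)) / h)  # ~2.000001
```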
So what does this have to do with the loss function? Well, let's talk about partials a little. A partial derivative is the equivalent of a derivative for multi-variable functions: it measures how the function changes as one variable changes while all the others are held constant. Specifically, we want to take the partial of the loss function with respect to each of our parameters to find out how much we should update each parameter in every training step.
Suppose we have a multivariable function:

$$f(x, y, z) = 2x + y^2 + 3z$$

Let's take the partial of this function with respect to $z$:

$$\frac{\partial f}{\partial z} = 3$$

By evaluating $\frac{\partial f}{\partial z}$, we know that by increasing $z$, the value of the function will also increase.
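The same finite-difference trick from before works for partials: hold $x$ and $y$ fixed and nudge only $z$. Again, the values chosen here are arbitrary:

```python
def f(x, y, z):
    return 2 * x + y ** 2 + 3 * z

h = 1e-6
x, y, z = 1.0, 2.0, 3.0
# Nudging only z approximates ∂f/∂z; it should be close to 3.
print((f(x, y, z + h) - f(x, y, z)) / h)  # ~3.0
```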
This same idea applies to neural networks: if we calculate $\frac{\partial L}{\partial w}$ for every parameter $w$, we can easily find how much to update each parameter by. The hard part... is how... The loss function is the result of tons of parameters, making it impractical to work out every partial by hand.
Backpropagation, finally
To learn the concept behind recursively backpropagating through the loss function, we will use a way, way simpler example. In this example, we will use $a$, $b$, and $c$ as the parameters to train on. The series of steps that our watered-down "neural network" takes:

$$d = a + b$$

$$f = d \cdot c$$

Expanding the whole expression, we get:

$$f(a, b, c) = (a + b) \cdot c$$
The above image splits the function $f$ into a series of basic computations, which is helpful in visualizing how the gradient of each variable is backpropagated through the function.
Let's start with variable $f$. What is $\frac{\partial f}{\partial f}$? Unsurprisingly, the answer is 1. This means if we increase the value of $f$ by some amount, $f$ will also increase by that amount. Wow.
Now let's find $\frac{\partial f}{\partial d}$. We know $f = d \cdot c$, thus $\frac{\partial f}{\partial d} = c$.
Everything has been fairly simple so far, but this next one gets a little more complicated. We need to find $\frac{\partial f}{\partial a}$. So how do we do that? Recall the chain rule in calculus? It states:

$$\frac{df}{dx} = \frac{df}{du} \cdot \frac{du}{dx}$$

To account for multiple variables, we simply exchange the derivatives with partials to get:

$$\frac{\partial f}{\partial a} = \frac{\partial f}{\partial d} \cdot \frac{\partial d}{\partial a}$$
Since we know $\frac{\partial f}{\partial d} = c$ from the previous step, we can easily find $\frac{\partial f}{\partial a}$ once we have $\frac{\partial d}{\partial a}$. We know that $d = a + b$, so $\frac{\partial d}{\partial a} = 1$, which gives $\frac{\partial f}{\partial a} = c \cdot 1 = c$.
There might be a pattern that you are noticing! When the operation is multiplication, the partial derivative is simply the value of the other operand multiplied by the previous step's partial derivative. When the operation is addition, the partial derivative is just 1 times the previous step's partial derivative.
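Here is the whole walkthrough as a short script: a forward pass through the two basic computations, then the partials in reverse order. The starting values for $a$, $b$, and $c$ are arbitrary:

```python
# Forward pass: the two basic computations.
a, b, c = 2.0, -1.0, 3.0
d = a + b        # d = 1.0
f = d * c        # f = 3.0

# Backward pass: walk the computations in reverse.
df_df = 1.0              # base case: ∂f/∂f = 1
df_dd = c * df_df        # multiplication: other operand times previous partial
df_dc = d * df_df        # same rule, other side of the product
df_da = 1.0 * df_dd      # addition: 1 times the previous partial
df_db = 1.0 * df_dd

print(df_da, df_db, df_dc)  # 3.0 3.0 1.0
```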
There are a few more operations to figure out, which I will leave as an exercise for you! Try to find the rules for powers, subtraction, and division (which is really just a power of -1)! To start, try finding the partial derivatives of $f$ with respect to the rest of the variables, $b$ and $c$ (the diagram of the expression above may help).
Why is it called Gradient Descent?
You may have heard that the process of updating parameters in neural networks is called Gradient Descent, but why is that? To start, let's write the definition of a gradient:

$$\nabla f = \left[ \frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n} \right]$$
Does this look similar to what we have already done? Yes! The gradient is just the vector of all the partial derivatives we computed through backpropagation.
Of course, this is the direction of greatest change, which would be an increase in the loss function. We, however, want to minimize the loss, so instead of moving our parameters in this direction, we move them in the negative of this direction:

$$\theta \leftarrow \theta - \alpha \nabla L$$

where $\alpha$ is a small learning rate that controls the size of each step.
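Putting it all together, one step of gradient descent is just "parameter minus learning rate times partial". Here is a sketch on a toy loss of my own choosing (with a minimum at $w = 2$, $b = -1$; the learning rate 0.1 is an arbitrary choice):

```python
def loss(w, b):
    # A toy loss surface with its minimum at w = 2, b = -1.
    return (w - 2) ** 2 + (b + 1) ** 2

def gradient(w, b):
    # Partials of the toy loss, found exactly as in the walkthrough above.
    return 2 * (w - 2), 2 * (b + 1)

w, b = 0.0, 0.0
learning_rate = 0.1
for step in range(100):
    dw, db = gradient(w, b)
    # Move *against* the gradient to descend the loss surface.
    w -= learning_rate * dw
    b -= learning_rate * db

print(round(w, 3), round(b, 3))  # close to 2.0 and -1.0
```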
Conclusion
That's it for this time! Hopefully, that was helpful towards your understanding of neural networks and the underlying process behind them. Make sure to do the little exercise I left to solidify your knowledge (you might need to brush up on some calculus rules!). If there are any questions, feel free to contact me. I'll see you all in the next post!