Back propagation
Back propagation is a gradient computation technique. It consists of propagating the error from a loss function backwards through a neural network, layer by layer, to compute the gradients used to update the network's weights.
Procedure
Consider the residual sum of squares (RSS) loss function, halved so that its derivative is cleaner.
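Written out with targets $y_i$ and network predictions $\hat{y}_i$ (notation assumed here, as the formula is not given in the page), the halved RSS over the data points is

$L = \frac{1}{2} \sum_{i} (y_i - \hat{y}_i)^2$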
After each feed-forward pass, in which one data point is passed through the neural network, the gradient of the loss function is computed.
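This is where the halving pays off: the factor of $\frac{1}{2}$ cancels the 2 brought down by the power rule, so the derivative of the loss with respect to a single prediction reduces to

$\frac{\partial L}{\partial \hat{y}_i} = \hat{y}_i - y_i$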
We compute the gradient separately at each layer, since the activation function can differ from layer to layer: the loss term is evaluated for each neuron of that layer and the results are summed.
Then, we compute the gradient of the loss at that layer with respect to the weights that produced that layer, which yields the updates for those weights.
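The steps above can be made concrete with a minimal sketch: a one-hidden-layer network with sigmoid activations, trained on the halved-RSS loss for a single data point. The layer sizes, the learning rate lr, and the use of NumPy are illustrative assumptions, not part of this article.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One data point: 3 inputs, 4 hidden neurons, 1 output (sizes are illustrative).
x = rng.normal(size=3)
y = np.array([1.0])
W1 = rng.normal(size=(4, 3))   # weights that create the hidden layer
W2 = rng.normal(size=(1, 4))   # weights that create the output layer
lr = 0.1

# Feed-forward pass.
z1 = W1 @ x          # pre-activations of the hidden layer
a1 = sigmoid(z1)     # hidden-layer activations
z2 = W2 @ a1         # pre-activations of the output layer
y_hat = sigmoid(z2)  # network prediction

loss = 0.5 * np.sum((y - y_hat) ** 2)  # halved RSS

# Backward pass: gradient of the loss, computed layer by layer.
dL_dyhat = y_hat - y                       # the 1/2 cancels the power rule's 2
delta2 = dL_dyhat * y_hat * (1 - y_hat)    # chain through sigmoid'(z2)
dL_dW2 = np.outer(delta2, a1)              # gradient w.r.t. output-layer weights

delta1 = (W2.T @ delta2) * a1 * (1 - a1)   # propagate the error back through W2
dL_dW1 = np.outer(delta1, x)               # gradient w.r.t. hidden-layer weights

# Gradient-descent update of the weights.
W2 -= lr * dL_dW2
W1 -= lr * dL_dW1
```

Note how the per-layer structure of the procedure shows up in the code: each layer's error term (delta2, delta1) is obtained from the one after it, and each weight gradient only needs that layer's error and the activations that fed into it.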