Section 1 - Instruction

Last time, we learned how to measure a network's error with a loss function and find the right direction with gradients.

But how do we adjust the weights of the hidden layers, which don't see the final error directly? This is where backpropagation comes in.

Engagement Message

In one word, what does backpropagation help us update?

Section 2 - Instruction

Backpropagation is the core algorithm for training neural networks. It works by propagating the error signal from the output layer all the way back to the input layer.

Think of it as a chain of command, but for assigning responsibility for mistakes.
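That chain of responsibility is just the chain rule from calculus. Here is a minimal sketch with three made-up dependent values (the numbers and variable names are illustrative, not part of the lesson):

```python
# Toy chain: the loss depends on c, c depends on b, b depends on a.
# Backpropagation multiplies the local derivatives backward.
a = 2.0
b = 3.0 * a          # b = 3a        -> 6.0
c = b + 1.0          # c = b + 1     -> 7.0
loss = c ** 2        # L = c^2       -> 49.0

# Walk the chain in reverse, starting from the loss:
dL_dc = 2.0 * c      # dL/dc = 2c    -> 14.0
dL_db = dL_dc * 1.0  # dc/db = 1     -> 14.0
dL_da = dL_db * 3.0  # db/da = 3     -> 42.0
```

Each step only needs the derivative handed back from the step after it, which is exactly how the error signal moves through a network's layers.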

Engagement Message

Does this make sense so far?

Section 3 - Instruction

The process starts right after we calculate the loss. The output layer is first. It calculates the gradients for its weights, figuring out how it directly contributed to the final error.

This first step is the most straightforward, as it's closest to the mistake.
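As a hypothetical sketch of this first step, consider a single output neuron with a linear activation and squared-error loss (all weights and numbers below are illustrative assumptions):

```python
import numpy as np

h = np.array([0.5, 0.2])      # outputs of the last hidden layer
w_out = np.array([0.4, 0.9])  # output-layer weights
y_true = 1.0                  # target value

# Forward: prediction and loss
y_pred = w_out @ h                     # 0.4*0.5 + 0.9*0.2 = 0.38
loss = 0.5 * (y_pred - y_true) ** 2

# Backward, step one: the output layer computes gradients for
# its own weights. dL/dy_pred = (y_pred - y_true), and
# dL/dw_out = dL/dy_pred * h.
error = y_pred - y_true                # -0.62
grad_w_out = error * h                 # [-0.31, -0.124]
```

Notice the output layer only needs the loss and its own inputs, which is why this step is the most straightforward.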

Engagement Message

Where in the network does the backpropagation process begin?

Section 4 - Instruction

Next, the error signal is passed from the output layer to the last hidden layer. This hidden layer uses that incoming error to calculate the gradients for its own weights.

It essentially asks, "How did my outputs cause the error in the next layer?"
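Continuing the sketch, here is how a hidden layer might turn the incoming error signal into gradients for its own weights. The numbers and the use of linear units are illustrative assumptions:

```python
import numpy as np

x = np.array([1.0, -1.0])     # input to the hidden layer
w_out = np.array([0.4, 0.9])  # output-layer weights
error = -0.62                 # error signal passed back from the output layer

# Chain rule: how much did each hidden output contribute to the
# error in the next layer? dL/dh = w_out * error
grad_h = w_out * error        # [-0.248, -0.558]

# With linear hidden units, dL/dw_hid[i, j] = grad_h[i] * x[j]
grad_w_hid = np.outer(grad_h, x)
```

The key point: the hidden layer never sees the loss directly; it only sees the error signal routed back through the output-layer weights.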

Engagement Message

What information does a hidden layer need from the layer in front of it?

Section 5 - Instruction

This process repeats, layer by layer, moving backward through the network. Each layer receives an error signal from the layer ahead of it and calculates its own gradients.

This is why it's called "back"-propagation—the error flows in reverse!
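The layer-by-layer repetition can be sketched as a backward loop. This is a simplified sketch over linear layers with randomly chosen weights, not a full implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.standard_normal((3, 4)),   # weight matrices, input -> output order
          rng.standard_normal((4, 2))]

# Forward pass, saving each layer's input for use during backprop
activations = [rng.standard_normal(3)]
for W in layers:
    activations.append(activations[-1] @ W)

# Error signal at the output (squared-error loss against a target of ones)
grad_out = activations[-1] - np.ones(2)

# Backward pass: visit the layers in reverse order
grads = []
for W, a in zip(reversed(layers), reversed(activations[:-1])):
    grads.append(np.outer(a, grad_out))  # this layer's weight gradients
    grad_out = W @ grad_out              # pass the error signal further back
grads.reverse()                          # reorder to match layers
```

Each pass of the loop does the same two things: compute this layer's gradients, then hand the error signal to the layer behind it.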

Engagement Message

Do you understand why it is called backpropagation instead of forward propagation?


Section 6 - Instruction

Once the error has been propagated all the way back, the network knows how every single weight contributed to the total loss.

Finally, it uses these gradients to update all the weights slightly, completing one full training cycle. The network is now a little bit smarter!
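The final update step is a small nudge against each gradient. In this minimal sketch, the learning rate and numbers are illustrative assumptions:

```python
import numpy as np

w = np.array([0.4, 0.9])            # current weights
grad_w = np.array([-0.31, -0.124])  # gradients from backpropagation
learning_rate = 0.1                 # size of the nudge

# Step each weight slightly against its gradient to reduce the loss
w = w - learning_rate * grad_w      # [0.431, 0.9124]
```

One forward pass, one backward pass, one update: that is a full training cycle.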

Engagement Message

What is the final step after all gradients are calculated?

Section 7 - Practice

Type

Sort Into Boxes

Practice Question

Let's sort the steps of training a neural network into the correct phase.

Labels

  • First Box Label: Forward Pass
  • Second Box Label: Backward Pass

First Box Items

  • Making a prediction
  • Input flows forward
  • Calculating loss

Second Box Items

  • Calculating gradients
  • Updating weights
  • Error flows backward