Section 1 - Instruction

Great work building neurons and understanding activation functions! Now let's scale up from one neuron to entire networks.

Imagine you have not just one neuron, but dozens working together. Each neuron still does its job: weights × inputs + bias, then an activation function.
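As a refresher, one neuron's job can be sketched in a few lines of plain Python (the numbers here are just illustrative, and ReLU is assumed as the activation):

```python
def neuron(inputs, weights, bias):
    # weights × inputs + bias
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ReLU activation: keep positive values, clamp negatives to 0
    return max(0.0, total)

output = neuron(inputs=[1.0, 2.0], weights=[0.5, -0.25], bias=0.1)
print(output)  # 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1
```

A network is just many of these neurons wired together.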

Engagement Message

How do you think multiple neurons might work better than just one?

Section 2 - Instruction

When we group neurons together that all receive the same inputs, we call this a layer. Think of a layer like a team of specialists all looking at the same problem.

Each neuron in the layer has its own weights and bias, so they each focus on different patterns in the data.
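A layer can be sketched as a list of neurons, each with its own weights and bias, all applied to the same inputs (the specific weights below are made up for illustration, with ReLU as the activation):

```python
def neuron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, total)  # ReLU activation

def layer(inputs, neurons):
    # Every neuron sees the SAME inputs but has its own weights and bias,
    # so each one can respond to a different pattern.
    return [neuron(inputs, w, b) for w, b in neurons]

team = [([1.0, 0.0], 0.0),   # specialist 1: reacts to the first input
        ([0.0, 1.0], 0.0),   # specialist 2: reacts to the second input
        ([0.5, 0.5], -0.2)]  # specialist 3: blends both inputs
print(layer([2.0, 4.0], team))
```

Three neurons, three different views of the same two inputs.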

Engagement Message

Why might having multiple "specialists" be better than one generalist?

Section 3 - Instruction

Here's where it gets interesting: we can stack layers on top of each other! The outputs from one layer become the inputs for the next layer.

Layer 1 → Layer 2 → Layer 3

Each layer processes information and passes refined results to the next layer.
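Stacking can be sketched by reusing the same layer function twice, feeding the first layer's output straight into the second (weights are illustrative, ReLU assumed):

```python
def layer(inputs, neurons):
    out = []
    for weights, bias in neurons:
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        out.append(max(0.0, total))  # ReLU activation
    return out

layer1 = [([1.0, -1.0], 0.0), ([0.5, 0.5], 0.0)]  # 2 inputs → 2 neurons
layer2 = [([1.0, 1.0], 0.0)]                      # 2 inputs → 1 neuron

x = [3.0, 1.0]
h = layer(x, layer1)   # Layer 1's output...
y = layer(h, layer2)   # ...becomes Layer 2's input
print(y)               # [4.0]
```

Notice that `layer2` never sees the raw input `x`, only the refined result `h`.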

Engagement Message

What's an example from real life where information gets processed in stages?

Section 4 - Instruction

Let's name the layers in a typical network:

  • Input layer: receives your original data
  • Hidden layer(s): the middle processing layers
  • Output layer: gives you the final answer

The "hidden" layers are called that because you can't directly see what they're learning—they're hidden inside the network.

Engagement Message

Which of these three layer types do you think does most of the "thinking"?

Section 5 - Example

Let's see how stacking layers looks in code using PyTorch. Here's a simple neural network with an input layer, one hidden layer, and an output layer:
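A minimal sketch of such a network (the 784 → 128 sizes match the explanation below; the 10 output neurons are an assumption, matching a 10-digit classifier):

```python
import torch
import torch.nn as nn

# Input layer (784 features) → hidden layer (128 neurons) → output layer
# (10 neurons, assuming one per digit 0-9).
model = nn.Sequential(
    nn.Linear(784, 128),  # input layer → hidden layer
    nn.ReLU(),            # activation function
    nn.Linear(128, 10),   # hidden layer → output layer
)

x = torch.randn(1, 784)   # one example with 784 input values
output = model(x)
print(output.shape)       # torch.Size([1, 10])
```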

nn.Linear creates a layer of neurons that does the weights × inputs + bias calculation we learned about. The numbers (784, 128) mean 784 inputs going to 128 neurons.

Notice how data flows through each layer.

Engagement Message

Can you spot where the activation function is used?

Section 6 - Instruction

Information flows in one direction: input → hidden → output. Each layer transforms the data a bit more, like an assembly line.

The first hidden layer might detect simple patterns. Deeper layers combine these into complex patterns. The output layer makes the final decision.

Engagement Message

Can you think of a recognition task that might work this way?

Section 7 - Instruction

Why use multiple layers instead of one giant layer? Layered networks are remarkably efficient at learning hierarchical patterns.

It's like learning to read: first you recognize lines and curves, then letters, then words, then meaning. Each layer builds on the previous one's discoveries.

Engagement Message

What's another skill where simple steps build into a complex understanding?

Section 8 - Practice

Type

Multiple Choice

Practice Question

Let's test your understanding of network architecture! In a 3-layer network for recognizing handwritten digits, which layer would most likely detect basic edges and lines?

A. Input layer
B. Hidden layer
C. Output layer
D. All layers equally

Suggested Answers

  • A
  • B - Correct
  • C
  • D