Welcome back to the third lesson of "Building and Applying Your Neural Network Library"! You've already made great progress modularizing your neural network codebase. In lesson 1, you organized your core components — dense layers and activation functions — using a modern JavaScript project structure. In lesson 2, you separated your training components by creating dedicated modules for loss functions and optimizers. Now, your codebase is clean, maintainable, and ready for the next step.
However, as you may have noticed, our training scripts still involve a lot of repetitive code. Every time we want to train a model, we have to manually create layers, set up the optimizer, define the loss function, write the training loop, and coordinate all these parts ourselves. This is not only tedious but also error-prone.
In this lesson, you'll learn how to orchestrate all these components into a unified, high-level interface. We'll build a powerful `Model` base class that acts as the conductor of our neural network, coordinating layers, optimizers, and loss functions through clean, intuitive methods like `compile()`, `fit()`, and `predict()`. We'll also implement a `SequentialModel` subclass that provides a much more elegant and maintainable API for building and training neural networks. Let's get started!
Think of a symphony orchestra: each musician is skilled at their instrument, but the magic happens when a conductor brings them together for a unified performance. Similarly, we've built excellent individual components (layers, optimizers, losses), but we need a conductor to orchestrate them into a seamless training experience.
Currently, our training process requires us to manually coordinate several moving parts:
- Instantiating layers and building the network architecture
- Creating an optimizer with specific parameters
- Defining the loss function
- Implementing the training loop with forward passes, loss calculations, backward passes, and weight updates
This manual orchestration is repetitive and error-prone. What we need is a Model class that serves as the conductor, providing a high-level API that handles the complexities of training while still giving us the flexibility to customize our network architecture, choose different optimizers and loss functions, and control training parameters.
Here's the kind of interface we're aiming for:
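Something along these lines; the layer constructor signatures below are illustrative placeholders, but `compile()`, `fit()`, and `predict()` are exactly the methods we'll build in this lesson:

```js
// The high-level API we're aiming for. DenseLayer and ActivationLayer
// stand in for your own lesson-1 layer classes.
const model = new SequentialModel();
model.add(new DenseLayer(2, 8));
model.add(new ActivationLayer('tanh'));
model.add(new DenseLayer(8, 1));
model.add(new ActivationLayer('sigmoid'));

model.compile('sgd', 'mse', { learningRate: 0.1 });
model.fit(xTrain, yTrain, { epochs: 1000, batchSize: 4 });
const predictions = model.predict(xTrain);
```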
Let's implement our `Model` orchestrator. This class will define the common interface and shared functionality for all types of neural networks. In JavaScript, we don't have abstract classes, but we can still design a base class that expects certain methods to be implemented by subclasses. To make this expectation clear and to help catch mistakes early, we'll provide default implementations of `_forward` and `_backward` in the base class that throw an error if they're not overridden. This way, if a subclass forgets to implement these methods, you'll get a clear error message at runtime.
neuralnets/models/model.js
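Here's a minimal sketch of that foundation. The `SGD` optimizer and MSE loss imports assume the module names you created in lesson 2; adjust the paths and names to your own layout:

```js
// A minimal sketch of the base class. Import paths assume the
// lesson-2 module names; adjust them to your own project layout.
import { SGD } from '../optimizers/sgd.js';
import { mse, mseDerivative } from '../losses/mse.js';

export class Model {
  constructor() {
    this.layers = [];           // layer stack (managed by subclasses)
    this.optimizer = null;      // set during compile()
    this.lossFn = null;         // loss function, e.g. MSE
    this.lossDerivative = null; // its derivative, used to seed backprop
    this.isCompiled = false;    // guards fit()/predict() before compile()
  }

  compile(optimizerName, lossName, options = {}) {
    // String-based lookup tables make adding new optimizers and
    // losses a one-line change.
    const optimizers = {
      sgd: () => new SGD(options.learningRate ?? 0.1),
    };
    const losses = {
      mse: { fn: mse, derivative: mseDerivative },
    };

    if (!optimizers[optimizerName]) {
      throw new Error(`Unknown optimizer: ${optimizerName}`);
    }
    if (!losses[lossName]) {
      throw new Error(`Unknown loss: ${lossName}`);
    }

    this.optimizer = optimizers[optimizerName]();
    this.lossFn = losses[lossName].fn;
    this.lossDerivative = losses[lossName].derivative;
    this.isCompiled = true;
  }

  // Default implementations that fail loudly: a subclass that forgets
  // to override these gets a clear error at runtime.
  _forward(input) {
    throw new Error('Subclasses must implement _forward()');
  }

  _backward(outputGradient) {
    throw new Error('Subclasses must implement _backward()');
  }
}
```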
This foundation establishes all the essential components our model will need to coordinate:
- Layer stack management: The `layers` array holds our network architecture
- Optimizer coordination: Centralized optimizer configuration and management
- Loss function setup: Unified loss function and derivative handling
- Compilation validation: The `isCompiled` flag prevents training misconfiguration
- Extensibility: The string-based configuration system makes adding new optimizers and loss functions straightforward
- Error signaling: The base `_forward` and `_backward` methods throw errors to ensure subclasses implement them
Now let's add the core training and prediction functionality. The base class provides `predict` and `fit` methods that rely on the subclass implementations of `_forward` and `_backward`. If these methods are not implemented, the error-throwing base methods will make the problem obvious.
neuralnets/models/model.js (continued)
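Here's a sketch of those two methods, continuing inside the `Model` class. One assumption to flag: the batch update step calls a hypothetical `optimizer.update(layer)` method; if your lesson-2 optimizer applies updates differently (for example, inside each layer's backward pass), adapt that line accordingly:

```js
// Continuing inside the Model class from above.
predict(input) {
  if (!this.isCompiled) {
    throw new Error('Model must be compiled before calling predict()');
  }
  // Run each sample through the subclass's forward pass.
  return input.map((sample) => this._forward(sample));
}

fit(xTrain, yTrain, { epochs = 100, batchSize = 1, verbose = true } = {}) {
  if (!this.isCompiled) {
    throw new Error('Model must be compiled before calling fit()');
  }

  for (let epoch = 0; epoch < epochs; epoch++) {
    // Fisher-Yates shuffle of the sample indices, repeated each epoch.
    const indices = xTrain.map((_, i) => i);
    for (let i = indices.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [indices[i], indices[j]] = [indices[j], indices[i]];
    }

    let epochLoss = 0;
    for (let start = 0; start < indices.length; start += batchSize) {
      const batch = indices.slice(start, start + batchSize);
      for (const idx of batch) {
        const output = this._forward(xTrain[idx]);             // forward pass
        epochLoss += this.lossFn(yTrain[idx], output);         // accumulate loss
        const grad = this.lossDerivative(yTrain[idx], output);
        this._backward(grad);                                  // backward pass
      }
      // Assumed interface: the optimizer applies each layer's
      // accumulated gradients after every batch.
      this.layers.forEach((layer) => this.optimizer.update(layer));
    }

    if (verbose && (epoch + 1) % 100 === 0) {
      const avgLoss = (epochLoss / xTrain.length).toFixed(4);
      console.log(`Epoch ${epoch + 1}/${epochs} - loss: ${avgLoss}`);
    }
  }
}
```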
The `fit()` method orchestrates the entire training process with several key responsibilities:
- Validation: Ensures the model is properly compiled before training
- Batch processing: Handles flexible batch sizes for memory efficiency and training stability
- Data management: Shuffles the data each epoch so the network doesn't pick up spurious patterns tied to sample order
- Training coordination: Manages the forward-backward-update cycle automatically
- Progress tracking: Provides informative training progress feedback
- Architecture independence: Works with any model architecture, as long as the subclass provides `_forward` and `_backward` methods (otherwise, a clear error is thrown)
This design ensures our training system can scale to handle different model types, training strategies, and dataset sizes without requiring code changes to the core training logic.
Now we can create our concrete `SequentialModel` class that implements the expected methods from our base `Model`. This class represents the familiar stack-of-layers architecture we've been working with, but now with a much cleaner interface and full integration into our extensible framework.
neuralnets/models/sequential.js
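A sketch of the subclass, assuming each layer exposes the `forward(input)` and `backward(gradient)` methods from lessons 1 and 2:

```js
// The concrete stack-of-layers model. It only manages a linear layer
// sequence and inherits compile()/fit()/predict() from Model.
import { Model } from './model.js';

export class SequentialModel extends Model {
  // Accepts a pre-built layer list, or starts empty.
  constructor(layers = []) {
    super();
    this.layers = layers;
  }

  // Build the network layer by layer; returning `this` allows chaining.
  add(layer) {
    this.layers.push(layer);
    return this;
  }

  // Flow data through the layers in order.
  _forward(input) {
    return this.layers.reduce((output, layer) => layer.forward(output), input);
  }

  // Propagate gradients through the layers in reverse order.
  _backward(outputGradient) {
    return this.layers.reduceRight(
      (grad, layer) => layer.backward(grad),
      outputGradient
    );
  }
}
```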
The `SequentialModel` demonstrates excellent separation of concerns:
- Initialization flexibility: Supports both empty initialization and pre-built layer lists
- Architecture building: The `add()` method provides the familiar interface for building networks layer by layer
- Sequential processing: The forward pass flows data through layers in order, and the backward pass propagates gradients in reverse
- Clean responsibility: Focuses solely on managing a linear sequence of layers while inheriting all training orchestration from the parent class
This design makes our library highly extensible — we can easily add other model types by implementing the expected interface, while all the training logic remains reusable.
Now let's see our orchestration in action! Here's how we can solve the XOR problem using our new high-level API, which is much cleaner and more maintainable than our previous manual approach. Notice how this same interface will work seamlessly when we extend our library with new layers, optimizers, or loss functions.
neuralnets/main.js
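A sketch of the script, using the illustrative `DenseLayer` and `ActivationLayer` constructors from earlier (substitute your actual lesson-1 layer classes and paths):

```js
import { SequentialModel } from './models/sequential.js';
import { DenseLayer } from './layers/dense.js';           // illustrative paths:
import { ActivationLayer } from './layers/activation.js'; // match your modules

// The four XOR input/output pairs.
const xTrain = [[0, 0], [0, 1], [1, 0], [1, 1]];
const yTrain = [[0], [1], [1], [0]];

// Build the architecture layer by layer.
const model = new SequentialModel();
model.add(new DenseLayer(2, 8));
model.add(new ActivationLayer('tanh'));
model.add(new DenseLayer(8, 1));
model.add(new ActivationLayer('sigmoid'));

// Configure training, then let the orchestrator handle the loop.
model.compile('sgd', 'mse', { learningRate: 0.1 });
model.fit(xTrain, yTrain, { epochs: 1000, batchSize: 4 });

// Inspect the trained model's predictions.
const predictions = model.predict(xTrain);
predictions.forEach((p, i) => {
  console.log(`Input: ${JSON.stringify(xTrain[i])} -> ${p[0].toFixed(3)}`);
});
```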
Notice how much cleaner this is compared to our previous training scripts! The beauty of this approach is that it maintains the flexibility to customize every aspect of our network while eliminating the repetitive boilerplate code. The extensible design means we can easily experiment with different architectures, optimizers, or hyperparameters without rewriting the training logic each time.
When you run your orchestrated model, you should see output similar to the following:
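Exact numbers vary from run to run (weights are randomly initialized and the data is shuffled), but with the sketch above the output has roughly this shape, anchored to the loss figures discussed next (the intermediate values are illustrative):

```
Epoch 100/1000 - loss: 0.2467
Epoch 200/1000 - loss: 0.1839
...
Epoch 900/1000 - loss: 0.0021
Epoch 1000/1000 - loss: 0.0014
Input: [0,0] -> 0.038
Input: [0,1] -> 0.963
Input: [1,0] -> 0.961
Input: [1,1] -> 0.042
```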
The results demonstrate excellent performance — our orchestrated model achieves the same learning quality as our previous manual implementations, but with much cleaner, more maintainable code. The loss decreases smoothly from 0.25 to 0.0014, and the final predictions correctly solve the XOR problem with high confidence. More importantly, the training process is now fully automated, consistently implemented, and ready to scale to more complex problems and architectures.
Outstanding work! You've successfully built the orchestration layer for your neural network library, creating a powerful and elegant high-level API that coordinates all your modular components. The `Model` and `SequentialModel` classes demonstrate how good software architecture can transform complex, error-prone manual processes into clean, automated workflows.
The orchestration pattern you've implemented provides simplified user interfaces, reduced code duplication, improved maintainability, enhanced extensibility for future development, and a solid foundation for scaling to production-level neural network applications.
In the next lesson, you'll put your complete neural network library to work on a real-world dataset, demonstrating how your library handles practical machine learning problems with multiple features and realistic data challenges.
