Introduction

Welcome to this next lesson on Saving and Loading a TensorFlow Model. By the end of this lesson, you'll understand why saving and loading models matters, and you'll know how to save a trained TensorFlow model, load it back from the saved file, and validate the loaded model. This gives you a full-cycle, hands-on understanding of how to handle a model once training is done. With code examples that train, save, load, and test a model, let's start our lesson!

The Importance of Saving and Loading Models

When building machine learning models, it's important to save them for several reasons. The most obvious one is efficiency: once you've trained a complex model that took hours or even days to train, you want to keep the learned weights so you can reuse the model later without retraining it.

Beyond that, a saved model can be shared with others: if you're collaborating with other professionals or publishing your results, a saved model makes it easier for others to reproduce your work. Finally, when deploying a model to production, you'll need to load the trained model to make predictions on new data.

In the previous lessons, we trained a TensorFlow model. Now, let's save it!

Quick Refresher: Loading Data and Training the Model

Before we focus on saving our model, let's briefly revisit the key steps we took to load our data and train the model. Here's the code snippet we used to preprocess the Iris dataset:
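The original snippet isn't reproduced here, but a minimal sketch of what data_preprocessing.py might contain is shown below. It assumes scikit-learn is used to load and split the Iris dataset, that the features are standardized, and that the labels are one-hot encoded for the categorical crossentropy loss used later; the details of your version may differ.

```python
# data_preprocessing.py -- illustrative sketch; your version may differ
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.utils import to_categorical


def load_preprocessed_data():
    # Load the Iris dataset: 150 samples, 4 features, 3 classes
    iris = load_iris()
    X, y = iris.data, iris.target

    # Split into training and test sets
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Standardize features to zero mean and unit variance
    scaler = StandardScaler()
    X_train = scaler.fit_transform(X_train)
    X_test = scaler.transform(X_test)

    # One-hot encode labels for the categorical crossentropy loss
    y_train = to_categorical(y_train)
    y_test = to_categorical(y_test)

    return X_train, X_test, y_train, y_test
```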

And to train a model with our preprocessed data:
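The exact training snippet isn't shown either, but a sketch consistent with the summary below would look roughly like this (the hidden layer sizes are illustrative assumptions):

```python
import tensorflow as tf
from data_preprocessing import load_preprocessed_data

# Load the preprocessed Iris data
X_train, X_test, y_train, y_test = load_preprocessed_data()

# Build a sequential model whose input shape matches the dataset
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(y_train.shape[1], activation='softmax')
])

# Compile with the Adam optimizer and categorical crossentropy loss
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Train for 150 epochs with a batch size of 5, validating on the test data
model.fit(X_train, y_train,
          epochs=150,
          batch_size=5,
          validation_data=(X_test, y_test))
```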

In summary:

  1. We began by loading the preprocessed data using the load_preprocessed_data() function from our data_preprocessing.py file to get our datasets: X_train, X_test, y_train, and y_test.
  2. We constructed a sequential model with TensorFlow's Keras API, with the input shape tailored to our dataset, including several dense layers with ReLU and Softmax activations.
  3. The model was then compiled using the Adam optimizer and the categorical crossentropy loss function, with accuracy as a metric.
  4. Finally, we trained the model for 150 epochs with a batch size of 5, validating its performance on the test data throughout the training process.

With our training steps revisited, let's move on to saving our well-trained model.

Saving a TensorFlow Model

After training your model, you can save its architecture, its learned weights, and the configuration of its optimizer, so you can later resume training exactly where you left off.

To save a model in TensorFlow, you use the save() method of the Model class. Its key argument is filepath, a string that specifies the path and filename where the model should be saved.

Using the provided solution code as guidance, let's save our model:
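Assuming the trained model object is named model, as in the training sketch above, saving it is a single call:

```python
# Save the model's architecture, weights, and optimizer state to one file
model.save('iris_model.keras')
```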

By running this code, TensorFlow will write a file named iris_model.keras in the current working directory. This file is saved with the .keras extension, which is a standard format used by TensorFlow for saving complete models. The file contains everything we need to use the model: its architecture, its learned parameters, and the configuration of its optimizer.

Loading a TensorFlow Model

Now that the model is saved, we can load it at any time without needing to train it again or rewrite its architecture by hand. To load a model in TensorFlow, we use the load_model() function from the tensorflow.keras.models module.

This function takes the filepath of the saved model, a string specifying its path and filename, and returns the loaded model. Let's load the model using the code from the script:
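A minimal sketch, assuming the model was saved as iris_model.keras in the current working directory:

```python
from tensorflow.keras.models import load_model

# Rebuild the model, including its weights and optimizer state, from the file
loaded_model = load_model('iris_model.keras')
```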

And voila! You've just loaded a pretrained model. If you're thinking about using it to make predictions, you're on the right track. But before making any predictions, let's verify that the loaded model still works as expected.

Verifying the Loaded Model

To verify whether the loaded model works as expected, we evaluate it using the same test data that we used when training the original model. In TensorFlow, we can use the evaluate() method of the Model class to evaluate a model. This method accepts test data and labels as arguments and returns the loss value and metrics values for the model in test mode.

Let's evaluate the loaded model using the test data:
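A short sketch, assuming the loaded_model object and the X_test and y_test arrays from earlier:

```python
# Evaluate the loaded model on the same test data used during training
loss, accuracy = loaded_model.evaluate(X_test, y_test)
print(f"Test Loss: {loss:.4f}, Test Accuracy: {accuracy:.4f}")
```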

The loss and accuracy reported here should match the values the original model achieved on the same test data, indicating that the loaded model performs just as well as it did before being saved and demonstrating that the save and load operations were successful.

Lesson Summary and Practice

Congratulations! In this lesson, we worked through saving and loading TensorFlow models. We started by understanding why saving and loading models is important, moved on to saving a trained TensorFlow model, then loaded it back and validated the loaded model by evaluating it on the test data.

This is pivotal in a real-world context: saving and loading models lets us reuse trained models without retraining them, which improves efficiency, reproducibility, and the ability to share our work. Remember that practice makes perfect, so get ready to dive into some hands-on activities to solidify the concepts covered in this lesson. Happy learning!
