Lesson Introduction

Imagine you are cleaning your room and organizing items step-by-step. Data preprocessing is similar! In this lesson, we'll prepare a dataset for analysis by integrating multiple preprocessing techniques. Our goal is to make the data clean and ready for useful insights.
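As a starting point, here is a minimal sketch of loading and inspecting the data, assuming the Titanic dataset that ships with seaborn, loaded into a DataFrame named titanic:

```python
import seaborn as sns

# Load the Titanic dataset bundled with seaborn into a pandas DataFrame
titanic = sns.load_dataset('titanic')

# Inspect the first few rows and the column summary
print(titanic.head())
titanic.info()
```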

Drop Unnecessary Columns

Not all columns are useful for our analysis. Some might be redundant or irrelevant. For example, columns like deck, embark_town, alive, class, who, adult_male, and alone may not add much value. Let's drop these columns.
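A minimal sketch of this step, assuming the data is already loaded into a DataFrame named titanic:

```python
# Drop columns that add little value for our analysis
titanic = titanic.drop(columns=['deck', 'embark_town', 'alive', 'class', 'who', 'adult_male', 'alone'])

print(titanic.columns)
```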

We use the .drop() method, which takes the list of column names to drop through its columns argument.

Handle Missing Values

Data often has missing values, which are problematic for many algorithms. In our Titanic dataset, we can fill missing values with reasonable substitutes like the median for numerical columns and the mode for categorical columns.
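One way to do this, sketched here under the assumption that age is numerical and embarked is categorical:

```python
# Replace missing values: median for the numerical column, mode for the categorical one
titanic = titanic.fillna({
    'age': titanic['age'].median(),
    'embarked': titanic['embarked'].mode()[0],
})
```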

Here, we use the fillna method to replace missing values (NaN) in a DataFrame with specified values. You can provide a single value, a dictionary mapping column names to different substitutes, or use aggregations such as the median or mode for more meaningful replacements, as we do here.

Let's check if it worked.
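A quick way to verify, continuing with the same titanic DataFrame:

```python
# Count remaining missing values in each column
print(titanic.isnull().sum())
```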

This line outputs the count of missing values for each column in the titanic DataFrame. The isnull() method returns a DataFrame of the same shape containing True where a value is missing and False where it is present. When we sum these boolean values, True counts as 1 and False as 0, so any column that still has missing values will show a positive sum.

Assuming the seaborn Titanic dataset with the columns we kept, the output looks roughly like this:
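```
survived    0
pclass      0
sex         0
age         0
sibsp       0
parch       0
fare        0
embarked    0
dtype: int64
```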

We see zeros everywhere, indicating there are no more missing values in the DataFrame.

Encode Categorical Values

Categorical values need to be converted into numbers for most algorithms. For example, the sex and embarked columns in our dataset are categorical. We'll use the get_dummies function to encode these columns.
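A sketch of the encoding step, continuing with the same titanic DataFrame:

```python
import pandas as pd

# One-hot encode the categorical columns, producing 0/1 integer columns
titanic = pd.get_dummies(titanic, columns=['sex', 'embarked'], dtype=int)

print(titanic.head())
```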

Note the dtype=int parameter. It specifies that the new encoded columns should hold 0 or 1; without it, they would hold False or True.

Scale Numerical Values

Scaling numerical values is crucial, especially for algorithms that rely on the distance between data points. We will standardize the age and fare columns so they have a mean of 0 and a standard deviation of 1.
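Here is a minimal sketch using scikit-learn's StandardScaler (one common way to standardize; the exact tool is an assumption here):

```python
from sklearn.preprocessing import StandardScaler

# Standardize age and fare to zero mean and unit standard deviation
scaler = StandardScaler()
titanic[['age', 'fare']] = scaler.fit_transform(titanic[['age', 'fare']])

print(titanic[['age', 'fare']].describe())
```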

Lesson Summary

Congratulations! You've cleaned and prepared the Titanic dataset using multiple preprocessing techniques. Here's a quick recap:

  • Loaded and inspected the dataset.
  • Dropped unnecessary columns to focus on valuable data.
  • Handled missing values to ensure the dataset is complete.
  • Encoded categorical values to make them usable by algorithms.
  • Scaled numerical values to improve model performance.

Now it's time to put your newfound skills to the test! In the upcoming practice session, you'll apply these preprocessing techniques to another dataset. This hands-on experience will solidify your understanding and give you confidence in tackling data preprocessing in real-world scenarios. Let's get started!
