Welcome back to the fascinating world of machine learning! Today's mission is to enhance model performance through the technique of hyperparameter tuning. Let's start with a quick refresher - what exactly are hyperparameters?
In machine learning, hyperparameters are settings whose values are chosen before training begins; they are external to the model and are not learned from the data.
Consider a simple analogy. If you think of your machine learning model as a car, the model parameters are like the internal mechanisms - the engine, gears, and tires, determined by how the car is built - while the hyperparameters are like external settings such as the angle of your steering wheel or the position of your seat, which you adjust to suit your preferences or a specific journey.
In the realm of machine learning algorithms, hyperparameters include the K in K-Nearest Neighbors, the kernel in Support Vector Machines, and `C` and `max_iter` in Logistic Regression. Conversely, the weights or coefficients in Linear Regression or Logistic Regression are examples of model parameters, because they are learned from the data during training.
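To make the distinction concrete, here is a short sketch using scikit-learn. The specific estimators and values are illustrative, not taken from the lesson:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression

# Hyperparameters: chosen by us before training starts.
knn = KNeighborsClassifier(n_neighbors=5)  # the K in K-Nearest Neighbors
svm = SVC(kernel="rbf")                    # the kernel in Support Vector Machines

# Model parameters: learned from the data during training.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 2.0])
lin = LinearRegression().fit(X, y)
print(lin.coef_)  # the coefficient is learned by fit(), not set upfront
```

Notice that `n_neighbors` and `kernel` appear in the constructor calls, before any data is seen, while `coef_` only exists after `fit()` has run.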
Let's look at how to define a hyperparameter, `C`, in a Logistic Regression instance using `sklearn`.
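The snippet referred to below does not appear in the text, so here is a minimal sketch of what it describes; the value `C=0.5` is illustrative:

```python
from sklearn.linear_model import LogisticRegression

# C is set when the model is created, before it is fit to any data.
model = LogisticRegression(C=0.5)
print(model.C)
```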
In the above code snippet, `C` is a hyperparameter we choose manually when creating the Logistic Regression model. It is set before the model is fit to the data and is the inverse of the regularization strength: smaller values of `C` mean stronger regularization.
