This is the **fourth** post in my scikit-learn tutorial series. If you haven't read them yet, I strongly recommend starting with my first two posts — it'll be much easier to follow along:

## Sklearn tutorial

This fourth module introduces the concept of **linear models**, using the famous **linear regression** and **logistic regression** models as working examples.

In addition to these basic linear models, we show how to use feature engineering to **handle nonlinear problems using only linear models**, as well as the concept of **regularization** to prevent overfitting.

Altogether, these concepts enable us to create simple yet powerful models that can tackle many ML problems, including nonlinear ones, with fine-tuned hyperparameters and without overfitting.
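As a quick preview of how these pieces fit together in scikit-learn, here is a minimal sketch combining polynomial feature engineering with a regularized linear model (the toy dataset and hyperparameter values are illustrative, not from this post):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

# Toy nonlinear data: y = x^2 + noise
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=100)

# Feature engineering (polynomial features) lets a purely linear,
# regularized model (Ridge) fit a nonlinear relationship
model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0))
model.fit(X, y)
print(model.score(X, y))  # R^2 close to 1 on this easy problem
```

The pipeline stays a linear model in its *engineered* features, which is exactly the trick developed later in this module.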

*All graphs and images are made by the author.*

**Linear models are models that “fit” or “learn” by setting coefficients such that they eventually rely only on a linear combination of the input features.** In other words, if the input data is made of N features f_1 to f_N, the model is at some point based on the linear combination:

y = beta_0 + beta_1 * f_1 + beta_2 * f_2 + … + beta_N * f_N

The coefficients the model learns are the N+1 coefficients beta. The coefficient beta_0 represents an offset: a constant contribution to the output regardless of the input values. The idea behind such models is that the “truth” can be approximated by a linear relationship between the inputs and the output.

In the case of regression problems, where we want to predict a numerical value from the inputs, one of the simplest and best-known linear models is linear regression. You have most likely done hundreds of linear regressions already (by hand, in Excel, or in Python).
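For instance (a toy example of my own, not from the original post), fitting scikit-learn's `LinearRegression` exposes exactly those beta coefficients as `intercept_` (beta_0) and `coef_` (beta_1 to beta_N):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data generated from a known linear relationship:
# y = 2 + 3*f1 - 1*f2  (no noise, so the betas are recovered exactly)
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = 2 + 3 * X[:, 0] - 1 * X[:, 1]

reg = LinearRegression().fit(X, y)
print(reg.intercept_)  # ~2.0  (beta_0, the offset)
print(reg.coef_)       # ~[3.0, -1.0]  (beta_1, beta_2)

# Predictions are exactly the linear combination beta_0 + X @ beta
assert np.allclose(reg.predict(X), reg.intercept_ + X @ reg.coef_)
```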

In the case of classification problems, where we want to predict a category from the inputs, the simplest and best-known linear model is logistic regression (don't be fooled by its name: despite the word “regression”, it is a classification model).
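A minimal classification counterpart (again an illustrative toy example): `LogisticRegression` learns the same kind of linear combination of the features, then passes it through a sigmoid to produce class probabilities:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy two-class dataset with 4 features
X, y = make_classification(n_samples=200, n_features=4, random_state=0)

clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))        # predicted classes (0 or 1)
print(clf.predict_proba(X[:5]))  # per-class probabilities from the sigmoid
```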
