What is Logistic Regression?

Logistic Regression is a statistical approach to model the relationship between one or more explanatory variables (independent variables) and a categorical target variable.

The name is a misnomer because Logistic “Regression” is actually used for classification tasks.

Logistic Regression is a natural extension of Linear Regression. The difference lies in the final output.

Since Logistic Regression is used for classification, the outputs of the function have to be probabilities of the data point belonging to the positive class. As a result, these values must lie between 0 and 1 rather than taking on any real number (as in Linear Regression).

For Logistic Regression, we pass the output of the linear function through a Sigmoid function, which maps any real value in (-∞, +∞) to a value between 0 and 1.

How does Logistic Regression work?

Logistic Regression uses the same linear framework as Linear Regression, but passes the result through the Sigmoid function. What is a Sigmoid function? 🤨 Let's take a look.

If y is the target variable and the xᵢ’s are the explanatory variables with m such explanatory variables, then, by assuming a linear relationship:

y = sigmoid(w₀ + w₁x₁ + … + wₘxₘ)


sigmoid(z) = 1/(1 + e⁻ᶻ)
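In Python, the Sigmoid function is a one-liner. A minimal sketch using only the standard library:

```python
import math

def sigmoid(z):
    """Map any real number z to a value in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))
```

Note that sigmoid(0) = 0.5, large positive inputs approach 1, and large negative inputs approach 0, which is exactly the range a probability needs.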

The wᵢ's are called coefficients (or weights). Their optimal values are estimated from the available training data so that the predicted y is as close as possible to the true y. For a two-class problem, the true y's are binary values (0 or 1).
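One common way to find these coefficients is gradient descent on the log loss. The sketch below fits a single-variable model on a hypothetical toy dataset (the learning rate and epoch count are illustrative choices, not prescriptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=5000):
    """Fit w0 (intercept) and w1 by gradient descent on the log loss."""
    w0, w1 = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            # (predicted probability - true label) is the gradient of the
            # log loss with respect to the linear output
            err = sigmoid(w0 + w1 * x) - y
            g0 += err
            g1 += err * x
        w0 -= lr * g0 / n
        w1 -= lr * g1 / n
    return w0, w1

# Toy, linearly separable data: small x -> class 0, large x -> class 1
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0, 0, 1, 1]
w0, w1 = fit_logistic(xs, ys)
preds = [1 if sigmoid(w0 + w1 * x) >= 0.5 else 0 for x in xs]
```

In practice you would use a ready-made implementation such as scikit-learn's `LogisticRegression` rather than hand-rolling the optimizer.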

What are the advantages of Logistic Regression?

Logistic Regression is easy to interpret, since the sign and magnitude of each learned coefficient indicate how strongly its explanatory variable influences the final output. This helps justify the classifications made by the model, so the model is not treated as a black box.
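This interpretability has a precise form: increasing an explanatory variable by one unit adds its coefficient w to the linear output, which multiplies the odds p/(1−p) of the positive class by exactly eʷ. A small sketch (the coefficient and linear-output values here are hypothetical):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def odds(p):
    """Odds of the positive class given its probability p."""
    return p / (1.0 - p)

w = 0.7    # hypothetical coefficient of one explanatory variable
z = -1.2   # hypothetical value of the rest of the linear combination

# Adding w to the linear output multiplies the odds by e^w:
ratio = odds(sigmoid(z + w)) / odds(sigmoid(z))
```

Here e^0.7 ≈ 2, so a one-unit increase in that variable roughly doubles the odds of the positive class.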

Logistic Regression trains very quickly, even on huge amounts of data. As a result, the algorithm is used in several real-world classification problems. One such use case is predicting heart disease.

What are the drawbacks of Logistic Regression?

If the positive and negative classes in a data set cannot be separated using a straight line (or a flat hyperplane in the appropriate, higher-dimensional feature space), then Logistic Regression will fail to classify them accurately. Being a simple model, it cannot capture complicated non-linear patterns that distinguish one class from the other.
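The classic illustration of this limitation is XOR, where no straight line separates the two classes, so any linear boundary must misclassify at least one of the four points. A sketch using a simple hand-rolled gradient-descent fit (the hyperparameters are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(points, labels, lr=0.5, epochs=2000):
    """Gradient descent on the log loss for w0 + w1*x1 + w2*x2."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        g = [0.0, 0.0, 0.0]
        for (x1, x2), y in zip(points, labels):
            err = sigmoid(w[0] + w[1] * x1 + w[2] * x2) - y
            g[0] += err
            g[1] += err * x1
            g[2] += err * x2
        w = [wi - lr * gi / len(points) for wi, gi in zip(w, g)]
    return w

# XOR: the two classes sit on opposite diagonals of the unit square
points = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 0]
w = fit(points, labels)
preds = [1 if sigmoid(w[0] + w[1] * x1 + w[2] * x2) >= 0.5 else 0
         for x1, x2 in points]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

No matter how long the model trains, its accuracy on these four points can never reach 100%, because a straight-line boundary gets at most three of them right.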

Similar to Linear Regression, Logistic Regression assumes that the explanatory variables are independent of one another (i.e., little or no multicollinearity), an assumption that is violated in most real-world scenarios.


See also

Articles you might be interested in

  1. Logistic Regression — Detailed overview
  2. Sklearn Logistic Regression documentation
  3. How to implement Logistic Regression from scratch in Python