7 Common Loss Functions in Machine Learning

Every machine learning engineer should know about these common loss functions and when to use them.

Written by Sparsh Gupta
Updated by Brennan Whitfield | Dec 13, 2024

A loss function is a method of evaluating how well your machine learning algorithm models your featured data set. In other words, loss functions are a measurement of how good your model is at predicting outcomes. 

What Are Loss Functions in Machine Learning?

A loss function (or error function) in machine learning is a mathematical function that measures the difference between a model’s predicted outputs and the actual target values of a featured data set.

The terms cost function and loss function arise in the same context (i.e. the training process that uses backpropagation to minimize the error between the actual and predicted outcome), but they differ in scope: we calculate the loss function for each sample output compared to its actual value, whereas we calculate the cost function as the average of all the loss function values.
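To make the distinction concrete, here's a minimal NumPy sketch (the numbers are made up for illustration, not from a real model): the loss is computed per sample, and the cost is the average of those losses.

```python
import numpy as np

# Illustrative actual and predicted values for four samples
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

per_sample_loss = (y_true - y_pred) ** 2  # one loss value per sample
cost = per_sample_loss.mean()             # cost = average of the losses

print(per_sample_loss)  # [0.25 0.25 0.   1.  ]
print(cost)             # 0.375
```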

The loss function is directly related to the predictions of the model you’ve built: a low loss value means the model’s predictions closely match the actual targets. The loss function (or rather, the cost function) you use to evaluate the model’s performance needs to be minimized to improve that performance.

 

Loss Functions, Explained. | Video: Siraj Raval

Loss Functions

Broadly speaking, loss functions can be grouped into two major categories concerning the types of problems we come across in the real world: classification and regression. In classification problems, our task is to predict the respective probabilities of all classes the problem is dealing with. In regression problems, our task is to predict a continuous value from a given set of independent features.

Assumptions of Loss Functions

  • n/m — number of training samples
  • i — ith training sample in a data set
  • y(i) — Actual value for the ith training sample
  • ŷ(i) — Predicted value for the ith training sample

 

Loss Functions for Classification

Types of Classification Losses

  1. Binary Cross-Entropy Loss / Log Loss
  2. Hinge Loss

1. Binary Cross-Entropy Loss / Log Loss

This is the most common loss function used in classification problems. The cross-entropy loss decreases as the predicted probability converges to the actual label. It measures the performance of a classification model whose predicted output is a probability value between 0 and 1.

When the number of classes is 2, it’s binary classification.

BCE = -(1/n) * Σ [yᵢ * log(ŷᵢ) + (1 - yᵢ) * log(1 - ŷᵢ)]

When the number of classes is more than 2, it’s multi-class classification.

CCE = -(1/n) * Σᵢ Σⱼ yᵢⱼ * log(ŷᵢⱼ), where j runs over the classes and yᵢⱼ is 1 only for the true class of sample i

We derive the cross-entropy loss formula from the regular likelihood function, but with logarithms added in.
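As a rough sketch of how binary cross-entropy is computed in practice (the `eps` clipping guard and the sample values are my additions, included to avoid taking log(0)):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Average negative log-likelihood of the true labels (0 or 1)."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.1, 0.8, 0.6])  # predicted probabilities of class 1
print(binary_cross_entropy(y_true, y_pred))  # low loss: predictions mostly agree
```

Note that the loss shrinks as the predicted probabilities converge to the true labels, which is exactly the behavior described above.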

2. Hinge Loss

The second most common loss function used for classification problems and an alternative to the cross-entropy loss function is hinge loss, primarily developed for support vector machine (SVM) model evaluation.

Hinge = (1/n) * Σ max(0, 1 - yᵢ * ŷᵢ)

Hinge loss penalizes wrong predictions as well as right predictions that are not confident. It’s primarily used with SVM classifiers with class labels as -1 and 1, so make sure you change any class labels from 0 to -1 before using it.
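A minimal sketch of hinge loss with NumPy (the scores are illustrative raw SVM margins, not output of a trained model); note how a correct but unconfident prediction still incurs a penalty:

```python
import numpy as np

def hinge_loss(y_true, scores):
    """Mean hinge loss; labels must be -1 or 1, scores are raw margins."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

y_true = np.array([1, -1, 1, -1])      # labels already mapped from {0, 1} to {-1, 1}
scores = np.array([2.0, -0.5, 0.3, 1.0])
# sample 1: confident and correct -> 0 loss
# sample 3: correct but unconfident -> small loss
# sample 4: confidently wrong -> large loss
print(hinge_loss(y_true, scores))  # 0.8
```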


 

Loss Functions for Regression 

Types of Regression Losses

  1. Mean Squared Error / Quadratic Loss / L2 Loss
  2. Mean Absolute Error / L1 Loss
  3. Huber Loss / Smooth Mean Absolute Error
  4. Log-Cosh Loss
  5. Quantile Loss

1. Mean Squared Error / Quadratic Loss / L2 Loss

We define the mean squared error (MSE) loss function, or L2 loss, as the average of squared differences between the actual value (Y) and the predicted value (Ŷ). It’s the most commonly used regression loss function.

MSE = (1/n) * Σ(yᵢ - ŷᵢ)²

The corresponding cost function is the mean of these squared errors (MSE). The MSE loss function penalizes the model for making large errors by squaring them, making the MSE cost function less robust to outliers. Therefore, you shouldn’t use it if the data is prone to many outliers.
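A quick NumPy sketch of MSE (the values are illustrative), with one deliberately bad prediction to show how a single outlier dominates the squared loss:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error (L2 loss)."""
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 9.0])  # last prediction is a large miss
# The single 5.0 error contributes 25 of the ~25.06 total squared error.
print(mse(y_true, y_pred))
```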

2. Mean Absolute Error / L1 Loss

We define the mean absolute error (MAE) loss function, or L1 loss, as the average of absolute differences between the actual and the predicted value. It’s the second most commonly used regression loss function. It measures the average magnitude of errors in a set of predictions, without considering their directions.

MAE = (1/n) * Σ|yᵢ - ŷᵢ|

The corresponding cost function is the mean of these absolute errors (MAE). The MAE loss function is more robust to outliers compared to the MSE loss function. Therefore, you should use it if the data is prone to many outliers.
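The same kind of sketch for MAE (illustrative values, with one deliberately bad prediction): the outlier now contributes only linearly, which is why MAE is the more robust choice here.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error (L1 loss)."""
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 9.0])  # last prediction is a large miss
# The 5.0 error contributes 5 (not 25) to the total, so it dominates far less.
print(mae(y_true, y_pred))
```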

3. Huber Loss / Smooth Mean Absolute Error

The Huber loss function is defined as a combination of the MSE and MAE loss functions: it approaches MAE as δ → 0 and MSE as δ → ∞. In essence, it is mean absolute error that becomes quadratic when the error is small. How small an error must be to get the quadratic treatment is controlled by a hyperparameter, δ (delta), that you can tune.

Lδ = ½ * (yᵢ - ŷᵢ)²  if |yᵢ - ŷᵢ| ≤ δ
Lδ = δ * |yᵢ - ŷᵢ| - ½ * δ²  otherwise

The choice of the delta value is critical because it determines what you’re willing to consider an outlier. Hence, the Huber loss function can be less sensitive to outliers than the MSE loss function, depending on the hyperparameter value. Therefore, you can use the Huber loss function if the data is prone to outliers. The trade-off is that you may need to tune the hyperparameter delta, which is an iterative process.
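A sketch of Huber loss in NumPy (illustrative data; `delta=1.0` is an arbitrary choice for the demo): small residuals get the quadratic branch, the outlier gets the linear branch.

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    """Quadratic for residuals within delta, linear beyond it."""
    r = y_true - y_pred
    small = np.abs(r) <= delta
    return np.mean(np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta)))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 9.0])
# Residuals 0.1, 0.1, 0.2 are quadratic; the 5.0 outlier is only linear.
print(huber(y_true, y_pred, delta=1.0))
```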

4. Log-Cosh Loss

The log-cosh loss function is defined as the logarithm of the hyperbolic cosine of the prediction error. It’s another function used in regression tasks, and it’s much smoother than MSE loss. It has all the advantages of Huber loss, with one extra benefit: unlike Huber loss, it’s twice differentiable everywhere. That matters because some learning algorithms, such as XGBoost, use Newton’s method to find the optimum, and hence need the second derivative (the Hessian).

Log-Cosh = Σ log(cosh(ŷᵢ - yᵢ))

Log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that log-cosh works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
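A sketch of log-cosh loss in NumPy (illustrative data; the rewritten identity is my addition, used because `np.cosh` overflows for large residuals):

```python
import numpy as np

def log_cosh(y_true, y_pred):
    """Mean log-cosh loss, using the numerically stable identity
    log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2)."""
    x = np.abs(y_pred - y_true)
    return np.mean(x + np.log1p(np.exp(-2 * x)) - np.log(2))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 9.0])
# Small residuals behave like x**2 / 2; the 5.0 outlier contributes
# roughly 5 - log(2), i.e. linearly, much like MAE.
print(log_cosh(y_true, y_pred))
```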

5. Quantile Loss

A quantile is a value below which a fraction of samples in a group falls. Machine learning models work by minimizing (or maximizing) an objective function. As the name suggests, we apply the quantile regression loss function to predict quantiles. For a set of predictions, the loss will be its average.

Lγ = Σ (1 - γ) * |yᵢ - ŷᵢ| over samples where yᵢ < ŷᵢ  +  Σ γ * |yᵢ - ŷᵢ| over samples where yᵢ ≥ ŷᵢ

The quantile loss function turns out to be useful when we’re interested in predicting an interval instead of only point predictions.
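A sketch of quantile loss in NumPy (illustrative data; `gamma=0.9` is an arbitrary choice targeting the 90th percentile): under-predictions are penalized by γ and over-predictions by 1 - γ, which is what pushes the fitted value toward the chosen quantile.

```python
import numpy as np

def quantile_loss(y_true, y_pred, gamma=0.9):
    """Mean pinball loss for quantile gamma; with gamma > 0.5,
    under-prediction costs more than over-prediction."""
    r = y_true - y_pred
    # gamma * r when r >= 0 (under-prediction), (gamma - 1) * r when r < 0
    return np.mean(np.maximum(gamma * r, (gamma - 1) * r))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([0.5, 2.5, 2.0])
print(quantile_loss(y_true, y_pred, gamma=0.9))
```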


 

Why Loss Functions in Machine Learning Are Important

Loss functions help gauge how a machine learning model is performing on its given data and how well it’s able to predict an expected outcome. Many machine learning algorithms use loss functions in the optimization process during training to evaluate and improve output accuracy. Minimizing a chosen loss function during optimization also helps determine the best model parameters for the given data.

Frequently Asked Questions

What is a loss function in machine learning?

A loss function is a mathematical function that evaluates how well a machine learning algorithm models a featured data set. Loss functions measure the degree of error between a model’s outputs and the actual target values of the featured data set.

What is an example of a loss function?

Mean squared error (MSE) is a common example of a loss function used in machine learning, often to evaluate regression tasks. MSE calculates the mean squared difference between actual values and predicted values, so the loss grows quadratically as model error increases.

What is the formula for the MSE loss function?

The formula for the mean squared error (MSE) loss function is:

MSE = (1/n) * Σ(yᵢ - ŷᵢ)²
