Updated by Brennan Whitfield | Jun 30, 2023

The loss function is a method of evaluating how well your machine learning algorithm models your data set. In other words, loss functions measure how well your model predicts the expected outcome.

The terms cost function and loss function arise in the same context: the training process that uses backpropagation to minimize the error between actual and predicted outcomes. The difference is one of scope: we calculate the loss function for each sample's output compared to its actual value, whereas the cost function is the average of the loss values over all training samples.
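To make the distinction concrete, here's a minimal NumPy sketch (the values are made up for illustration) that computes a per-sample loss and then averages those losses into a single cost:

```python
import numpy as np

# Hypothetical actual and predicted values for four training samples.
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.4, 2.0, 8.0])

# Loss: computed per sample (here, the squared error of each prediction).
per_sample_loss = (y_true - y_pred) ** 2

# Cost: the average of the per-sample losses over the whole training set.
cost = per_sample_loss.mean()

print(per_sample_loss)  # one loss value per sample
print(cost)             # a single number summarizing the model's error
```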

The loss function is directly related to the predictions of the model you’ve built: a low loss value means the model is producing accurate predictions. Minimizing the loss function (or rather, the cost function) you use to evaluate the model is how you improve its performance.

What Are Loss Functions in Machine Learning?

The loss function is a method of evaluating how well your machine learning algorithm models your data set. In other words, loss functions measure how well your model predicts the expected outcome.

 

Loss Functions

Broadly speaking, loss functions fall into two major categories that mirror the types of problems we come across in the real world: classification and regression. In classification problems, our task is to predict the respective probabilities of all classes the problem deals with. In regression, our task is to predict a continuous value from a given set of independent features supplied to the learning algorithm.

 

Notation for Loss Functions

  • m — the number of training samples
  • i — the ith training sample in the data set
  • y(i) — the actual value for the ith training sample
  • y_hat(i) — the predicted value for the ith training sample


 

Loss Functions for Classification

Types of Classification Losses

  1. Binary Cross-Entropy Loss / Log Loss
  2. Hinge Loss

 

1. Binary Cross-Entropy Loss / Log Loss

This is the most common loss function used in classification problems. The cross-entropy loss decreases as the predicted probability converges to the actual label. It measures the performance of a classification model whose predicted output is a probability value between 0 and 1.

When the number of classes is 2, it’s binary classification.

$$L_{BCE} = -\frac{1}{m}\sum_{i=1}^{m}\Big[\,y^{(i)}\log\big(\hat{y}^{(i)}\big) + \big(1-y^{(i)}\big)\log\big(1-\hat{y}^{(i)}\big)\Big]$$

When the number of classes is more than 2, it’s multi-class classification.

$$L_{CE} = -\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{c} y_j^{(i)}\log\big(\hat{y}_j^{(i)}\big)$$

where c is the number of classes and y_j(i) is 1 if sample i belongs to class j and 0 otherwise.

We derive the cross-entropy loss formula from the likelihood function: taking the negative logarithm of the likelihood turns a product of predicted probabilities into the sum above.
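As a minimal sketch, the binary case is straightforward in NumPy. The function name is illustrative, and the eps clipping is a common numerical-stability guard against log(0), not part of the formula itself:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 so the logarithms stay finite.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    # Average the per-sample log loss over all m samples.
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])          # actual labels
y_pred = np.array([0.9, 0.1, 0.8, 0.4])  # predicted probabilities
print(binary_cross_entropy(y_true, y_pred))  # ~0.34
```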


 

2. Hinge Loss

The second most common loss function used for classification problems and an alternative to the cross-entropy loss function is hinge loss, primarily developed for support vector machine (SVM) model evaluation.

$$L_{hinge} = \frac{1}{m}\sum_{i=1}^{m}\max\big(0,\ 1 - y^{(i)}\,\hat{y}^{(i)}\big)$$

Hinge loss penalizes both wrong predictions and right predictions that are not confident. It’s primarily used with SVM classifiers with class labels of -1 and 1, so make sure you convert any class labels from 0 to -1.
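Here's a minimal NumPy sketch of the averaged hinge loss; the function name and sample values are illustrative, and the scores are assumed to be raw (unthresholded) classifier outputs:

```python
import numpy as np

def hinge_loss(y_true, scores):
    # Labels must be -1 or 1; scores are raw model outputs.
    return np.mean(np.maximum(0.0, 1.0 - y_true * scores))

y_true = np.array([1, -1, 1, -1])          # labels already converted from {0, 1} to {-1, 1}
scores = np.array([0.8, -2.0, -0.3, 0.5])  # raw classifier outputs

# Confident correct predictions (y * score >= 1) contribute 0;
# unconfident or wrong predictions are penalized linearly.
print(hinge_loss(y_true, scores))  # 0.75
```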

 

Loss Functions for Regression 

Types of Regression Losses

  1. Mean Square Error / Quadratic Loss / L2 Loss
  2. Mean Absolute Error / L1 Loss
  3. Huber Loss / Smooth Mean Absolute Error
  4. Log-Cosh Loss
  5. Quantile Loss

 

1. Mean Square Error / Quadratic Loss / L2 Loss

We define the MSE loss function as the average of the squared differences between the actual and the predicted values. It’s the most commonly used regression loss function.

$$MSE = \frac{1}{m}\sum_{i=1}^{m}\big(y^{(i)} - \hat{y}^{(i)}\big)^2$$

The corresponding cost function is the mean of these squared errors (MSE). Because squaring magnifies large errors, the MSE loss function penalizes the model heavily for outliers, which makes it less robust to them. Therefore, you shouldn’t use it if the data is prone to many outliers.
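A minimal NumPy sketch of the formula above (the function name and values are illustrative):

```python
import numpy as np

def mse_loss(y_true, y_pred):
    # Mean of the squared differences over all samples.
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([1.5, 2.0, 3.5])
y_pred = np.array([1.0, 2.5, 3.0])
print(mse_loss(y_true, y_pred))  # 0.25
```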


 

2. Mean Absolute Error / L1 Loss

We define the MAE loss function as the average of the absolute differences between the actual and the predicted values. It’s the second most commonly used regression loss function, and it measures the average magnitude of errors in a set of predictions without considering their directions.

$$MAE = \frac{1}{m}\sum_{i=1}^{m}\big|\,y^{(i)} - \hat{y}^{(i)}\big|$$

The corresponding cost function is the mean of these absolute errors (MAE). The MAE loss function is more robust to outliers than the MSE loss function, so you should consider it if the data is prone to many outliers.
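A minimal NumPy sketch, with a made-up outlier added to show the robustness difference against MSE:

```python
import numpy as np

def mae_loss(y_true, y_pred):
    # Mean of the absolute differences over all samples.
    return np.mean(np.abs(y_true - y_pred))

y_true = np.array([1.5, 2.0, 3.5, 10.0])  # last sample is an outlier
y_pred = np.array([1.0, 2.5, 3.0, 3.0])

print(mae_loss(y_true, y_pred))         # 2.125: grows linearly with the outlier
print(np.mean((y_true - y_pred) ** 2))  # 12.4375: MSE is dominated by the outlier
```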

 

3. Huber Loss / Smooth Mean Absolute Error

The Huber loss function combines the MSE and MAE loss functions: it approaches MSE as δ → 0 and MAE as δ → ∞. In effect, it is mean absolute error that becomes quadratic when the error is small; how small the error must be for that to happen is controlled by a hyperparameter, δ (delta), that you can tune.

$$L_{\delta} = \frac{1}{m}\sum_{i=1}^{m} \begin{cases} \dfrac{1}{2}\big(y^{(i)} - \hat{y}^{(i)}\big)^2 & \text{if } \big|y^{(i)} - \hat{y}^{(i)}\big| \le \delta \\[4pt] \delta\,\big|y^{(i)} - \hat{y}^{(i)}\big| - \dfrac{1}{2}\delta^2 & \text{otherwise} \end{cases}$$

The choice of the delta value is critical because it determines what you’re willing to consider an outlier. Hence, the Huber loss function can be less sensitive to outliers than the MSE loss function, depending on the hyperparameter value, so you can use it if the data is prone to outliers. The trade-off is that you may need to tune the delta hyperparameter, which is an iterative process.
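A minimal NumPy sketch of the piecewise definition above (the function name is illustrative, and delta defaults to 1.0 purely for demonstration):

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    error = y_true - y_pred
    is_small = np.abs(error) <= delta
    squared = 0.5 * error ** 2                      # MSE-like branch for small errors
    linear = delta * (np.abs(error) - 0.5 * delta)  # MAE-like branch for large errors
    return np.mean(np.where(is_small, squared, linear))

y_true = np.array([1.5, 2.0, 3.5, 10.0])
y_pred = np.array([1.0, 2.5, 3.0, 3.0])
print(huber_loss(y_true, y_pred, delta=1.0))  # 1.71875
```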


 

4. Log-Cosh Loss

The log-cosh loss function is defined as the logarithm of the hyperbolic cosine of the prediction error. It’s another function used in regression tasks, and it’s much smoother than MSE loss. It has all the advantages of Huber loss and, unlike Huber loss, it’s twice differentiable everywhere. This matters because some learning algorithms, such as XGBoost, use Newton’s method to find the optimum and therefore need the second derivative (the Hessian).

$$L_{logcosh} = \sum_{i=1}^{m}\log\Big(\cosh\big(\hat{y}^{(i)} - y^{(i)}\big)\Big)$$

log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that log-cosh works mostly like the mean squared error but will not be so strongly affected by the occasional wildly incorrect prediction.
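A minimal NumPy sketch showing both regimes (the function name and values are illustrative; this version averages the per-sample losses):

```python
import numpy as np

def log_cosh_loss(y_true, y_pred):
    # Log of the hyperbolic cosine of the prediction error, averaged over samples.
    return np.mean(np.log(np.cosh(y_pred - y_true)))

# For small errors this behaves like (error ** 2) / 2 ...
print(log_cosh_loss(np.array([1.5, 2.0]), np.array([1.0, 2.5])))  # ~0.12
# ... and for large errors it grows roughly like abs(error) - log(2).
print(log_cosh_loss(np.array([0.0]), np.array([10.0])))           # ~9.31
```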

 

5. Quantile Loss

A quantile is a value below which a given fraction of samples in a group falls. Machine learning models work by minimizing (or maximizing) an objective function, and as the name suggests, we apply the quantile regression loss function to predict quantiles. For a set of predictions, the loss is the average over all samples.

$$L_{\gamma} = \sum_{i:\,y^{(i)} < \hat{y}^{(i)}} (1-\gamma)\,\big|y^{(i)} - \hat{y}^{(i)}\big| \;+ \sum_{i:\,y^{(i)} \ge \hat{y}^{(i)}} \gamma\,\big|y^{(i)} - \hat{y}^{(i)}\big|$$

where γ is the required quantile and has a value between 0 and 1.

The quantile loss function turns out to be useful when we’re interested in predicting an interval instead of only point predictions.
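A minimal NumPy sketch of the formulation above (the function name and values are illustrative):

```python
import numpy as np

def quantile_loss(y_true, y_pred, gamma):
    error = y_true - y_pred
    # Under-predictions (error > 0) are weighted by gamma,
    # over-predictions (error < 0) by (1 - gamma).
    return np.mean(np.maximum(gamma * error, (gamma - 1) * error))

y_true = np.array([1.5, 2.0, 3.5, 4.0])
y_pred = np.array([1.0, 2.5, 3.0, 5.0])

# gamma = 0.9 penalizes under-prediction heavily, pushing fits toward the 90th percentile.
print(quantile_loss(y_true, y_pred, gamma=0.9))
# gamma = 0.5 reduces to half the MAE.
print(quantile_loss(y_true, y_pred, gamma=0.5))
```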

Loss Functions, Explained. | Video: Siraj Raval

 

Why Loss Functions in Machine Learning Are Important

As mentioned, loss functions help gauge how a machine learning model is performing with its given data and how well it’s able to predict an expected outcome. Many machine learning algorithms use loss functions in the optimization process during training to evaluate and improve output accuracy. Minimizing a chosen loss function during optimization also helps determine the best model parameters for the given data.
