RMSE is a loss metric for machine learning models.
It goes by two names:
- Root mean squared error (RMSE)
- L2 loss (mathematical jargon; it is closely related to the L2, or Euclidean, norm of the error vector)
Computing RMSE
To compute RMSE we need to:
- calculate the differences between the predictions and the actual values,
- square them,
- compute the average of the squared differences,
- take the square root of the resulting average.
| Actual | Predicted | Error | Squared error |
|---|---|---|---|
| 10 | 20 | -10 | 100 |
| 3 | 8 | -5 | 25 |
| 6 | 1 | 5 | 25 |

Total squared error: 150
Mean squared error: 50
Root mean squared error: 7.07
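Plugging the table's totals into the definition:

$$\text{RMSE} = \sqrt{\frac{100 + 25 + 25}{3}} = \sqrt{50} \approx 7.07$$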
Python code to compute RMSE – scikit-learn
import numpy as np
from sklearn.metrics import mean_squared_error

# mean_squared_error returns the MSE, so we take the square root ourselves
rmse = np.sqrt(mean_squared_error(targets, predictions))
print('RMSE: %f' % rmse)
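Recent scikit-learn releases can also take the square root for you. A minimal sketch, assuming scikit-learn ≥ 1.4, which provides sklearn.metrics.root_mean_squared_error (older versions accept squared=False in mean_squared_error instead); the data values reuse the from-scratch example below:

```python
from sklearn.metrics import root_mean_squared_error

targets = [3.5, 4.5, 2.0, 5.0]       # actual values
predictions = [3.0, 5.0, 2.5, 6.0]   # model outputs

# Returns the RMSE directly, no manual square root needed
rmse = root_mean_squared_error(targets, predictions)
print('RMSE: %f' % rmse)
```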
Python code to compute RMSE – PyTorch
F.mse_loss(predictions, targets).sqrt()
PyTorch's mse_loss computes only the mean squared error, so we have to apply the square root ourselves. The function lives in the torch.nn.functional module, which the PyTorch team recommends importing as F.
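A minimal runnable sketch, assuming PyTorch is installed (the tensor values reuse the example data from the from-scratch section below):

```python
import torch
import torch.nn.functional as F

predictions = torch.tensor([3.0, 5.0, 2.5, 6.0])
targets = torch.tensor([3.5, 4.5, 2.0, 5.0])

# F.mse_loss returns the mean squared error; take the square root for RMSE
rmse = F.mse_loss(predictions, targets).sqrt()
print('RMSE: %f' % rmse.item())
```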
Python code to compute RMSE from scratch
import math

def rmse(predictions, targets):
    # Calculate the squared differences
    squared_differences = [(p - t) ** 2 for p, t in zip(predictions, targets)]
    # Calculate the mean of the squared differences
    mean_squared_error = sum(squared_differences) / len(squared_differences)
    # Return the square root of the mean squared error
    return math.sqrt(mean_squared_error)

# Example usage
predictions = [3.0, 5.0, 2.5, 6.0]
targets = [3.5, 4.5, 2.0, 5.0]
result = rmse(predictions, targets)
print("RMSE:", result)
Math formula
$$\text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$$

where $n$ is the number of samples, $y_i$ is the actual value, and $\hat{y}_i$ is the predicted value.
Specifics
Because all differences are squared, the metric penalizes individual large errors (outliers) very strongly.
To apply a softer penalty, one can use MAE (mean absolute error), also known as L1 loss; see the sketch below.
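For comparison, here is MAE computed in the same from-scratch style on the example data; the helper name mae is just illustrative:

```python
def mae(predictions, targets):
    # Mean of absolute differences; large errors are not amplified by squaring
    return sum(abs(p - t) for p, t in zip(predictions, targets)) / len(predictions)

print("MAE:", mae([3.0, 5.0, 2.5, 6.0], [3.5, 4.5, 2.0, 5.0]))
```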