Mean absolute error (MAE) is a loss metric for machine learning models.
It goes by several names:
- Mean absolute error
- Mean absolute difference
- L1 loss (from the L1 norm, in math jargon)
Computing MAE
To compute MAE we need to:
- calculate the absolute difference between each prediction and the corresponding actual value;
- compute the average of those absolute differences.
| Actual | Predicted | Error |
|---|---|---|
| 10 | 20 | -10 |
| 23 | 13 | 10 |
| 5 | 6 | -1 |
Total absolute error: 10 + 10 + 1 = 21
Mean absolute error: 21 / 3 = 7
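As a quick sanity check, the table above can be reproduced in a few lines of Python (the numbers come from the table; the variable names are only for illustration):

```python
# Values from the worked example above
actual = [10, 23, 5]
predicted = [20, 13, 6]

# Absolute error per row: 10, 10, 1
absolute_errors = [abs(a - p) for a, p in zip(actual, predicted)]

print("Total absolute error:", sum(absolute_errors))                         # 21
print("Mean absolute error:", sum(absolute_errors) / len(absolute_errors))   # 7.0
```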
Why absolute values?
If we use the raw (signed) differences instead, negative errors can cancel out positive errors:
| Actual | Predicted | Error |
|---|---|---|
| 10 | 20 | -10 |
| 23 | 13 | 10 |
Total error: -10 + 10 = 0
Mean error: 0 / 2 = 0
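A minimal sketch of this cancellation effect, using the two rows above (variable names are illustrative):

```python
# Signed errors from the table above cancel each other out
errors = [10 - 20, 23 - 13]   # [-10, 10]

mean_error = sum(errors) / len(errors)                            # 0.0 (misleading)
mean_absolute_error = sum(abs(e) for e in errors) / len(errors)   # 10.0

print("Mean error:", mean_error)
print("Mean absolute error:", mean_absolute_error)
```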
Python code to compute MAE from scratch
```python
def mae(predictions, targets):
    # Calculate the absolute differences
    absolute_differences = [abs(p - t) for p, t in zip(predictions, targets)]
    # Calculate the mean of the absolute differences
    mean_absolute_error = sum(absolute_differences) / len(absolute_differences)
    return mean_absolute_error

# Example usage
predictions = [3.0, 5.0, 2.5, 6.0]
targets = [3.5, 4.5, 2.0, 5.0]
result = mae(predictions, targets)
print("MAE:", result)  # 0.625
```
Python code to compute MAE with PyTorch
In PyTorch, MAE is available as `l1_loss` from the module `torch.nn.functional`, which the PyTorch team recommends importing as `F`:

```python
F.l1_loss(predictions, targets)
```
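For completeness, here is a small runnable sketch; the tensors reuse the numbers from the from-scratch example, and `F.l1_loss` averages the absolute differences by default (`reduction="mean"`):

```python
import torch
import torch.nn.functional as F

predictions = torch.tensor([3.0, 5.0, 2.5, 6.0])
targets = torch.tensor([3.5, 4.5, 2.0, 5.0])

# l1_loss returns the mean absolute error by default
mae = F.l1_loss(predictions, targets)
print("MAE:", mae.item())  # 0.625
```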
Math formula
$$\text{MAE} = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|$$
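Plugging the values from the worked example above into the formula (with $y_i$ as the actual values and $\hat{y}_i$ as the predictions):

$$\text{MAE} = \frac{|10 - 20| + |23 - 13| + |5 - 6|}{3} = \frac{21}{3} = 7$$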
Specifics
- MAE does not punish large errors as strongly as RMSE / L2 loss, because absolute errors grow linearly rather than quadratically.
- In practice, MAE is used less frequently than RMSE / L2 loss.
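A minimal sketch illustrating the first point (the data is made up for illustration): a single large outlier inflates RMSE much more than MAE, because its error is squared before averaging.

```python
import math

# Illustrative data with one large outlier in the last prediction
actual = [10.0, 12.0, 11.0, 9.0, 10.0]
predicted = [11.0, 11.0, 12.0, 8.0, 40.0]

errors = [p - a for p, a in zip(predicted, actual)]

mae = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e ** 2 for e in errors) / len(errors))

print("MAE: ", mae)   # 6.8   -> the outlier contributes only linearly
print("RMSE:", rmse)  # ~13.4 -> the squared outlier dominates
```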