Precision
Of all the items the model predicted as positive, the proportion that were actually positive. Precision measures how trustworthy the model's positive predictions are.
Why It Matters
High precision means when the model says 'yes,' you can trust it. This is critical for applications where false alarms are costly, like spam filtering or fraud alerts.
Example
A fraud detection model flags 100 transactions as fraudulent, and 90 of them turn out to be actual fraud. Its precision is 90/100 = 90%.
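The arithmetic in the fraud example can be sketched directly (the counts below are the hypothetical numbers from the example):

```python
# Precision = true positives / (true positives + false positives)
flagged = 100      # transactions the model marked as fraudulent
truly_fraud = 90   # of those flagged, how many were actually fraud

precision = truly_fraud / flagged
print(precision)  # 0.9, i.e. 90% precision
```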
Think of it like...
Like a metal detector at the beach — high precision means when it beeps, there is almost always actually metal there, rather than constantly giving false alarms.
Related Terms
Recall
Of all the actually positive items in the dataset, the proportion that the model correctly identified. Recall measures how completely the model finds all relevant items.
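A minimal sketch of recall over hypothetical labels (the label lists here are invented for illustration):

```python
# Recall = true positives / (true positives + false negatives)
actual    = [1, 1, 1, 1, 0, 0]  # ground-truth labels (1 = positive)
predicted = [1, 1, 0, 1, 0, 1]  # model predictions

true_positives = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
actual_positives = sum(actual)

recall = true_positives / actual_positives
print(recall)  # 3 of the 4 real positives were found -> 0.75
```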
Accuracy
The percentage of correct predictions out of all predictions made by a model. While intuitive, accuracy can be misleading for imbalanced datasets.
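The imbalanced-dataset pitfall is easy to demonstrate with a toy 99:1 class split (hypothetical data): a model that never predicts the positive class still scores 99% accuracy.

```python
# A "model" that always predicts negative on a heavily imbalanced dataset
actual    = [0] * 99 + [1]  # 99 negatives, 1 positive
predicted = [0] * 100       # the model never says "positive"

accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
print(accuracy)  # 0.99 -- looks excellent, yet recall on the positive class is 0
```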
F1 Score
The harmonic mean of precision and recall, providing a single metric that balances both. F1 scores range from 0 to 1, with 1 being perfect precision and recall.
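The harmonic-mean formula can be computed directly; the precision and recall values below are hypothetical:

```python
# F1 = 2 * (precision * recall) / (precision + recall)
precision, recall = 0.9, 0.75  # example values, not from any real model

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.818 -- pulled toward the lower of the two inputs
```

Because it is a harmonic mean, F1 is dragged down sharply when either precision or recall is low, which is exactly why it is used to balance the two.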
Confusion Matrix
A table that summarizes the performance of a classification model by showing true positives, true negatives, false positives, and false negatives. It reveals the types of errors a model makes.
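The four cells of a confusion matrix can be tallied from paired labels; this is a stdlib-only sketch over invented labels:

```python
from collections import Counter

actual    = [1, 0, 1, 1, 0, 0, 1, 0]  # ground truth (hypothetical)
predicted = [1, 0, 0, 1, 0, 1, 1, 0]  # model output (hypothetical)

# Count each (actual, predicted) pair to fill the four cells
counts = Counter(zip(actual, predicted))
tp, tn = counts[(1, 1)], counts[(0, 0)]
fp, fn = counts[(0, 1)], counts[(1, 0)]

print(f"TP={tp} FP={fp} FN={fn} TN={tn}")  # TP=3 FP=1 FN=1 TN=3
```

From these four counts, precision (TP / (TP + FP)), recall (TP / (TP + FN)), and accuracy ((TP + TN) / total) all follow directly.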