recall#
- ComparisonReport.metrics.recall(*, data_source='test', average=None, aggregate=('mean', 'std'))[source]#
Compute the recall score.
- Parameters:
- data_source{“test”, “train”}, default=”test”
The data source to use.
“test” : use the test set provided when creating the report.
“train” : use the train set provided when creating the report.
- average{“binary”, “macro”, “micro”, “weighted”, “samples”} or None, default=None
Used with multiclass problems. If None, the metrics for each class are returned. Otherwise, this determines the type of averaging performed on the data:
“binary”: Only report results for the class specified by the report’s pos_label. This is applicable only if targets (y_{true,pred}) are binary.
“micro”: Calculate metrics globally by counting the total true positives, false negatives and false positives.
“macro”: Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
“weighted”: Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters “macro” to account for label imbalance; it can result in an F-score that is not between precision and recall. Weighted recall is equal to accuracy.
“samples”: Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score()).
Note
If the report’s pos_label is specified and average is None, then we report only the statistics of the positive class (i.e. equivalent to average="binary").
- aggregate{“mean”, “std”}, list of such str or None, default=(“mean”, “std”)
Function to aggregate the scores across the cross-validation splits. None will return the scores for each split. Ignored when the comparison is between EstimatorReport instances.
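The averaging options above follow the semantics of scikit-learn's recall_score. As a minimal sketch of what each option computes (using scikit-learn directly rather than the report API):

```python
from sklearn.metrics import recall_score

y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 1, 0, 2]

# average=None: one recall per class -> class 0: 1/2, class 1: 2/3, class 2: 1/1
per_class = recall_score(y_true, y_pred, average=None)

# "macro": unweighted mean of the per-class recalls.
macro = recall_score(y_true, y_pred, average="macro")

# "micro": global TP / (TP + FN); for multiclass this equals accuracy.
micro = recall_score(y_true, y_pred, average="micro")

# "weighted": per-class recalls weighted by support (2, 3 and 1 samples here).
weighted = recall_score(y_true, y_pred, average="weighted")
```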
- Returns:
- pd.DataFrame
The recall score.
Examples
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from skore import evaluate
>>> X, y = load_breast_cancer(return_X_y=True)
>>> estimator_1 = LogisticRegression(max_iter=10000, random_state=42)
>>> estimator_2 = LogisticRegression(max_iter=10000, random_state=43)
>>> comparison_report = evaluate([estimator_1, estimator_2], X, y, splitter=0.2)
>>> comparison_report.metrics.recall()
Estimator               LogisticRegression_1  LogisticRegression_2
Metric Label / Average
Recall 0                            0.978...              0.978...
       1                            0.925...              0.925...
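Two of the behaviors described above can be checked directly with scikit-learn's metrics (this is a standalone illustration, not the report API): “binary” averaging reports recall for the positive class only, and weighted recall coincides with accuracy.

```python
from sklearn.metrics import accuracy_score, recall_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]

# "binary": recall of the class selected by pos_label (1 by default).
pos_recall = recall_score(y_true, y_pred, average="binary")               # 2 of 3 positives found
neg_recall = recall_score(y_true, y_pred, pos_label=0, average="binary")  # 2 of 2 negatives found

# "weighted" recall equals accuracy, as stated in the average parameter description.
weighted = recall_score(y_true, y_pred, average="weighted")
acc = accuracy_score(y_true, y_pred)
```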