CrossValidationReport.metrics.recall
CrossValidationReport.metrics.recall(*, data_source='test', X=None, y=None, average=None, pos_label=None, aggregate=('mean', 'std'))
Compute the recall score.

Parameters:
data_source : {"test", "train", "X_y"}, default="test"
The data source to use.
- "test" : use the test set provided when creating the report.
- "train" : use the train set provided when creating the report.
- "X_y" : use the provided X and y to compute the metric.
 
X : array-like of shape (n_samples, n_features), default=None
New data on which to compute the metric. By default, we use the validation set provided when creating the report.

y : array-like of shape (n_samples,), default=None
New target on which to compute the metric. By default, we use the target provided when creating the report.
average : {"binary", "macro", "micro", "weighted", "samples"} or None, default=None
Used with multiclass problems. If None, the metrics for each class are returned. Otherwise, this determines the type of averaging performed on the data:
- "binary": Only report results for the class specified by pos_label. This is applicable only if targets (y_{true,pred}) are binary.
- "micro": Calculate metrics globally by counting the total true positives, false negatives and false positives.
- "macro": Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
- "weighted": Calculate metrics for each label, and find their average weighted by support (the number of true instances for each label). This alters "macro" to account for label imbalance; it can result in an F-score that is not between precision and recall. Weighted recall is equal to accuracy.
- "samples": Calculate metrics for each instance, and find their average (only meaningful for multilabel classification where this differs from accuracy_score()).

Note: If pos_label is specified and average is None, then we report only the statistics of the positive class (i.e. equivalent to average="binary").
pos_label : int, float, bool or str, default=None
The positive class.
aggregate : {"mean", "std"}, list of such str or None, default=("mean", "std")
Function to aggregate the scores across the cross-validation splits. None will return the scores for each split, as shown in the sketch after this parameter list.
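A minimal sketch of how these parameters combine, continuing from the fitted report built in the Examples section below; X_new and y_new stand for hypothetical held-out arrays and are not part of this API:

>>> # Per-split scores instead of the default ("mean", "std") aggregation:
>>> per_split = report.metrics.recall(aggregate=None)
>>> # Recall on the train folds rather than the test folds:
>>> train_recall = report.metrics.recall(data_source="train")
>>> # Recall on external data (X_new / y_new are hypothetical here):
>>> external_recall = report.metrics.recall(data_source="X_y", X=X_new, y=y_new)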
 
Returns:
pd.DataFrame
The recall score.
 
Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from skore import CrossValidationReport
>>> X, y = load_breast_cancer(return_X_y=True)
>>> classifier = LogisticRegression(max_iter=10_000)
>>> report = CrossValidationReport(classifier, X=X, y=y, cv_splitter=2)
>>> report.metrics.recall()
                       LogisticRegression
                                     mean      std
Metric Label / Average
Recall 0                          0.91...  0.04...
       1                          0.96...  0.02...
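As a hedged follow-up sketch reusing the report above: since the breast cancer target is binary, passing pos_label while leaving average as None restricts the output to the positive class, equivalent to average="binary" per the Note under average.

>>> # Report only the positive class (equivalent to average="binary"):
>>> recall_pos = report.metrics.recall(pos_label=1)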