ComparisonReport.metrics.roc_auc
- ComparisonReport.metrics.roc_auc(*, data_source='test', X=None, y=None, average=None, multi_class='ovr', aggregate=('mean', 'std'))
- Compute the ROC AUC score.
- Parameters:
- data_source : {"test", "train", "X_y"}, default="test"
- The data source to use.
  - "test" : use the test set provided when creating the report.
  - "train" : use the train set provided when creating the report.
  - "X_y" : use the provided `X` and `y` to compute the metric (see the sketch after the Examples below).
 
- X : array-like of shape (n_samples, n_features), default=None
- New data on which to compute the metric. By default, we use the validation set provided when creating the report. 
- y : array-like of shape (n_samples,), default=None
- New target on which to compute the metric. By default, we use the target provided when creating the report. 
- average : {"auto", "macro", "micro", "weighted", "samples"}, default=None
- Average to compute the ROC AUC score in a multiclass setting. By default, no average is computed. Otherwise, this determines the type of averaging performed on the data.
  - "micro": Calculate metrics globally by considering each element of the label indicator matrix as a label.
  - "macro": Calculate metrics for each label, and find their unweighted mean. This does not take label imbalance into account.
  - "weighted": Calculate metrics for each label, and find their average, weighted by support (the number of true instances for each label).
  - "samples": Calculate metrics for each instance, and find their average.
- Note: Multiclass ROC AUC currently only handles the "macro" and "weighted" averages. For multiclass targets, `average=None` is only implemented for `multi_class="ovr"`, and `average="micro"` is only implemented for `multi_class="ovr"`. A multiclass sketch follows the multi_class entry below.
- multi_class : {"raise", "ovr", "ovo"}, default="ovr"
- The multi-class strategy to use.
  - "raise": Raise an error if the data is multiclass.
  - "ovr": Stands for One-vs-rest. Computes the AUC of each class against the rest. This treats the multiclass case in the same way as the multilabel case. Sensitive to class imbalance even when `average == "macro"`, because class imbalance affects the composition of each of the "rest" groupings.
  - "ovo": Stands for One-vs-one. Computes the average AUC of all possible pairwise combinations of classes. Insensitive to class imbalance when `average == "macro"`.
 
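The sketch below is not part of the original docstring; it illustrates average and multi_class on a multiclass problem. It assumes the same EstimatorReport/ComparisonReport workflow as the Examples section further down, applied to the iris dataset, and the report names are placeholders.

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from skore import ComparisonReport, EstimatorReport
>>> X_iris, y_iris = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X_iris, y_iris, random_state=42
... )
>>> report_1 = EstimatorReport(
...     LogisticRegression(max_iter=10000),
...     X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test,
... )
>>> report_2 = EstimatorReport(
...     LogisticRegression(max_iter=10000, C=0.1),
...     X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test,
... )
>>> multiclass_comparison = ComparisonReport([report_1, report_2])
>>> # Macro-averaged one-vs-rest ROC AUC for each compared estimator.
>>> macro_ovr_scores = multiclass_comparison.metrics.roc_auc(
...     average="macro", multi_class="ovr"
... )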
- aggregate : {"mean", "std"}, list of such str, or None, default=("mean", "std")
- Function to aggregate the scores across the cross-validation splits. None returns the scores for each split. Ignored when the comparison is between :class:`~skore.EstimatorReport` instances; see the cross-validation sketch below.
 
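For cross-validated comparisons, aggregate controls how the per-split scores are combined. The sketch below is not part of the original docstring; it assumes skore's CrossValidationReport can be built as CrossValidationReport(estimator, X, y) and passed to ComparisonReport in the same way as estimator reports.

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from skore import ComparisonReport, CrossValidationReport
>>> X, y = load_breast_cancer(return_X_y=True)
>>> cv_report_1 = CrossValidationReport(LogisticRegression(max_iter=10000), X, y)
>>> cv_report_2 = CrossValidationReport(LogisticRegression(max_iter=10000, C=0.1), X, y)
>>> cv_comparison = ComparisonReport([cv_report_1, cv_report_2])
>>> # aggregate=None keeps one ROC AUC value per cross-validation split
>>> # instead of the default ("mean", "std") summary.
>>> per_split_scores = cv_comparison.metrics.roc_auc(aggregate=None)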
- Returns:
- pd.DataFrame
- The ROC AUC score. 
 
- Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import train_test_split
>>> from skore import ComparisonReport, EstimatorReport
>>> X, y = load_breast_cancer(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
>>> estimator_1 = LogisticRegression(max_iter=10000, random_state=42)
>>> estimator_report_1 = EstimatorReport(
...     estimator_1,
...     X_train=X_train,
...     y_train=y_train,
...     X_test=X_test,
...     y_test=y_test,
... )
>>> estimator_2 = LogisticRegression(max_iter=10000, random_state=43)
>>> estimator_report_2 = EstimatorReport(
...     estimator_2,
...     X_train=X_train,
...     y_train=y_train,
...     X_test=X_test,
...     y_test=y_test,
... )
>>> comparison_report = ComparisonReport(
...     [estimator_report_1, estimator_report_2]
... )
>>> comparison_report.metrics.roc_auc()
Estimator  LogisticRegression_1  LogisticRegression_2
Metric
ROC AUC                 0.99...               0.99...
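As a follow-up sketch (not part of the original docstring), the same comparison can be scored on data passed explicitly via data_source="X_y". Here the test split from the example above is reused purely for illustration; any array-like of matching shape would work.

>>> # Compute the metric on externally provided data instead of the stored splits.
>>> scores_on_new_data = comparison_report.metrics.roc_auc(
...     data_source="X_y", X=X_test, y=y_test
... )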