summarize

ComparisonReport.metrics.summarize(*, data_source='test', metric=None)

Report a set of metrics for the estimators.

Parameters:
data_source : {"test", "train", "both"}, default="test"

The data source to use.

  • “test” : use the test set provided when creating the report.

  • “train” : use the train set provided when creating the report.

  • “both” : use both the train and test sets to compute the metrics and present them side-by-side.

metric : str or list of str or None, default=None

The metrics to report, chosen from the registered metrics. If None, all registered metrics are reported. To add a custom metric, see add().

Returns:
MetricsSummaryDisplay

A display object containing the computed metric values.

Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from skore import evaluate
>>> X, y = load_breast_cancer(return_X_y=True)
>>> estimator_1 = LogisticRegression(max_iter=10000, random_state=42)
>>> estimator_2 = LogisticRegression(max_iter=10000, random_state=43)
>>> comparison_report = evaluate(
...     [estimator_1, estimator_2], X, y, splitter=0.2, pos_label=1
... )
>>> comparison_report.metrics.summarize(metric=["precision", "recall"]).frame()
Estimator       LogisticRegression_1  LogisticRegression_2
Metric
Precision                    0.98...               0.98...
Recall                       0.92...               0.92...
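With data_source="both", the returned frame places train and test results side-by-side. The following is a minimal sketch in plain pandas of what such a layout can look like: the estimator names, column nesting, and all metric values here are made up for illustration, not produced by skore.

```python
import pandas as pd

# Hypothetical sketch of a side-by-side "both" table: one column level for
# the estimator, one for the data source. Values are invented.
columns = pd.MultiIndex.from_tuples(
    [
        ("LogisticRegression_1", "train"),
        ("LogisticRegression_1", "test"),
        ("LogisticRegression_2", "train"),
        ("LogisticRegression_2", "test"),
    ],
    names=["Estimator", "Data source"],
)
frame = pd.DataFrame(
    [[0.99, 0.98, 0.99, 0.98],   # Precision row
     [0.95, 0.92, 0.95, 0.92]],  # Recall row
    index=pd.Index(["Precision", "Recall"], name="Metric"),
    columns=columns,
)
print(frame)
```

Each metric then occupies a single row, so a drop from the train column to the test column for the same estimator is easy to read off as a sign of overfitting.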