add

EstimatorReport.metrics.add(metric, *, name=None, response_method='predict', greater_is_better=True, **kwargs)

Add a custom metric to be included in summarize() by default.

Parameters:
metric : str, sklearn scorer, or callable

The metric to add.

  • If a string, it will be run through sklearn.metrics.get_scorer(). Metrics that require a neg_ prefix (e.g. "neg_mean_squared_error") may also be passed without it (e.g. "mean_squared_error"); the alias is resolved automatically.

  • If an sklearn scorer (e.g. built with sklearn.metrics.make_scorer()), it is used as-is.

  • If a callable, it must have the signature (y_true, y_pred, **kw) -> float. It may also return a dict mapping class labels to floats (e.g. {0: 0.9, 1: 0.85}), in which case summarize() will show one row per class label under the metric name.
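
As a sketch of the dict-returning form, a per-class callable built on sklearn.metrics.recall_score could look like the following (the name per_class_recall and the toy labels are illustrative, not part of the API):

```python
from sklearn.metrics import recall_score

def per_class_recall(y_true, y_pred, **kw):
    # Return one recall value per class label, keyed by label,
    # so summarize() can show one row per class.
    labels = sorted(set(y_true))
    scores = recall_score(y_true, y_pred, labels=labels, average=None)
    return {label: float(score) for label, score in zip(labels, scores)}

per_class_recall([0, 0, 1, 1], [0, 1, 1, 1])  # {0: 0.5, 1: 1.0}
```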

name : str, optional

Custom name for the metric. If not provided, the name is inferred from the metric (e.g. the function’s __name__).

response_method : str or list of str, default="predict"

The estimator method used to obtain predictions. Only used when metric is a plain callable.
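
For example, a callable meant to consume probabilities rather than hard labels would be registered with response_method="predict_proba"; a minimal sketch of such a callable (the function name is hypothetical):

```python
import numpy as np

def mean_true_class_probability(y_true, y_pred, **kw):
    # With response_method="predict_proba", y_pred is the
    # (n_samples, n_classes) probability matrix, not hard labels.
    return float(np.mean(y_pred[np.arange(len(y_true)), y_true]))

y_true = np.array([0, 1])
proba = np.array([[0.8, 0.2], [0.3, 0.7]])
mean_true_class_probability(y_true, proba)  # 0.75
```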

greater_is_better : bool, default=True

Whether higher values are better (only for callables).

**kwargs : Any

Default keyword arguments passed to the score function at call time. Only used when metric is a plain callable.
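
Keyword defaults stored this way are forwarded on every call, so the effect is the same as calling the score function with them directly. As a sketch, registering sklearn.metrics.fbeta_score with a stored beta (the report.metrics.add(...) line in the comment is a hypothetical usage) would score as follows:

```python
from sklearn.metrics import fbeta_score

# Hypothetical registration: report.metrics.add(fbeta_score, name="F2", beta=2)
# would store beta=2 and forward it at scoring time, equivalent to:
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
score = fbeta_score(y_true, y_pred, beta=2)  # recall-weighted F-score
```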

Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import make_scorer, mean_absolute_error
>>> from skore import evaluate
>>> X, y = load_breast_cancer(return_X_y=True)
>>> classifier = LogisticRegression(max_iter=10_000)
>>> report = evaluate(classifier, X, y, splitter=0.2, pos_label=1)
>>> report.metrics.add(
...     make_scorer(mean_absolute_error, response_method="predict")
... )
>>> report.metrics.summarize().frame()
                    LogisticRegression
Metric
                                   ...
Mean Absolute Error                ...