add

- CrossValidationReport.metrics.add(metric, *, name=None, response_method='predict', greater_is_better=True, **kwargs)
Add a custom metric to be included in summarize() by default.

- Parameters:
- metric : str, sklearn scorer, or callable
The metric to add.
- name : str, optional
Custom name for the metric. If not provided, the name is inferred from the metric (e.g. the function's __name__).
- response_method : str or list of str, default="predict"
Estimator method used to get predictions. Only used when metric is a plain callable.
- greater_is_better : bool, default=True
Whether higher values are better. Only used when metric is a plain callable.
- **kwargs : Any
Default keyword arguments passed to the score function at call time. Only used when metric is a plain callable (see the callable sketch after the Examples below).
Examples
>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import make_scorer, mean_absolute_error
>>> from skore import evaluate
>>> X, y = load_breast_cancer(return_X_y=True)
>>> classifier = LogisticRegression(max_iter=10_000)
>>> report = evaluate(classifier, X, y, splitter=2, pos_label=1)
>>> report.metrics.add(
...     make_scorer(mean_absolute_error, response_method="predict")
... )
>>> report.metrics.summarize().frame()
                     LogisticRegression
                                   mean       std
Metric
...
Mean Absolute Error                 ...       ...
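The example above adds a scikit-learn scorer. When metric is a plain callable instead, the callable-only parameters (name, response_method, greater_is_better, and **kwargs) come into play. A minimal sketch, reusing the report built above and assuming a hypothetical fraction_positive function with the sklearn-style (y_true, y_pred) signature:

>>> import numpy as np
>>> def fraction_positive(y_true, y_pred, *, pos_label=1):
...     # Hypothetical metric: fraction of predictions equal to pos_label.
...     return float(np.mean(np.asarray(y_pred) == pos_label))
>>> report.metrics.add(
...     fraction_positive,
...     name="Fraction Positive",
...     response_method="predict",
...     greater_is_better=True,
...     pos_label=1,
... )

Here pos_label=1 is stored as a default keyword argument and forwarded to fraction_positive each time the metric is computed, as described for **kwargs above.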