add

- EstimatorReport.metrics.add(metric, *, name=None, greater_is_better=True, position='first', **kwargs)
Add a custom metric to be included in summarize() by default.

- Parameters:
- metric : str, sklearn scorer, or callable
  The metric to add.
  If a string, it will be run through sklearn.metrics.get_scorer(). Metrics that require a neg_ prefix (e.g. "neg_mean_squared_error") may also be passed without it (e.g. "mean_squared_error"); the alias is resolved automatically.
  If a callable, it must have the signature (estimator, X, y_true, **kw) -> float. It may also return a dict mapping class labels to floats (e.g. {0: 0.9, 1: 0.85}), in which case summarize() will show one row per class label under the metric name. If your metric has the form (y_true, y_pred, **kw) -> float, see sklearn.metrics.make_scorer() to convert it to a scorer.
- name : str, optional
  Custom name for the metric. If not provided, the name is inferred from the metric (e.g. the function's __name__).
- greater_is_better : bool, default=True
  Whether higher values are better (only for callables).
- position : {"first", "last"}, default="first"
  Where to place the metric in the default summarize() ordering. "first" inserts at the front; repeated "first" additions stack newest-first. "last" appends at the end.
- **kwargs : Any
  Default keyword arguments passed to the score function at call time. Only used when metric is a plain callable.
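A minimal sketch of a plain-callable metric following the (estimator, X, y_true, **kw) -> float signature described above. The metric function, its name, and the threshold keyword are hypothetical; the registration call at the end is commented out and simply mirrors the signature documented here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# A plain callable metric: (estimator, X, y_true, **kw) -> float.
# `threshold` is an ordinary keyword argument; a default for it could be
# supplied via **kwargs when calling add().
def high_confidence_rate(estimator, X, y_true, threshold=0.9):
    """Fraction of samples predicted with class probability >= threshold."""
    proba = estimator.predict_proba(X).max(axis=1)
    return float((proba >= threshold).mean())

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=10_000).fit(X, y)
rate = high_confidence_rate(clf, X, y)  # a float in [0, 1]

# Hypothetical registration, mirroring the parameters above:
# report.metrics.add(high_confidence_rate, name="High Confidence Rate",
#                    greater_is_better=True, position="last", threshold=0.8)
```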
Examples

>>> from sklearn.datasets import load_breast_cancer
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.metrics import make_scorer, mean_absolute_error
>>> from skore import evaluate
>>> X, y = load_breast_cancer(return_X_y=True)
>>> classifier = LogisticRegression(max_iter=10_000)
>>> report = evaluate(classifier, X, y, splitter=0.2, pos_label=1)
>>> report.metrics.add(
...     make_scorer(mean_absolute_error, response_method="predict")
... )
>>> report.metrics.summarize().frame()
                    LogisticRegression
Metric
...
Mean Absolute Error                ...
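A callable may also return a dict of per-class values, which summarize() expands into one row per class label, as described under the metric parameter. A sketch, assuming only scikit-learn; the per_class_recall function is hypothetical:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# A callable returning {class_label: float}; per the docs, summarize()
# would then show one row per class label under the metric name.
def per_class_recall(estimator, X, y_true):
    y_pred = estimator.predict(X)
    labels = sorted(set(y_true))
    return {
        label: float(recall_score(y_true, y_pred, labels=[label], average="macro"))
        for label in labels
    }

X, y = load_breast_cancer(return_X_y=True)
clf = LogisticRegression(max_iter=10_000).fit(X, y)
scores = per_class_recall(clf, X, y)  # e.g. {0: ..., 1: ...}

# Hypothetical registration:
# report.metrics.add(per_class_recall, name="Recall per class")
```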