.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/model_evaluation/plot_custom_metrics.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_model_evaluation_plot_custom_metrics.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_model_evaluation_plot_custom_metrics.py:

.. _example_custom_metrics:

=======================================================
Adapt skore to your use-case by adding your own metrics
=======================================================

By default, :meth:`~skore.EstimatorReport.metrics.summarize` reports a curated
set of metrics for your ML task. In practice, you often need domain-specific
scores: a business cost function, a custom fairness measure, an F-beta score
with a particular beta, etc. This example walks through how to register such
metrics with :meth:`~skore.EstimatorReport.metrics.add` so that they are
computed and displayed alongside the built-in ones.

.. GENERATED FROM PYTHON SOURCE LINES 19-21

Setting up a classification problem
===================================

.. GENERATED FROM PYTHON SOURCE LINES 23-29

.. code-block:: Python

    import skore
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)

.. GENERATED FROM PYTHON SOURCE LINES 30-33

We create an :class:`~skore.EstimatorReport` through :func:`~skore.evaluate`
using a simple train/test split. ``pos_label=1`` marks the *benign* class as
the positive class (in ``load_breast_cancer``, label 0 is malignant and label 1
is benign).

.. GENERATED FROM PYTHON SOURCE LINES 35-39

.. code-block:: Python

    report = skore.evaluate(
        LogisticRegression(max_iter=10_000), X, y, pos_label=1, splitter=0.2
    )

.. GENERATED FROM PYTHON SOURCE LINES 40-41

Let's look at the default metrics:

.. GENERATED FROM PYTHON SOURCE LINES 41-43

.. code-block:: Python

    report.metrics.summarize().frame()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                          LogisticRegression
    Metric
    Accuracy                        0.947368
    Precision                       0.984127
    Recall                          0.925373
    ROC AUC                         0.993649
    Log loss                        0.110247
    Brier score                     0.036154
    Fit time (s)                    0.166261
    Predict time (s)                0.000093
.. GENERATED FROM PYTHON SOURCE LINES 44-53

Adding a plain callable
=======================

Any function with the signature ``(estimator, X, y, **kwargs) -> score`` can be
registered with :meth:`~skore.EstimatorReport.metrics.add`. The function name
is used as the metric name by default. If your metric is instead expressed as a
callable with the signature ``(y_true, y_pred, **kwargs) -> score``, you can
use sklearn's ``make_scorer`` utility function to convert it.

.. GENERATED FROM PYTHON SOURCE LINES 56-68

.. code-block:: Python

    from sklearn.metrics import make_scorer


    def specificity(y_true, y_pred):
        """Proportion of true negatives among actual negatives."""
        tn = ((y_true == 0) & (y_pred == 0)).sum()
        fp = ((y_true == 0) & (y_pred == 1)).sum()
        return tn / (tn + fp)


    report.metrics.add(make_scorer(specificity))

.. GENERATED FROM PYTHON SOURCE LINES 69-71

.. code-block:: Python

    report.metrics.summarize().frame()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                          LogisticRegression
    Metric
    Specificity                     0.978723
    Accuracy                        0.947368
    Precision                       0.984127
    Recall                          0.925373
    ROC AUC                         0.993649
    Log loss                        0.110247
    Brier score                     0.036154
    Fit time (s)                    0.166261
    Predict time (s)                0.000093
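The ``make_scorer`` route above wraps a ``(y_true, y_pred)`` function; the
first signature, ``(estimator, X, y, **kwargs) -> score``, needs no wrapper at
all. A minimal sketch of the same specificity metric in that form (the name
``specificity_callable`` and the separately fitted estimator are our own
illustration, not part of the example above):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score


def specificity_callable(estimator, X, y):
    """Specificity written against the (estimator, X, y) signature."""
    y_pred = estimator.predict(X)
    tn = ((y == 0) & (y_pred == 0)).sum()
    fp = ((y == 0) & (y_pred == 1)).sum()
    return tn / (tn + fp)


X, y = load_breast_cancer(return_X_y=True)
est = LogisticRegression(max_iter=10_000).fit(X, y)

# Specificity is exactly the recall of the negative class, so sklearn's
# recall_score with pos_label=0 provides a cross-check:
assert np.isclose(
    specificity_callable(est, X, y),
    recall_score(y, est.predict(X), pos_label=0),
)
```

Per the ``add`` behaviour described above, registering it would be
``report.metrics.add(specificity_callable)``, with no ``make_scorer`` call.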
.. GENERATED FROM PYTHON SOURCE LINES 72-73

``specificity`` now appears alongside the built-in metrics.

.. GENERATED FROM PYTHON SOURCE LINES 75-84

Passing extra keyword arguments
===============================

If your metric needs extra data at scoring time (e.g. sample-level amounts or a
cost matrix), pass it as keyword arguments to
:meth:`~skore.EstimatorReport.metrics.add`; they are forwarded to the metric
function when it is computed. Alternatively, if the metric takes ``y_true`` and
``y_pred``, the keyword arguments can be passed to ``make_scorer``:

.. GENERATED FROM PYTHON SOURCE LINES 86-93

.. code-block:: Python

    from sklearn.metrics import fbeta_score, make_scorer

    f2_scorer = make_scorer(fbeta_score, beta=2, pos_label=1)
    report.metrics.add(f2_scorer, name="f2")
    report.metrics.summarize().frame()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                          LogisticRegression
    Metric
    F2                              0.936556
    Specificity                     0.978723
    Accuracy                        0.947368
    Precision                       0.984127
    Recall                          0.925373
    ROC AUC                         0.993649
    Log loss                        0.110247
    Brier score                     0.036154
    Fit time (s)                    0.166261
    Predict time (s)                0.000093
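The introduction mentions business cost functions; here is a hedged sketch of
one that receives per-error costs through ``make_scorer`` keyword arguments
(``business_cost``, its default costs, and the ``fn_cost=10.0`` override are
invented for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, make_scorer


def business_cost(y_true, y_pred, fp_cost=1.0, fn_cost=5.0):
    """Total cost of errors, pricing false positives and negatives differently."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fp * fp_cost + fn * fn_cost


# Lower cost is better, so flip the sign with greater_is_better=False;
# fn_cost=10.0 is forwarded to business_cost at scoring time.
cost_scorer = make_scorer(business_cost, greater_is_better=False, fn_cost=10.0)

y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0, 1, 1, 0])  # one false positive, one false negative
business_cost(y_true, y_pred)  # 1 * 1.0 + 1 * 5.0 = 6.0
```

Registering it would follow the same pattern as ``f2_scorer`` above, e.g.
``report.metrics.add(cost_scorer, name="cost")``.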
.. GENERATED FROM PYTHON SOURCE LINES 94-99

Cherry-picking metrics to display
=================================

Once registered, custom metrics can be selected by name in
:meth:`~skore.EstimatorReport.metrics.summarize`:

.. GENERATED FROM PYTHON SOURCE LINES 101-103

.. code-block:: Python

    report.metrics.summarize(metric=["specificity", "f2"]).frame()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                 LogisticRegression
    Metric
    Specificity            0.978723
    F2                     0.936556
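As a sanity check, the ``F2`` value in the table can be recomputed by hand from
the precision and recall reported in the first table, using the standard F-beta
formula:

```python
# Values copied from the summarize() tables above.
precision, recall, beta = 0.984127, 0.925373, 2

# F-beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall)
f2 = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
round(f2, 6)  # 0.936556, matching the "F2" row
```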
.. GENERATED FROM PYTHON SOURCE LINES 104-105

Selecting ``data_source="both"`` lets you compare train vs. test in one call:

.. GENERATED FROM PYTHON SOURCE LINES 107-109

.. code-block:: Python

    report.metrics.summarize(metric=["specificity", "f2"], data_source="both").frame()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                 LogisticRegression (train)  LogisticRegression (test)
    Metric
    Specificity                    0.933333                   0.978723
    F2                             0.975945                   0.936556
.. GENERATED FROM PYTHON SOURCE LINES 110-115

Using a different response method
=================================

By default, callables receive the output of ``estimator.predict(X)``. If your
metric needs probabilities instead, set ``response_method="predict_proba"``:

.. GENERATED FROM PYTHON SOURCE LINES 117-128

.. code-block:: Python

    import numpy as np


    def mean_confidence(y_true, y_proba):
        """Average predicted probability assigned to the true class."""
        return np.where(y_true == 1, y_proba[:, 1], y_proba[:, 0]).mean()


    report.metrics.add(make_scorer(mean_confidence, response_method="predict_proba"))
    report.metrics.summarize(metric="mean_confidence").frame()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                     LogisticRegression
    Metric
    Mean Confidence            0.931087
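To make concrete what ``mean_confidence`` measures, here is the same function
applied to a two-sample toy input (the probabilities are made up):

```python
import numpy as np


def mean_confidence(y_true, y_proba):
    """Average predicted probability assigned to the true class."""
    return np.where(y_true == 1, y_proba[:, 1], y_proba[:, 0]).mean()


y_true = np.array([0, 1])
y_proba = np.array([[0.8, 0.2],   # true class 0 -> confidence 0.8
                    [0.3, 0.7]])  # true class 1 -> confidence 0.7
mean_confidence(y_true, y_proba)  # (0.8 + 0.7) / 2 = 0.75
```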
.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 0.209 seconds)

.. _sphx_glr_download_auto_examples_model_evaluation_plot_custom_metrics.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_custom_metrics.ipynb <plot_custom_metrics.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_custom_metrics.py <plot_custom_metrics.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_custom_metrics.zip <plot_custom_metrics.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_