{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "\n\n# Adapt skore to your use-case by adding your own metrics\n\nBy default, :meth:`~skore.EstimatorReport.metrics.summarize` reports a curated\nset of metrics for your ML task. In practice you often need domain-specific\nscores: a business cost function, a custom fairness measure, an F-beta with a\nparticular beta, etc.\n\nThis example walks through how to register such metrics with\n:meth:`~skore.EstimatorReport.metrics.add` so they are computed and displayed\nalongside the built-in ones.\n"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Setting up a classification problem\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import skore\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.linear_model import LogisticRegression\n\nX, y = load_breast_cancer(return_X_y=True)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "We create an :class:`~skore.EstimatorReport` through :func:`~skore.evaluate`\nusing a simple train/test split. ``pos_label=1`` marks the *malignant* class\nas the positive class.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "report = skore.evaluate(\n    LogisticRegression(max_iter=10_000), X, y, pos_label=1, splitter=0.2\n)"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Let's look at the default metrics:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "report.metrics.summarize().frame()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Adding a plain callable\n\nAny function with the signature ``(estimator, X, y, **kwargs) -> score`` can be\nregistered with :meth:`~skore.EstimatorReport.metrics.add`. The function name\nis used as the metric name by default.\nIf your metric can be expressed as a callable with the signature\n``(y_true, y_pred, **kwargs) -> score``, then you can use sklearn's ``make_scorer``\nutility function to convert it.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from sklearn.metrics import make_scorer\n\n\ndef specificity(y_true, y_pred):\n    \"\"\"Proportion of true negatives among actual negatives.\"\"\"\n    tn = ((y_true == 0) & (y_pred == 0)).sum()\n    fp = ((y_true == 0) & (y_pred == 1)).sum()\n    return tn / (tn + fp)\n\n\nreport.metrics.add(make_scorer(specificity))"
      ]
    },
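    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "As a quick sanity check (a toy illustration, separate from the report), we can\ncall ``specificity`` directly on small label arrays:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import numpy as np\n\ny_true_toy = np.array([0, 0, 0, 1, 1])\ny_pred_toy = np.array([0, 0, 1, 1, 1])\n\n# Two of the three actual negatives are predicted negative -> 2/3.\nspecificity(y_true_toy, y_pred_toy)"
      ]
    },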
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "report.metrics.summarize().frame()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "``specificity`` now appears alongside the built-in metrics.\n\n"
      ]
    },
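    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The ``(estimator, X, y)`` signature can also be used directly, without\n``make_scorer``. As a minimal sketch (the metric below is illustrative, not\nbuilt in), such a callable receives the fitted estimator and computes whatever\nit needs from its predictions:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from sklearn.metrics import balanced_accuracy_score\n\n\ndef balanced_error(estimator, X, y):\n    \"\"\"One minus balanced accuracy, computed from the fitted estimator.\"\"\"\n    return 1 - balanced_accuracy_score(y, estimator.predict(X))\n\n\nreport.metrics.add(balanced_error)\n\nreport.metrics.summarize(metric=\"balanced_error\").frame()"
      ]
    },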
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Passing extra keyword arguments\n\nIf your metric needs extra data at scoring time (e.g. sample-level amounts,\na cost matrix, ...), they can be passed as keyword arguments to\n:meth:`~skore.EstimatorReport.metrics.add`; they will be forwarded to the\nmetric function when it is computed.\nAlternatively, if the metric takes ``y_true`` and ``y_pred``, the keyword\narguments can be passed to ``make_scorer``:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "from sklearn.metrics import fbeta_score, make_scorer\n\nf2_scorer = make_scorer(fbeta_score, beta=2, pos_label=1)\nreport.metrics.add(f2_scorer, name=\"f2\")\n\nreport.metrics.summarize().frame()"
      ]
    },
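    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A sketch of the other route, assuming the keyword-forwarding behaviour\ndescribed above: here ``beta`` is passed to\n:meth:`~skore.EstimatorReport.metrics.add` itself rather than to\n``make_scorer``:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "report.metrics.add(make_scorer(fbeta_score, pos_label=1), name=\"f0_5\", beta=0.5)\n\nreport.metrics.summarize(metric=\"f0_5\").frame()"
      ]
    },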
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Cherry-picking metrics to display\n\nOnce registered, custom metrics can be selected by name in\n:meth:`~skore.EstimatorReport.metrics.summarize`:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "report.metrics.summarize(metric=[\"specificity\", \"f2\"]).frame()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Selecting ``data_source=\"both\"`` lets you compare train vs. test in one call:\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "report.metrics.summarize(metric=[\"specificity\", \"f2\"], data_source=\"both\").frame()"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Using a different response method\n\nBy default, callables receive the output of ``estimator.predict(X)``. If your\nmetric needs probabilities instead, set ``response_method=\"predict_proba\"``.\n\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "import numpy as np\n\n\ndef mean_confidence(y_true, y_proba):\n    \"\"\"Average predicted probability assigned to the true class.\"\"\"\n    return np.where(y_true == 1, y_proba[:, 1], y_proba[:, 0]).mean()\n\n\nreport.metrics.add(make_scorer(mean_confidence, response_method=\"predict_proba\"))\n\nreport.metrics.summarize(metric=\"mean_confidence\").frame()"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.14.4"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}