Automatic detection of modelling issues#

skore can automatically detect common modelling pitfalls such as overfitting and underfitting. This example walks through the diagnostics API: how to trigger diagnostics, how to read the results, and how to mute specific checks.

We use a purely non-linear regression target and deliberately pick models that fail in known ways:

  • a linear model that cannot capture the non-linearity → underfitting,

  • a single deep decision tree that memorizes the training set perfectly and fails to generalize → overfitting.

Setup#

The target is a product of trigonometric functions of the first two features: completely invisible to a linear model, yet easy to memorize for an unconstrained tree.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(42)
n_samples = 500
X = rng.uniform(0, 1, (n_samples, 5))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1]) + rng.normal(
    0, 0.1, n_samples
)

linear = LinearRegression()
deep_tree = DecisionTreeRegressor(random_state=42)
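As a preview of what the diagnostics will flag, the two failure modes can be reproduced with plain scikit-learn (this sketch is not part of skore; the hold-out split and its ratio are our own choice):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Same data-generating process as above.
rng = np.random.default_rng(42)
n_samples = 500
X = rng.uniform(0, 1, (n_samples, 5))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1]) + rng.normal(
    0, 0.1, n_samples
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# (train R2, test R2) per model: the linear model scores near zero on both
# splits, while the unconstrained tree is perfect on train but worse on test.
scores = {}
for model in (LinearRegression(), DecisionTreeRegressor(random_state=42)):
    model.fit(X_train, y_train)
    scores[type(model).__name__] = (
        model.score(X_train, y_train),
        model.score(X_test, y_test),
    )
print(scores)
```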

Calling diagnose() explicitly#

Every report exposes a diagnose() method. Diagnostics are computed lazily and cached, so repeated diagnose() calls are cheap after the first.
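The lazy-compute-then-cache behaviour can be illustrated with a toy class (this is only a sketch of the pattern, not skore's actual implementation):

```python
from functools import cached_property

class ToyReport:
    """Illustration of lazy-compute-then-cache; not skore's real class."""
    n_computations = 0

    @cached_property
    def _diagnosis(self):
        type(self).n_computations += 1  # the expensive checks would run here
        return "diagnosis result"

    def diagnose(self):
        return self._diagnosis

report = ToyReport()
first = report.diagnose()
second = report.diagnose()
print(ToyReport.n_computations)  # the expensive step ran only once
```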

linear_report.diagnose()
Diagnostics: 1 issue(s) detected, 2 check(s) ran, 0 ignored.
  • [SKD002] Potential underfitting. Train/test scores are on par and not significantly better than the dummy baseline for 2/2 comparable metrics. Read our documentation for more details. Mute with ignore=['SKD002'].


linear_report.metrics.summarize(data_source="both").frame()
Metric              LinearRegression (train)   LinearRegression (test)
R²                              0.001906                 -0.015818
RMSE                            0.522214                  0.504156
Fit time (s)                    0.000950                  0.000950
Predict time (s)                0.000136                  0.000234


The linear model is flagged for underfitting: its scores are on par between train and test, and not significantly better than a dummy baseline.
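The baseline comparison behind this check can be sketched with scikit-learn's DummyRegressor (skore's check also tests statistical significance, which this sketch omits):

```python
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Same data-generating process as above.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, (500, 5))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1]) + rng.normal(
    0, 0.1, 500
)

# On this purely non-linear target, the linear model's cross-validated R2
# is essentially indistinguishable from a constant mean predictor.
linear_r2 = cross_val_score(LinearRegression(), X, y, cv=5).mean()
dummy_r2 = cross_val_score(DummyRegressor(strategy="mean"), X, y, cv=5).mean()
print(f"linear R2 = {linear_r2:.3f}, dummy R2 = {dummy_r2:.3f}")
```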

tree_report.diagnose()
Diagnostics: 1 issue(s) detected, 2 check(s) ran, 0 ignored.
  • [SKD001] Potential overfitting. Significant train/test gaps were found for 2/2 default predictive metrics. Read our documentation for more details. Mute with ignore=['SKD001'].


tree_report.metrics.summarize(data_source="both").frame()
Metric              DecisionTreeRegressor (train)   DecisionTreeRegressor (test)
R²                                   1.000000                       0.783887
RMSE                                 0.000000                       0.232540
Fit time (s)                         0.003311                       0.003311
Predict time (s)                     0.000340                       0.000200


The deep tree is flagged for overfitting: it achieves a perfect score on train but degrades on test.
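A common remedy, sketched below with plain scikit-learn, is to limit the tree's capacity, which narrows the train/test gap that triggers SKD001 (max_depth=5 is an illustrative choice, not a skore recommendation):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Same data-generating process as above.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, (500, 5))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1]) + rng.normal(
    0, 0.1, 500
)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train/test R2 gap for the unconstrained tree versus a depth-limited one:
# capping the depth trades a perfect train score for better generalization.
gaps = {}
for depth in (None, 5):
    tree = DecisionTreeRegressor(max_depth=depth, random_state=42)
    tree.fit(X_train, y_train)
    gaps[depth] = tree.score(X_train, y_train) - tree.score(X_test, y_test)
print(gaps)
```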

Ignoring specific checks#

Each diagnostic has a stable code (e.g. SKD001, SKD002). You can mute individual checks per call:

tree_report.diagnose(ignore=["SKD001"])
Diagnostics: 0 issue(s) detected, 1 check(s) ran, 1 ignored.
  • No issues were detected in your report!


Or via skore's configuration, scoped here with a context manager, so that every diagnose() call made within it skips them:

import skore

with skore.configuration(ignore_diagnostics=["SKD001"]):
    diagnosis = tree_report.diagnose()
diagnosis
Diagnostics: 0 issue(s) detected, 1 check(s) ran, 1 ignored.
  • No issues were detected in your report!


Diagnostics on a CrossValidationReport#

When splitter is an integer, evaluate() returns a CrossValidationReport, and diagnostics are aggregated across the folds.

Diagnostics: 1 issue(s) detected, 2 check(s) ran, 0 ignored.
  • [SKD001] Potential overfitting. Detected in 5/5 evaluated splits. Read our documentation for more details. Mute with ignore=['SKD001'].
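What "Detected in 5/5 evaluated splits" measures can be mimicked with scikit-learn's cross_validate and return_train_score=True (a sketch, not skore's API):

```python
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeRegressor

# Same data-generating process as above.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, (500, 5))
y = np.sin(2 * np.pi * X[:, 0]) * np.cos(2 * np.pi * X[:, 1]) + rng.normal(
    0, 0.1, 500
)

# Per-fold train/test R2 gap: the unconstrained tree overfits in every fold.
cv = cross_validate(DecisionTreeRegressor(random_state=42), X, y, cv=5,
                    return_train_score=True)
gaps = cv["train_score"] - cv["test_score"]
print(f"noticeable train/test gap in {(gaps > 0.05).sum()}/5 splits")
```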


Diagnostics on a ComparisonReport#

Passing a list of estimators returns a ComparisonReport. Diagnostics are grouped by sub-report.

Diagnostics: 2 issue(s) detected, 2 check(s) ran, 0 ignored.
  • [SKD002] Potential underfitting. [LinearRegression] Train/test scores are on par and not significantly better than the dummy baseline for 2/2 comparable metrics. Read our documentation for more details. Mute with ignore=['SKD002'].
  • [SKD001] Potential overfitting. [DecisionTreeRegressor] Significant train/test gaps were found for 2/2 default predictive metrics. Read our documentation for more details. Mute with ignore=['SKD001'].


Total running time of the script: (0 minutes 0.361 seconds)

Gallery generated by Sphinx-Gallery