The skore API
This example illustrates the consistent API shared by skore reports and displays.
Reports expose the same accessors (data, metrics, inspection), and
each method that produces a visualization returns a Display object. All
displays implement a common interface: plot(), frame(), set_style(),
and help().
Minimal setup: one report and one display
We build a simple EstimatorReport and use it to show how
accessors return displays and how those displays behave.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from skore import EstimatorReport, train_test_split
from skrub import tabular_pipeline
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
split_data = train_test_split(X=X, y=y, random_state=42, as_dict=True)
estimator = tabular_pipeline(LogisticRegression())
report = EstimatorReport(estimator, **split_data)
╭────────────────────── HighClassImbalanceTooFewExamplesWarning ───────────────────────╮
│ It seems that you have a classification problem with at least one class with fewer │
│ than 100 examples in the test set. In this case, using train_test_split may not be a │
│ good idea because of high variability in the scores obtained on the test set. We │
│ suggest three options to tackle this challenge: you can increase test_size, collect │
│ more data, or use skore's CrossValidationReport with the `splitter` parameter of │
│ your choice. │
╰──────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────── ShuffleTrueWarning ─────────────────────────────────╮
│ We detected that the `shuffle` parameter is set to `True` either explicitly or from │
│ its default value. In case of time-ordered events (even if they are independent), │
│ this will result in inflated model performance evaluation because natural drift will │
│ not be taken into account. We recommend setting the shuffle parameter to `False` in │
│ order to ensure the evaluation process is really representative of your production │
│ release process. │
╰──────────────────────────────────────────────────────────────────────────────────────╯
Data accessor: report.data.analyze() returns a display
The data accessor provides dataset summaries. Its analyze() method
returns a TableReportDisplay.
data_display = report.data.analyze()
data_display.help()
Every display implements the same API. You can:
Plot it (with optional backend and style):
data_display.plot(kind="dist", x="mean radius", y="mean texture")

You can set the style of the plot via set_style() and then call plot():
data_display.set_style(scatterplot_kwargs={"color": "orange", "alpha": 1.0})
data_display.plot(kind="dist", x="mean radius", y="mean texture")

Export the underlying data as a DataFrame:
Metrics accessor: same idea, same display API
The metrics accessor exposes methods such as confusion_matrix(),
roc_curve(), precision_recall(), and prediction_error(). Each
returns a display (e.g. ConfusionMatrixDisplay) with the
same interface: plot(), frame(), set_style(), help().
metrics_display = report.metrics.confusion_matrix()
metrics_display.help()
Draw the confusion matrix by calling plot():

Inspection accessor
The inspection accessor exposes model-specific displays (e.g.
coefficients() for linear models, impurity_decrease() for trees).
These also return Display objects with the same plot(), frame(),
set_style(), and help() methods.
inspection_display = report.inspection.coefficients()
inspection_display.plot(select_k=15, sorting_order="descending")

Same API with CrossValidationReport
The same accessors and display API apply to CrossValidationReport.
We use the same dataset and model; only the report type changes.
from skore import CrossValidationReport
cv_report = CrossValidationReport(estimator, X, y, splitter=3)
Again: data, metrics, and inspection return displays with
plot(), frame(), and set_style().

cv_report.metrics.confusion_matrix().plot()

cv_report.inspection.coefficients().plot(select_k=10, sorting_order="descending")

The same accessors and display API apply to ComparisonReport
(metrics and inspection; no data accessor when comparing reports).
Summary
Reports (Estimator, CrossValidation, Comparison) use the same accessor layout:
report.data, report.metrics, and report.inspection (where applicable).
Accessor methods that produce figures or tables return Display objects.
Displays share a single, predictable API:
plot(**kwargs) — render the visualization
frame(**kwargs) — return the data as a pandas.DataFrame
set_style(policy=..., **kwargs) — customize appearance
help() — show available options
This consistency makes it easy to switch between report types and to reuse the same workflow across data, metrics, and inspection.
Total running time of the script: (0 minutes 5.267 seconds)