Quick start

From your Python code, create and load a skore Project:

import skore

my_project = skore.open("my_project", create=True)

This will create a skore project directory named my_project.skore in your current working directory. Note that overwrite=True will overwrite any pre-existing project with the same path, which you might not want depending on your use case.
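If you do want a clean slate, a minimal sketch of forcing the overwrite (flag name taken from the note above, not otherwise verified on this page):

# Replaces any existing project at the same path; use with care.
my_project = skore.open("my_project", create=True, overwrite=True)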

Evaluate your model using skore’s CrossValidationReport:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

from skore import CrossValidationReport

X, y = make_classification(n_classes=2, n_samples=100_000, n_informative=4)
clf = LogisticRegression()

cv_report = CrossValidationReport(clf, X, y)
Processing cross-validation ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% for LogisticRegression
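The report ran cross-validation with the default splitting strategy, which produces the five splits shown in the metrics table below. A sketch of requesting a different number of splits, assuming the constructor accepts a cv_splitter argument (parameter name assumed, not confirmed by this page):

# cv_splitter accepts anything scikit-learn's cross_validate would (assumed).
cv_report = CrossValidationReport(clf, X, y, cv_splitter=10)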

Display the help tree to see all the insights that are available to you (given that you are doing binary classification):

cv_report.help()

╭─────────────────── Tools to diagnose estimator LogisticRegression ───────────────────╮
│ report                                                                               │
│ ├── .metrics                                                                         │
│ │   ├── .accuracy(...)         (↗︎)     - Compute the accuracy score.                 │
│ │   ├── .brier_score(...)      (↘︎)     - Compute the Brier score.                    │
│ │   ├── .log_loss(...)         (↘︎)     - Compute the log loss.                       │
│ │   ├── .precision(...)        (↗︎)     - Compute the precision score.                │
│ │   ├── .recall(...)           (↗︎)     - Compute the recall score.                   │
│ │   ├── .roc_auc(...)          (↗︎)     - Compute the ROC AUC score.                  │
│ │   ├── .custom_metric(...)            - Compute a custom metric.                    │
│ │   ├── .report_metrics(...)           - Report a set of metrics for our estimator.  │
│ │   └── .plot                                                                        │
│ │       ├── .precision_recall(...)     - Plot the precision-recall curve.            │
│ │       └── .roc(...)                  - Plot the ROC curve.                         │
│ ├── .cache_predictions(...)            - Cache the predictions for sub-estimators    │
│ │   reports.                                                                         │
│ ├── .clear_cache(...)                  - Clear the cache.                            │
│ └── Attributes                                                                       │
│     ├── .X                                                                           │
│     ├── .y                                                                           │
│     ├── .estimator_                                                                  │
│     ├── .estimator_name_                                                             │
│     ├── .estimator_reports_                                                          │
│     └── .n_jobs                                                                      │
│                                                                                      │
│                                                                                      │
│ Legend:                                                                              │
│ (↗︎) higher is better (↘︎) lower is better                                             │
╰──────────────────────────────────────────────────────────────────────────────────────╯
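Several of these insights reuse the same predictions, and the tree lists a .cache_predictions(...) entry for computing them once up front. A minimal sketch, calling it with its defaults (any keyword options are not shown on this page):

# Precompute and cache predictions so later metrics and plots are fast.
cv_report.cache_predictions()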

Display the report metrics that were computed for you, keeping the resulting dataframe so we can store it later:

df_cv_report_metrics = cv_report.metrics.report_metrics()
df_cv_report_metrics

Compute metric for each split ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%

Metric                          Precision (↗︎)         Recall (↗︎)            ROC AUC (↗︎)   Brier score (↘︎)
Class label                     0         1           0         1
LogisticRegression  Split #0    0.711855  0.679716    0.652961  0.735906    0.742988      0.203268
                    Split #1    0.708920  0.676875    0.649560  0.733507    0.739072      0.204214
                    Split #2    0.716189  0.677460    0.645258  0.744502    0.733268      0.206385
                    Split #3    0.705876  0.675994    0.650160  0.729308    0.736770      0.205158
                    Split #4    0.713305  0.681327    0.654927  0.737031    0.739471      0.204202
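The per-metric accessors listed in the help tree can also be called on their own; for example, the accuracy entry (called here with its defaults, any keyword options omitted):

# Returns the accuracy computed on each cross-validation split.
cv_report.metrics.accuracy()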


Display the ROC curve that was generated for you:

import matplotlib.pyplot as plt

roc_plot = cv_report.metrics.plot.roc()
roc_plot
plt.tight_layout()
[Figure: ROC curve of the cross-validated LogisticRegression]
Computing predictions for display ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
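The help tree also lists a precision-recall counterpart to the ROC plot; it can be displayed the same way:

# Plot the precision-recall curve listed under .metrics.plot in the help tree.
pr_plot = cv_report.metrics.plot.precision_recall()
pr_plot
plt.tight_layout()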

Store the results in the skore project for safe-keeping:

my_project.put("df_cv_report_metrics", df_cv_report_metrics)
my_project.put("roc_plot", roc_plot)

What’s next?

For a more in-depth guide, see our Skore: getting started page!

Clean up the project

Let’s clear the skore project (to avoid any conflict with other documentation examples).
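This page does not show the cleanup call itself, so here is a minimal sketch that simply deletes the project directory created at the top of the page (path assumed to be my_project.skore in the current working directory):

import shutil

# Delete the on-disk project directory created by skore.open above.
shutil.rmtree("my_project.skore", ignore_errors=True)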

