MLflow skore Project#

This example shows how to persist reports in MLflow using a Project in mode="mlflow": log reports as MLflow runs and inspect them. It uses a CrossValidationReport, but the same approach applies to EstimatorReport.

To run this example against your own MLflow tracking server, use:

TRACKING_URI=<tracking_uri> PROJECT=<project> python plot_skore_mlflow_project.py

To try it locally, start an MLflow server with uvx mlflow server and set TRACKING_URI=http://127.0.0.1:5000.

First, let us build one report to persist:

from sklearn.datasets import load_iris
from sklearn.ensemble import HistGradientBoostingClassifier
from skore import CrossValidationReport, Project

X, y = load_iris(return_X_y=True, as_frame=True)

estimator = HistGradientBoostingClassifier()
report = CrossValidationReport(estimator, X, y)

Then, we can push the report to the MLflow backend:

import io
import os
from contextlib import redirect_stderr, redirect_stdout

# The tracking URI and experiment name come from the environment,
# as shown in the command line above.
TRACKING_URI = os.environ["TRACKING_URI"]
PROJECT = os.environ["PROJECT"]

# MLflow/Alembic emits verbose DB initialization logs; silence them so the
# example page focuses on skore usage rather than backend startup details.
with redirect_stdout(io.StringIO()), redirect_stderr(io.StringIO()):
    # This creates an MLflow experiment with name `PROJECT`:
    project = Project(
        PROJECT,
        mode="mlflow",
        tracking_uri=TRACKING_URI,
    )
project.put("hgb-baseline", report)
2026/03/12 16:59:46 WARNING mlflow.sklearn: Saving scikit-learn models in the pickle or cloudpickle format requires exercising caution because these formats rely on Python's object serialization mechanism, which can execute arbitrary code during deserialization. The recommended safe alternative is the 'skops' format. For more information, see: https://scikit-learn.org/stable/model_persistence.html
2026/03/12 16:59:51 INFO mlflow.models.model: Found the following environment variables used during model inference: [SPHINX_EXAMPLE_API_KEY]. Please check if you need to set them when deploying the model. To disable this message, set environment variable `MLFLOW_RECORD_ENV_VARS_IN_MODEL_LOGGING` to `false`.

Note that MLflow warns us about saving models with pickle. Future versions of skore may rely on skops for model serialization, which would make these warnings disappear.

As with other project types (local, hub), you can access the project summary and convert it to a DataFrame:

import pandas as pd

summary = project.summarize()
pandas_summary = pd.DataFrame(summary).reset_index()
pandas_summary[["id", "key", "report_type", "learner", "ml_task", "dataset"]]
   id                                key           report_type       learner                         ml_task                    dataset
0  92b9fced651d4d829f3ce15103c7b02d  hgb-baseline  cross-validation  HistGradientBoostingClassifier  multiclass-classification  8f9eb48c
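
Each summary row maps a report key to the MLflow run ID that stores it. As a minimal sketch of that lookup over plain records (the `run_id_for` helper is illustrative, not part of skore; the record values reuse the output above):

```python
# Each summary row pairs a report key with the MLflow run id storing it.
# `rows` mirrors the single-row summary shown above.
rows = [
    {"id": "92b9fced651d4d829f3ce15103c7b02d", "key": "hgb-baseline"},
]


def run_id_for(key, rows):
    """Return the MLflow run id stored under a given report key."""
    return next(row["id"] for row in rows if row["key"] == key)


print(run_id_for("hgb-baseline", rows))
```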


The “id” column corresponds to the MLflow run ID, so you can access the MLflow run this way:

import mlflow

(run_id,) = pandas_summary["id"]

mlflow_run = mlflow.get_run(run_id)
mlflow_run.data.metrics
{'accuracy': 0.9466666666666667, 'accuracy_std': 0.0649786289653931, 'log_loss': 0.2822828899154286, 'log_loss_std': 0.24315920107983896, 'recall': 0.9466666666666667, 'recall_std': 0.0649786289653931, 'precision': 0.9466666666666667, 'precision_std': 0.0649786289653931, 'roc_auc': 0.9913333333333334, 'roc_auc_std': 0.009888264649460847, 'fit_time': 0.13231392220000088, 'predict_time': 0.006634758200016222}
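
Note the naming convention: each aggregated metric comes with a `<name>_std` companion holding the standard deviation across folds. A small sketch pairing them (the dict literal reuses rounded values from the output above):

```python
# Pair each mean metric with its `<name>_std` companion, mirroring
# the convention used in the MLflow metrics dictionary above.
metrics = {
    "accuracy": 0.9467, "accuracy_std": 0.0650,
    "log_loss": 0.2823, "log_loss_std": 0.2432,
}

paired = {
    name: (value, metrics[f"{name}_std"])
    for name, value in metrics.items()
    if not name.endswith("_std")
}
# paired["accuracy"] == (0.9467, 0.0650)
```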

But most importantly, this ID lets you load saved reports:

loaded_report = project.get(run_id)
loaded_report.metrics.summarize().frame()
                                 HistGradientBoostingClassifier
                                                           mean       std
Metric           Label / Average
Accuracy                                               0.946667  0.064979
Precision        0                                     1.000000  0.000000
                 1                                     0.935065  0.062957
                 2                                     0.920280  0.133381
Recall           0                                     1.000000  0.000000
                 1                                     0.900000  0.173205
                 2                                     0.940000  0.054772
ROC AUC          0                                     1.000000  0.000000
                 1                                     0.980000  0.020917
                 2                                     0.981000  0.018841
Log loss                                               0.282283  0.243159
Fit time (s)                                           0.132314  0.008354
Predict time (s)                                       0.003361  0.000058
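
Each cell of this table aggregates a mean and a standard deviation across folds. When reporting such results elsewhere, a small helper can render them as `mean ± std` strings (a sketch; `format_metric` is not part of skore):

```python
def format_metric(mean, std, digits=3):
    """Render an aggregated cross-validation metric as `mean ± std`."""
    return f"{mean:.{digits}f} ± {std:.{digits}f}"


# Accuracy row from the table above:
print(format_metric(0.946667, 0.064979))  # -> 0.947 ± 0.065
```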


Total running time of the script: (0 minutes 32.570 seconds)

Gallery generated by Sphinx-Gallery