.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/getting_started/plot_getting_started.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end ` to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_getting_started_plot_getting_started.py:

.. _example_getting_started:

======================
Skore: getting started
======================

.. GENERATED FROM PYTHON SOURCE LINES 10-24

This guide illustrates how to use skore through a complete machine learning
workflow for binary classification:

#. Set up a proper experiment with training and test data
#. Develop and evaluate multiple models using cross-validation
#. Compare models to select the best one
#. Validate the final model on held-out data
#. Track and organize your machine learning results

Throughout this guide, we will see how skore helps you:

* Avoid common pitfalls with smart diagnostics
* Quickly get rich insights into model performance
* Organize and track your experiments

.. GENERATED FROM PYTHON SOURCE LINES 26-34

Setting up our binary classification problem
============================================

Let's start by loading the German credit dataset, a classic binary
classification problem where we predict the customer's credit risk ("good" or
"bad"). This dataset contains various features about credit applicants,
including personal information, credit history, and loan details.

.. GENERATED FROM PYTHON SOURCE LINES 36-45

.. code-block:: Python

    import pandas as pd
    import skore
    from sklearn.datasets import fetch_openml
    from skrub import TableReport

    german_credit = fetch_openml(data_id=31, as_frame=True, parser="pandas")
    X, y = german_credit.data, german_credit.target
    TableReport(german_credit.frame)

.. (Output: an interactive skrub TableReport of the dataset, rendered in the HTML build.)

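Before splitting the data, it is worth checking the class balance of the
target directly. A minimal sketch using pandas (assuming the ``y`` Series
loaded above):

.. code-block:: Python

    # Quick check of the target distribution with plain pandas.
    # The German credit dataset contains 700 "good" and 300 "bad" examples.
    print(y.value_counts())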
.. GENERATED FROM PYTHON SOURCE LINES 46-56

Creating our experiment and held-out sets
-----------------------------------------

We will use skore's enhanced :func:`~skore.train_test_split` function to create
our experiment set and a held-out test set. The experiment set will be used for
model development and cross-validation, while the held-out set will only be
used at the end to validate our final model.

Unlike scikit-learn's :func:`~sklearn.model_selection.train_test_split`,
skore's version provides helpful diagnostics about potential issues with your
data split, such as class imbalance.

.. GENERATED FROM PYTHON SOURCE LINES 58-62

.. code-block:: Python

    X_experiment, X_holdout, y_experiment, y_holdout = skore.train_test_split(
        X, y, random_state=42
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ╭────────────────────── HighClassImbalanceTooFewExamplesWarning ───────────────────────╮
    │ It seems that you have a classification problem with at least one class with fewer   │
    │ than 100 examples in the test set. In this case, using train_test_split may not be a │
    │ good idea because of high variability in the scores obtained on the test set. We     │
    │ suggest three options to tackle this challenge: you can increase test_size, collect  │
    │ more data, or use skore's CrossValidationReport with the `splitter` parameter of     │
    │ your choice.                                                                         │
    ╰───────────────────────────────────────────────────────────────────────────────────────╯
    ╭───────────────────────────────── ShuffleTrueWarning ─────────────────────────────────╮
    │ We detected that the `shuffle` parameter is set to `True` either explicitly or from  │
    │ its default value. In case of time-ordered events (even if they are independent),    │
    │ this will result in inflated model performance evaluation because natural drift will │
    │ not be taken into account. We recommend setting the shuffle parameter to `False` in  │
    │ order to ensure the evaluation process is really representative of your production   │
    │ release process.                                                                     │
    ╰───────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 63-67

skore tells us we have class-imbalance issues with our data, which we can
confirm with the :class:`~skrub.TableReport` above by clicking on the "class"
column and looking at the class distribution: there are only 300 examples where
the target is "bad". The second warning concerns time-ordered data; our data
does not contain time-ordered columns, so we can safely ignore it.

.. GENERATED FROM PYTHON SOURCE LINES 69-83

Model development with cross-validation
=======================================

We will investigate two different families of models using cross-validation:

1. A :class:`~sklearn.linear_model.LogisticRegression`, which is a linear model
2. A :class:`~sklearn.ensemble.RandomForestClassifier`, which is an ensemble of
   decision trees

In both cases, we rely on :func:`skrub.tabular_pipeline` to choose the proper
preprocessing depending on the kind of model.

Cross-validation is necessary to get a more reliable estimate of model
performance. skore makes it easy through :class:`skore.CrossValidationReport`.

.. GENERATED FROM PYTHON SOURCE LINES 85-91

Model no. 1: Logistic regression with preprocessing
---------------------------------------------------

Our first model will be a linear model, with automatic preprocessing of
non-numeric data. Under the hood, skrub's :class:`~skrub.TableVectorizer` will
adapt the preprocessing based on our choice to use a linear model.

.. GENERATED FROM PYTHON SOURCE LINES 93-99

.. code-block:: Python

    from sklearn.linear_model import LogisticRegression
    from skrub import tabular_pipeline

    simple_model = tabular_pipeline(LogisticRegression())
    simple_model
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Pipeline(steps=[('tablevectorizer',
                     TableVectorizer(datetime=DatetimeEncoder(periodic_encoding='spline'))),
                    ('simpleimputer', SimpleImputer(add_indicator=True)),
                    ('squashingscaler', SquashingScaler(max_absolute_value=5)),
                    ('logisticregression', LogisticRegression())])


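For intuition, the pipeline above is roughly what one would assemble by hand. A
sketch of a manual equivalent (the exact components are chosen internally by
:func:`~skrub.tabular_pipeline` and may differ across skrub versions; the repr
above also includes skrub's ``SquashingScaler``, available in recent releases):

.. code-block:: Python

    # Hand-rolled approximation of tabular_pipeline(LogisticRegression()),
    # mirroring the steps shown in the repr above. Not the exact internal
    # recipe: tabular_pipeline adapts these choices to the estimator.
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from skrub import TableVectorizer

    manual_model = make_pipeline(
        TableVectorizer(),
        SimpleImputer(add_indicator=True),
        LogisticRegression(),
    )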
.. GENERATED FROM PYTHON SOURCE LINES 100-101

We now cross-validate the model with :class:`~skore.CrossValidationReport`.

.. GENERATED FROM PYTHON SOURCE LINES 103-113

.. code-block:: Python

    from skore import CrossValidationReport

    simple_cv_report = CrossValidationReport(
        simple_model,
        X=X_experiment,
        y=y_experiment,
        pos_label="good",
        splitter=5,
    )

.. GENERATED FROM PYTHON SOURCE LINES 114-118

Skore reports allow us to structure the statistical information we look for
when experimenting with predictive models. First, the
:meth:`~skore.CrossValidationReport.help` method shows us all its available
methods and attributes, with the knowledge that our model was trained for
classification:

.. GENERATED FROM PYTHON SOURCE LINES 120-122

.. code-block:: Python

    simple_cv_report.help()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ╭─────────────────── Tools to diagnose estimator LogisticRegression ───────────────────╮
    │ CrossValidationReport                                                                 │
    │ ├── .data                                                                             │
    │ │   └── .analyze(...)              - Plot dataset statistics.                         │
    │ ├── .metrics                                                                          │
    │ │   ├── .accuracy(...)        (↗︎)  - Compute the accuracy score.                      │
    │ │   ├── .brier_score(...)     (↘︎)  - Compute the Brier score.                         │
    │ │   ├── .confusion_matrix(...)     - Plot the confusion matrix.                       │
    │ │   ├── .log_loss(...)        (↘︎)  - Compute the log loss.                            │
    │ │   ├── .precision(...)       (↗︎)  - Compute the precision score.                     │
    │ │   ├── .precision_recall(...)     - Plot the precision-recall curve.                 │
    │ │   ├── .recall(...)          (↗︎)  - Compute the recall score.                        │
    │ │   ├── .roc(...)                  - Plot the ROC curve.                              │
    │ │   ├── .roc_auc(...)         (↗︎)  - Compute the ROC AUC score.                       │
    │ │   ├── .timings(...)              - Get all measured processing times related        │
    │ │   │                                to the estimator.                                │
    │ │   ├── .custom_metric(...)        - Compute a custom metric.                         │
    │ │   └── .summarize(...)            - Report a set of metrics for our estimator.       │
    │ ├── .inspection                                                                       │
    │ │   └── .coefficients(...)         - Retrieve the coefficients across splits,         │
    │ │                                    including the intercept.                         │
    │ ├── .cache_predictions(...)        - Cache the predictions for sub-estimators         │
    │ │                                    reports.                                         │
    │ ├── .clear_cache(...)              - Clear the cache.                                 │
    │ ├── .create_estimator_report(...)  - Create an estimator report from the              │
    │ │                                    cross-validation report.                         │
    │ ├── .get_predictions(...)          - Get estimator's predictions.                     │
    │ └── Attributes                                                                        │
    │     ├── .X                         - The data to fit                                  │
    │     ├── .y                         - The target variable to try to predict in         │
    │     │                                the case of supervised learning                  │
    │     ├── .estimator                 - Estimator to make the cross-validation           │
    │     │                                report from                                      │
    │     ├── .estimator_                - The cloned or copied estimator                   │
    │     ├── .estimator_name_           - The name of the estimator                        │
    │     ├── .estimator_reports_        - The estimator reports for each split             │
    │     ├── .ml_task                   - No description available                         │
    │     ├── .n_jobs                    - Number of jobs to run in parallel                │
    │     ├── .pos_label                 - For binary classification, the positive          │
    │     │                                class                                            │
    │     ├── .split_indices             - No description available                         │
    │     └── .splitter                  - Determines the cross-validation splitting        │
    │                                      strategy                                         │
    │                                                                                       │
    │ Legend:                                                                               │
    │ (↗︎) higher is better (↘︎) lower is better                                              │
    ╰───────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 123-124

For example, we can examine the training data, which excludes the held-out
data:

.. GENERATED FROM PYTHON SOURCE LINES 126-128

.. code-block:: Python

    simple_cv_report.data.analyze()

.. (Output: an interactive skrub report of the experiment data, rendered in the HTML build.)

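Any single metric listed in the help tree can also be computed on its own; for
instance (a sketch, hedging on the exact shape of the returned object, which
aggregates the score across the cross-validation splits):

.. code-block:: Python

    # Compute one metric across the CV splits; in recent skore versions
    # this returns a small dataframe of per-split or aggregated scores.
    simple_cv_report.metrics.roc_auc()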
.. GENERATED FROM PYTHON SOURCE LINES 129-131

We can also quickly get an overview of the performance of our model, using
:meth:`~skore.CrossValidationReport.metrics.summarize`:

.. GENERATED FROM PYTHON SOURCE LINES 133-136

.. code-block:: Python

    simple_metrics = simple_cv_report.metrics.summarize(favorability=True)
    simple_metrics.frame()
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                     LogisticRegression           Favorability
                                   mean       std
    Metric
    Accuracy                   0.729333  0.050903          (↗︎)
    Precision                  0.785632  0.034982          (↗︎)
    Recall                     0.840934  0.050696          (↗︎)
    ROC AUC                    0.750335  0.056447          (↗︎)
    Brier score                0.184294  0.026786          (↘︎)
    Fit time (s)               0.112238  0.009682          (↘︎)
    Predict time (s)           0.054712  0.000496          (↘︎)


.. GENERATED FROM PYTHON SOURCE LINES 137-141

.. note::

    ``favorability=True`` adds a column showing whether higher or lower metric
    values are better.

.. GENERATED FROM PYTHON SOURCE LINES 143-145

In addition to the summary of metrics, skore provides more advanced statistical
information such as the precision-recall curve:

.. GENERATED FROM PYTHON SOURCE LINES 147-150

.. code-block:: Python

    precision_recall = simple_cv_report.metrics.precision_recall()
    precision_recall.help()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ╭─────────────────────────── PrecisionRecallCurveDisplay ───────────────────────────╮
    │ display                                                                            │
    │ ├── Attributes                                                                     │
    │ └── Methods                                                                        │
    │     ├── .frame(...)     - Get the data used to create the precision-recall curve  │
    │     │                     plot.                                                   │
    │     ├── .plot(...)      - Plot visualization.                                     │
    │     └── .set_style(...) - Set the style parameters for the display.               │
    ╰─────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 151-156

.. note::

    The output of :meth:`~skore.CrossValidationReport.metrics.precision_recall`
    is a :class:`~skore.Display` object. This is a common pattern in skore that
    lets us access the same information in several ways.

.. GENERATED FROM PYTHON SOURCE LINES 158-159

We can visualize the critical information as a plot, with only a few lines of
code:

.. GENERATED FROM PYTHON SOURCE LINES 161-163

.. code-block:: Python

    precision_recall.plot()

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_001.png
   :alt: Precision-Recall Curve for LogisticRegression Positive label: good Data source: Test set
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 164-165

Or we can access the raw information as a dataframe if additional analysis is
needed:

.. GENERATED FROM PYTHON SOURCE LINES 167-169

.. code-block:: Python

    precision_recall.frame()
.. rst-class:: sphx-glr-script-out

.. code-block:: none

         split  threshold  precision    recall
    0        0   0.110277   0.700000  1.000000
    1        0   0.215387   0.744681  1.000000
    2        0   0.238563   0.742857  0.990476
    3        0   0.248819   0.741007  0.980952
    4        0   0.273961   0.739130  0.971429
    ..     ...        ...        ...       ...
    660      4   0.982595   1.000000  0.048077
    661      4   0.988545   1.000000  0.038462
    662      4   0.989817   1.000000  0.028846
    663      4   0.994946   1.000000  0.019231
    664      4   0.995636   1.000000  0.009615

    [665 rows x 4 columns]



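Because the raw curve data is a regular dataframe, any downstream analysis is
possible. For instance, a sketch that finds, for each cross-validation split,
the lowest decision threshold reaching at least 80% precision:

.. code-block:: Python

    # Illustrative post-processing of the precision-recall data: for each
    # CV split, find the smallest threshold with precision >= 0.80.
    pr = precision_recall.frame()
    print(pr[pr["precision"] >= 0.80].groupby("split")["threshold"].min())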
.. GENERATED FROM PYTHON SOURCE LINES 170-171

As another example, we can plot the confusion matrix with the same consistent
API:

.. GENERATED FROM PYTHON SOURCE LINES 173-176

.. code-block:: Python

    confusion_matrix = simple_cv_report.metrics.confusion_matrix()
    confusion_matrix.plot()

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_002.png
   :alt: Confusion Matrix Decision threshold: 0.50 Data source: Test set
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_002.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 177-179

Skore also provides utilities to inspect models. Since our model is a linear
model, we can study the importance that it gives to each feature:

.. GENERATED FROM PYTHON SOURCE LINES 181-184

.. code-block:: Python

    coefficients = simple_cv_report.inspection.coefficients()
    coefficients.frame()
.. rst-class:: sphx-glr-script-out

.. code-block:: none

         split                      feature  coefficients
    0        0                    Intercept      1.232482
    1        0     checking_status_0<=X<200     -0.322232
    2        0           checking_status_<0     -0.572662
    3        0        checking_status_>=200      0.196627
    4        0  checking_status_no checking      0.791377
    ..     ...                          ...           ...
    295      4  job_unemp/unskilled non res      0.272749
    296      4       job_unskilled resident      0.118356
    297      4               num_dependents     -0.112250
    298      4            own_telephone_yes      0.319237
    299      4           foreign_worker_yes     -0.660103

    [300 rows x 3 columns]



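Since the coefficients are reported per split, we can also gauge how stable
they are across folds. A sketch aggregating mean and standard deviation per
feature with pandas:

.. code-block:: Python

    # Aggregate the per-split coefficients to check their stability across
    # the 5 cross-validation splits, sorted by mean magnitude.
    coef = coefficients.frame()
    stability = coef.groupby("feature")["coefficients"].agg(["mean", "std"])
    print(stability.sort_values("mean", key=abs, ascending=False).head(10))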
.. GENERATED FROM PYTHON SOURCE LINES 185-187

.. code-block:: Python

    coefficients.plot(select_k=15)

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_003.png
   :alt: Coefficients of LogisticRegression
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 188-194

Model no. 2: Random forest
--------------------------

Now, we cross-validate a more advanced model using
:class:`~sklearn.ensemble.RandomForestClassifier`. Again, we rely on
:func:`~skrub.tabular_pipeline` to perform the appropriate preprocessing to use
with this model.

.. GENERATED FROM PYTHON SOURCE LINES 196-201

.. code-block:: Python

    from sklearn.ensemble import RandomForestClassifier

    advanced_model = tabular_pipeline(RandomForestClassifier(random_state=0))
    advanced_model
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Pipeline(steps=[('tablevectorizer',
                     TableVectorizer(low_cardinality=OrdinalEncoder(handle_unknown='use_encoded_value',
                                                                    unknown_value=-1))),
                    ('randomforestclassifier',
                     RandomForestClassifier(random_state=0))])


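As the earlier warning suggested, the cross-validation strategy can be
customized through the ``splitter`` parameter. A sketch using an explicit
stratified splitter to respect the class imbalance, instead of the default
integer number of splits:

.. code-block:: Python

    # Illustrative variant: pass an explicit scikit-learn splitter rather
    # than an integer number of splits.
    from sklearn.model_selection import StratifiedKFold

    CrossValidationReport(
        advanced_model,
        X=X_experiment,
        y=y_experiment,
        pos_label="good",
        splitter=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    )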
.. GENERATED FROM PYTHON SOURCE LINES 202-206

.. code-block:: Python

    advanced_cv_report = CrossValidationReport(
        advanced_model, X=X_experiment, y=y_experiment, pos_label="good"
    )

.. GENERATED FROM PYTHON SOURCE LINES 207-208

We will now compare this new model with the previous one.

.. GENERATED FROM PYTHON SOURCE LINES 210-215

Comparing our models
====================

Now that we have our two models, we need to decide which one should go into
production. We can compare them with a :class:`skore.ComparisonReport`.

.. GENERATED FROM PYTHON SOURCE LINES 217-226

.. code-block:: Python

    from skore import ComparisonReport

    comparison = ComparisonReport(
        {
            "Simple Linear Model": simple_cv_report,
            "Advanced Pipeline": advanced_cv_report,
        },
    )

.. GENERATED FROM PYTHON SOURCE LINES 227-228

This report follows the same API as :class:`~skore.CrossValidationReport`:

.. GENERATED FROM PYTHON SOURCE LINES 228-230

.. code-block:: Python

    comparison.help()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ╭──────────────────────────── Tools to compare estimators ─────────────────────────────╮
    │ ComparisonReport                                                                      │
    │ ├── .metrics                                                                          │
    │ │   ├── .accuracy(...)        (↗︎)  - Compute the accuracy score.                      │
    │ │   ├── .brier_score(...)     (↘︎)  - Compute the Brier score.                         │
    │ │   ├── .confusion_matrix(...)     - Plot the confusion matrix.                       │
    │ │   ├── .log_loss(...)        (↘︎)  - Compute the log loss.                            │
    │ │   ├── .precision(...)       (↗︎)  - Compute the precision score.                     │
    │ │   ├── .precision_recall(...)     - Plot the precision-recall curve.                 │
    │ │   ├── .recall(...)          (↗︎)  - Compute the recall score.                        │
    │ │   ├── .roc(...)                  - Plot the ROC curve.                              │
    │ │   ├── .roc_auc(...)         (↗︎)  - Compute the ROC AUC score.                       │
    │ │   ├── .timings(...)              - Get all measured processing times related        │
    │ │   │                                to the different estimators.                     │
    │ │   ├── .custom_metric(...)        - Compute a custom metric.                         │
    │ │   └── .summarize(...)            - Report a set of metrics for the estimators.      │
    │ ├── .inspection                                                                       │
    │ ├── .cache_predictions(...)        - Cache the predictions for sub-estimators         │
    │ │                                    reports.                                         │
    │ ├── .clear_cache(...)              - Clear the cache.                                 │
    │ ├── .create_estimator_report(...)  - Create an estimator report from one of the       │
    │ │                                    reports in the comparison.                       │
    │ ├── .get_predictions(...)          - Get predictions from the underlying              │
    │ │                                    reports.                                         │
    │ └── Attributes                                                                        │
    │     ├── .n_jobs                    - Number of jobs to run in parallel                │
    │     ├── .pos_label                 - No description available                         │
    │     └── .reports_                  - The compared reports                             │
    │                                                                                       │
    │ Legend:                                                                               │
    │ (↗︎) higher is better (↘︎) lower is better                                              │
    ╰───────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 231-233

We have access to the same tools to perform statistical analysis and compare
both models:

.. GENERATED FROM PYTHON SOURCE LINES 233-236

.. code-block:: Python

    comparison_metrics = comparison.metrics.summarize(favorability=True)
    comparison_metrics.frame()
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                                    mean                                    std                   Favorability
    Estimator        Simple Linear Model Advanced Pipeline Simple Linear Model Advanced Pipeline
    Metric
    Accuracy                    0.729333          0.745333            0.050903          0.032796          (↗︎)
    Precision                   0.785632          0.779443            0.034982          0.018644          (↗︎)
    Recall                      0.840934          0.885037            0.050696          0.053558          (↗︎)
    ROC AUC                     0.750335          0.773334            0.056447          0.034190          (↗︎)
    Brier score                 0.184294          0.169911            0.026786          0.010967          (↘︎)
    Fit time (s)                0.112238          0.207508            0.009682          0.001207          (↘︎)
    Predict time (s)            0.054712          0.050366            0.000496          0.000338          (↘︎)


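The help tree shows that the other curves follow the same Display API; for
instance, we can overlay the ROC curves of both models:

.. code-block:: Python

    # Plot the ROC curves of both compared models on a single figure,
    # using the same frame()/plot() Display pattern as precision_recall().
    comparison.metrics.roc().plot()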
.. GENERATED FROM PYTHON SOURCE LINES 237-239

.. code-block:: Python

    comparison.metrics.precision_recall().plot()

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_004.png
   :alt: Precision-Recall Curve Positive label: good Data source: Test set, estimator = Simple Linear Model, estimator = Advanced Pipeline
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 240-245

Based on the previous tables and plots, it seems that the
:class:`~sklearn.ensemble.RandomForestClassifier` model has slightly better
performance. For the purposes of this guide, however, we make the arbitrary
choice to deploy the linear model, so that we can compare its coefficients with
the study shown earlier.

.. GENERATED FROM PYTHON SOURCE LINES 247-254

Final model evaluation on held-out data
=======================================

Now that we have chosen to deploy the linear model, we will train it on the
full experiment set and evaluate it on our held-out data: training on more data
should help performance, and we can also validate that our model generalizes
well to new data. This can be done in one step with
:meth:`~skore.ComparisonReport.create_estimator_report`.

.. GENERATED FROM PYTHON SOURCE LINES 256-261

.. code-block:: Python

    final_report = comparison.create_estimator_report(
        name="Simple Linear Model", X_test=X_holdout, y_test=y_holdout
    )

.. GENERATED FROM PYTHON SOURCE LINES 262-263

This returns a :class:`~skore.EstimatorReport`, which has a similar API to the
other report classes:

.. GENERATED FROM PYTHON SOURCE LINES 265-268

.. code-block:: Python

    final_metrics = final_report.metrics.summarize()
    final_metrics.frame()
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                     LogisticRegression
    Metric
    Accuracy                   0.764000
    Precision                  0.808290
    Recall                     0.876404
    ROC AUC                    0.809613
    Brier score                0.153900
    Fit time (s)               0.094020
    Predict time (s)           0.054049


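Since the :class:`~skore.EstimatorReport` exposes the same ``.metrics``
accessors, the held-out performance can be inspected in the familiar way; for
example:

.. code-block:: Python

    # Held-out ROC curve for the final model (same Display API as before).
    final_report.metrics.roc().plot()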
.. GENERATED FROM PYTHON SOURCE LINES 269-271

.. code-block:: Python

    final_report.metrics.confusion_matrix().plot()

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_005.png
   :alt: Confusion Matrix Decision threshold: 0.50 Data source: Test set
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_005.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 272-276

We can easily combine the results of the previous cross-validation with the
evaluation on the held-out dataset, since the two are accessible as dataframes.
This way, we can check whether our chosen model meets the expectations we set
during the experiment phase.

.. GENERATED FROM PYTHON SOURCE LINES 278-283

.. code-block:: Python

    pd.concat(
        [final_metrics.frame(), simple_cv_report.metrics.summarize().frame()],
        axis="columns",
    )
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                     LogisticRegression  (LogisticRegression, mean)  (LogisticRegression, std)
    Metric
    Accuracy                   0.764000                    0.729333                   0.050903
    Precision                  0.808290                    0.785632                   0.034982
    Recall                     0.876404                    0.840934                   0.050696
    ROC AUC                    0.809613                    0.750335                   0.056447
    Brier score                0.153900                    0.184294                   0.026786
    Fit time (s)               0.094020                    0.112238                   0.009682
    Predict time (s)           0.054049                    0.054712                   0.000496


.. GENERATED FROM PYTHON SOURCE LINES 284-286

As expected, our final model gets better performance, likely thanks to the
larger training set.

.. GENERATED FROM PYTHON SOURCE LINES 288-290

Our final sanity check is to compare the features considered most impactful
between our final model and the cross-validation:

.. GENERATED FROM PYTHON SOURCE LINES 292-310

.. code-block:: Python

    final_coefficients = final_report.inspection.coefficients()
    final_top_15_features = final_coefficients.frame(select_k=15)["feature"]

    simple_coefficients = simple_cv_report.inspection.coefficients()
    cv_top_15_features = (
        simple_coefficients.frame(select_k=15)
        .groupby("feature", sort=False)
        .mean()
        .drop(columns="split")
        .reset_index()["feature"]
    )

    pd.concat(
        [final_top_15_features, cv_top_15_features], axis="columns", ignore_index=True
    )
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                                                     0                                               1
    0                                        Intercept                                       Intercept
    1                         checking_status_0<=X<200                              checking_status_<0
    2                               checking_status_<0                     checking_status_no checking
    4                      checking_status_no checking  credit_history_critical/other existing credit
    6                          credit_history_all paid                               purpose_education
    7   credit_history_critical/other existing credit                                 purpose_new car
    10             credit_history_no credits/all paid                                   credit_amount
    13                               purpose_education                                             age
    15                                 purpose_new car                                             NaN
    19                              purpose_retraining                                             NaN
    20                                purpose_used car                                             NaN
    21                                   credit_amount                                             NaN
    32                          installment_commitment                                             NaN
    45                                             age                                             NaN
    59                              foreign_worker_yes                                             NaN
    3                                              NaN                         credit_history_all paid
    5                                              NaN             credit_history_no credits/all paid
    8                                              NaN                              purpose_retraining
    9                                              NaN                                purpose_used car
    11                                             NaN                           savings_status_>=1000
    12                                             NaN                          installment_commitment
    14                                             NaN                              foreign_worker_yes


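For a quick numeric check of the overlap between the two top-15 lists, plain
Python sets are enough (a sketch):

.. code-block:: Python

    # Count how many of the 15 most impactful features are shared between
    # the final model and the cross-validated models.
    shared = set(final_top_15_features) & set(cv_top_15_features)
    print(f"{len(shared)} of 15 top features are shared")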
.. GENERATED FROM PYTHON SOURCE LINES 314-315

They seem very similar, so we are done!

.. GENERATED FROM PYTHON SOURCE LINES 317-330

Tracking our work with a skore Project
======================================

Now that we have completed our modeling workflow, we should store our models in
a safe place for future work. Indeed, if this research notebook were modified,
we would no longer be able to relate the current production model to the code
that generated it.

We can use a :class:`skore.Project` to keep track of our experiments. This
makes it easy to organize, retrieve, and compare models over time. Usually this
would be done as you go along during model development, but in the interest of
simplicity we kept it until the end.

.. GENERATED FROM PYTHON SOURCE LINES 332-333

We load or create a local project:

.. GENERATED FROM PYTHON SOURCE LINES 335-339

.. code-block:: Python

    project = skore.Project("german_credit_classification")

.. GENERATED FROM PYTHON SOURCE LINES 347-348

We store our reports with descriptive keys:

.. GENERATED FROM PYTHON SOURCE LINES 350-354

.. code-block:: Python

    project.put("simple_linear_model_cv", simple_cv_report)
    project.put("advanced_pipeline_cv", advanced_cv_report)
    project.put("final_model", final_report)

.. GENERATED FROM PYTHON SOURCE LINES 355-356

Now we can retrieve a summary of our stored reports:

.. GENERATED FROM PYTHON SOURCE LINES 358-362

.. code-block:: Python

    summary = project.summarize()
    # Uncomment the next line to display the widget in an interactive environment:
    # summary

.. GENERATED FROM PYTHON SOURCE LINES 363-375

.. note::

    Calling ``summary`` in a Jupyter notebook cell will show the following
    parallel coordinate plot to help you select models that you want to
    retrieve:

    .. image:: /_static/images/screenshot_getting_started.png
        :alt: Screenshot of the widget in a Jupyter notebook

    Each line represents a model, and we can select models by clicking on lines
    or dragging on metric axes to filter by performance. In the screenshot, we
    selected only the cross-validation reports; this allows us to retrieve
    exactly those reports programmatically.

.. GENERATED FROM PYTHON SOURCE LINES 377-381

Assuming you selected "Cross-validation" in the "Report type" tab, calling
:meth:`~skore.project._summary.Summary.reports` now returns only the
:class:`~skore.CrossValidationReport` objects, which you can directly combine
into a :class:`~skore.ComparisonReport`:

.. GENERATED FROM PYTHON SOURCE LINES 383-389

.. code-block:: Python

    new_report = summary.reports(return_as="comparison")
    new_report.help()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ╭──────────────────────────── Tools to compare estimators ─────────────────────────────╮
    │ ComparisonReport                                                                      │
    │ ├── .metrics                                                                          │
    │ │   ├── .accuracy(...)        (↗︎)  - Compute the accuracy score.                      │
    │ │   ├── .brier_score(...)     (↘︎)  - Compute the Brier score.                         │
    │ │   ├── .confusion_matrix(...)     - Plot the confusion matrix.                       │
    │ │   ├── .log_loss(...)        (↘︎)  - Compute the log loss.                            │
    │ │   ├── .precision(...)       (↗︎)  - Compute the precision score.                     │
    │ │   ├── .precision_recall(...)     - Plot the precision-recall curve.                 │
    │ │   ├── .recall(...)          (↗︎)  - Compute the recall score.                        │
    │ │   ├── .roc(...)                  - Plot the ROC curve.                              │
    │ │   ├── .roc_auc(...)         (↗︎)  - Compute the ROC AUC score.                       │
    │ │   ├── .timings(...)              - Get all measured processing times related        │
    │ │   │                                to the different estimators.                     │
    │ │   ├── .custom_metric(...)        - Compute a custom metric.                         │
    │ │   └── .summarize(...)            - Report a set of metrics for the estimators.      │
    │ ├── .inspection                                                                       │
    │ ├── .cache_predictions(...)        - Cache the predictions for sub-estimators         │
    │ │                                    reports.                                         │
    │ ├── .clear_cache(...)              - Clear the cache.                                 │
    │ ├── .create_estimator_report(...)  - Create an estimator report from one of the       │
    │ │                                    reports in the comparison.                       │
    │ ├── .get_predictions(...)          - Get predictions from the underlying              │
    │ │                                    reports.                                         │
    │ └── Attributes                                                                        │
    │     ├── .n_jobs                    - Number of jobs to run in parallel                │
    │     ├── .pos_label                 - No description available                         │
    │     └── .reports_                  - The compared reports                             │
    │                                                                                       │
    │ Legend:                                                                               │
    │ (↗︎) higher is better (↘︎) lower is better                                              │
    ╰───────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 397-411

.. admonition:: Stay tuned!

    This is only the beginning for skore. We welcome your feedback and ideas to
    make it the best tool for end-to-end data science.

    Key benefits of using skore in your ML workflow:

    * Standardized evaluation and comparison of models
    * Rich visualizations and diagnostics
    * Organized experiment tracking
    * Seamless integration with scikit-learn

    Feel free to join our community on `Discord `_ or `create an issue `_.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (0 minutes 14.795 seconds)

.. _sphx_glr_download_auto_examples_getting_started_plot_getting_started.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_getting_started.ipynb `

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_getting_started.py `

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_getting_started.zip `

.. only:: html

    .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery `_