.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/getting_started/plot_getting_started.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_getting_started_plot_getting_started.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_getting_started_plot_getting_started.py:

.. _example_getting_started:

======================
Skore: getting started
======================

This guide illustrates how to use skore through a complete machine learning
workflow for binary classification:

#. Set up a proper experiment with training and test data
#. Develop and evaluate multiple models using cross-validation
#. Compare models to select the best one
#. Validate the final model on held-out data
#. Track and organize your machine learning results

Throughout this guide, we will see how skore helps you:

* Avoid common pitfalls with smart diagnostics
* Quickly get rich insights into model performance
* Organize and track your experiments

Storing reports in Skore Hub
----------------------------

At the end of this example, we send the reports to Skore Hub
(https://skore.probabl.ai/), a platform for storing, sharing, and exploring
your machine learning reports.

To push the reports to your own Skore Hub workspace and project, run this
example with the following command:

.. code-block:: bash

    WORKSPACE=<workspace> PROJECT=<project> python plot_getting_started.py

In this gallery, we push the different reports to a public workspace.

.. GENERATED FROM PYTHON SOURCE LINES 42-50

Setting up our binary classification problem
============================================

Let's start by loading the German credit dataset, a classic binary
classification problem where we predict the customer's credit risk ("good" or
"bad").
This dataset contains various features about credit applicants, including
personal information, credit history, and loan details.

.. GENERATED FROM PYTHON SOURCE LINES 52-61

.. code-block:: Python

    import pandas as pd
    import skore
    from sklearn.datasets import fetch_openml
    from skrub import TableReport

    german_credit = fetch_openml(data_id=31, as_frame=True, parser="pandas")
    X, y = german_credit.data, german_credit.target
    TableReport(german_credit.frame)
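The interactive report above lets us inspect the class distribution by clicking on the "class" column. The same check can be done programmatically with pandas. A minimal sketch, using a toy target series with this dataset's 700 "good" / 300 "bad" split as a stand-in for the real `y`, so that the snippet runs on its own:

```python
import pandas as pd

# Toy target with the same 700 "good" / 300 "bad" split as the German
# credit dataset (a stand-in for the real `y`).
y_toy = pd.Series(["good"] * 700 + ["bad"] * 300, name="class")

# Relative class frequencies: a quick, programmatic imbalance check.
proportions = y_toy.value_counts(normalize=True)
print(proportions)
# good    0.7
# bad     0.3
```

On the real data, the same check is simply `y.value_counts(normalize=True)`.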
.. GENERATED FROM PYTHON SOURCE LINES 62-73

Creating our experiment and held-out sets
-----------------------------------------

We will use skore's enhanced :func:`~skore.train_test_split` function to create
our experiment set and a held-out test set. The experiment set will be used for
model development and cross-validation, while the held-out set will only be
used at the end to validate our final model.

Unlike scikit-learn's :func:`~sklearn.model_selection.train_test_split`,
skore's version provides helpful diagnostics about potential issues with your
data split, such as class imbalance.

.. GENERATED FROM PYTHON SOURCE LINES 75-79

.. code-block:: Python

    X_experiment, X_holdout, y_experiment, y_holdout = skore.train_test_split(
        X, y, random_state=42
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ╭────────────────────── HighClassImbalanceTooFewExamplesWarning ───────────────────────╮
    │ It seems that you have a classification problem with at least one class with fewer   │
    │ than 100 examples in the test set. In this case, using train_test_split may not be a │
    │ good idea because of high variability in the scores obtained on the test set. We     │
    │ suggest three options to tackle this challenge: you can increase test_size, collect  │
    │ more data, or use skore's CrossValidationReport with the `splitter` parameter of     │
    │ your choice.                                                                         │
    ╰──────────────────────────────────────────────────────────────────────────────────────╯
    ╭───────────────────────────────── ShuffleTrueWarning ─────────────────────────────────╮
    │ We detected that the `shuffle` parameter is set to `True` either explicitly or from  │
    │ its default value. In case of time-ordered events (even if they are independent),    │
    │ this will result in inflated model performance evaluation because natural drift will │
    │ not be taken into account. We recommend setting the shuffle parameter to `False` in  │
    │ order to ensure the evaluation process is really representative of your production   │
    │ release process.                                                                     │
    ╰──────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 80-85

Skore tells us that we have class-imbalance issues with our data, which we can
confirm with the :class:`~skore.TableReport` above by clicking on the "class"
column and looking at the class distribution: there are only 300 examples where
the target is "bad". The second warning concerns time-ordered data; our data
does not contain time-ordered columns, so we can safely ignore it.

.. GENERATED FROM PYTHON SOURCE LINES 87-101

Model development with cross-validation
=======================================

We will investigate two different families of models using cross-validation:

1. a :class:`~sklearn.linear_model.LogisticRegression`, which is a linear
   model;
2. a :class:`~sklearn.ensemble.RandomForestClassifier`, which is an ensemble
   of decision trees.

In both cases, we rely on :func:`skrub.tabular_pipeline` to choose the proper
preprocessing depending on the kind of model.

Cross-validation is necessary to get a more reliable estimate of model
performance. skore makes it easy through :class:`skore.CrossValidationReport`.

.. GENERATED FROM PYTHON SOURCE LINES 103-109

Model no. 1: Logistic regression with preprocessing
---------------------------------------------------

Our first model is a linear model, with automatic preprocessing of non-numeric
data. Under the hood, skrub's :class:`~skrub.TableVectorizer` adapts the
preprocessing based on our choice to use a linear model.

.. GENERATED FROM PYTHON SOURCE LINES 111-117

.. code-block:: Python

    from sklearn.linear_model import LogisticRegression
    from skrub import tabular_pipeline

    simple_model = tabular_pipeline(LogisticRegression())
    simple_model
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Pipeline(steps=[('tablevectorizer',
                     TableVectorizer(datetime=DatetimeEncoder(periodic_encoding='spline'))),
                    ('simpleimputer', SimpleImputer(add_indicator=True)),
                    ('squashingscaler', SquashingScaler(max_absolute_value=5)),
                    ('logisticregression', LogisticRegression())])


.. GENERATED FROM PYTHON SOURCE LINES 118-122

We now evaluate our model with cross-validation, using :func:`~skore.evaluate`
with `splitter=5` to perform 5-fold cross-validation. This returns a
:class:`~skore.CrossValidationReport` object, which can be used to access the
performance metrics and other information about the model.

.. GENERATED FROM PYTHON SOURCE LINES 124-130

.. code-block:: Python

    from skore import evaluate

    simple_cv_report = evaluate(
        simple_model, X_experiment, y_experiment, pos_label="good", splitter=5
    )

.. GENERATED FROM PYTHON SOURCE LINES 131-135

Skore reports allow us to structure the statistical information we look for
when experimenting with predictive models. First, the
:meth:`~skore.CrossValidationReport.help` method shows us all the available
methods and attributes, with the knowledge that our model was trained for
classification:

.. GENERATED FROM PYTHON SOURCE LINES 137-139

.. code-block:: Python

    simple_cv_report.help()


.. GENERATED FROM PYTHON SOURCE LINES 140-141

For example, we can examine the training data, which excludes the held-out
data:

.. GENERATED FROM PYTHON SOURCE LINES 143-145

.. code-block:: Python

    simple_cv_report.data.analyze()
.. GENERATED FROM PYTHON SOURCE LINES 146-148

But we can also quickly get an overview of the performance of our model, using
:meth:`~skore.CrossValidationReport.metrics.summarize`:

.. GENERATED FROM PYTHON SOURCE LINES 150-153

.. code-block:: Python

    simple_metrics = simple_cv_report.metrics.summarize()
    simple_metrics.frame(favorability=True)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

                     LogisticRegression            Favorability
                                   mean       std
    Metric
    Accuracy                   0.729333  0.050903          (↗︎)
    Precision                  0.785632  0.034982          (↗︎)
    Recall                     0.840934  0.050696          (↗︎)
    ROC AUC                    0.750335  0.056447          (↗︎)
    Brier score                0.184294  0.026786          (↘︎)
    Fit time (s)               0.091420  0.000669          (↘︎)
    Predict time (s)           0.054091  0.000392          (↘︎)
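Since `frame()` returns a regular pandas DataFrame, we can post-process the metrics with ordinary pandas operations, for instance to eyeball the fold-to-fold variability. A minimal sketch, using a hand-built frame with the same mean/std layout as above rather than the real report output:

```python
import pandas as pd

# Hand-built stand-in for simple_metrics.frame(): one row per metric, with
# the cross-validation mean and standard deviation across folds.
cv_metrics = pd.DataFrame(
    {
        "mean": [0.729333, 0.785632, 0.840934],
        "std": [0.050903, 0.034982, 0.050696],
    },
    index=["Accuracy", "Precision", "Recall"],
)

# Rough mean +/- 2*std band, a cheap way to gauge fold-to-fold variability.
cv_metrics["low"] = cv_metrics["mean"] - 2 * cv_metrics["std"]
cv_metrics["high"] = cv_metrics["mean"] + 2 * cv_metrics["std"]
print(cv_metrics.round(3))
```

The two-sigma band is our own rule of thumb here, not a skore feature.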
.. GENERATED FROM PYTHON SOURCE LINES 154-158

.. note::

    `favorability=True` adds a column showing whether higher or lower metric
    values are better.

.. GENERATED FROM PYTHON SOURCE LINES 160-162

In addition to the summary of metrics, skore provides more advanced
statistical information, such as the precision-recall curve:

.. GENERATED FROM PYTHON SOURCE LINES 164-167

.. code-block:: Python

    precision_recall = simple_cv_report.metrics.precision_recall()
    precision_recall.help()


.. GENERATED FROM PYTHON SOURCE LINES 168-173

.. note::

    The output of :meth:`~skore.CrossValidationReport.metrics.precision_recall`
    is a :class:`~skore.Display` object. This is a common pattern in skore,
    which allows us to access the information in several ways.

.. GENERATED FROM PYTHON SOURCE LINES 175-176

We can visualize the critical information as a plot, with only a few lines of
code:

.. GENERATED FROM PYTHON SOURCE LINES 178-180

.. code-block:: Python

    precision_recall.plot()

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_001.png
   :alt: Precision-Recall Curve for LogisticRegression Positive label: good Data source: Test set
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 181-182

Or we can access the raw information as a dataframe if additional analysis is
needed:

.. GENERATED FROM PYTHON SOURCE LINES 184-186

.. code-block:: Python

    precision_recall.frame()
.. rst-class:: sphx-glr-script-out

.. code-block:: none

         split  threshold  precision    recall
    0        0   0.110277   0.700000  1.000000
    1        0   0.215387   0.744681  1.000000
    2        0   0.238563   0.742857  0.990476
    3        0   0.248819   0.741007  0.980952
    4        0   0.273961   0.739130  0.971429
    ..     ...        ...        ...       ...
    660      4   0.982595   1.000000  0.048077
    661      4   0.988545   1.000000  0.038462
    662      4   0.989817   1.000000  0.028846
    663      4   0.994946   1.000000  0.019231
    664      4   0.995636   1.000000  0.009615

    [665 rows x 4 columns]



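With the raw frame in hand, we can derive quantities that the summary table does not show directly, such as the area under each split's precision-recall curve. A minimal sketch on a toy frame with the same columns as `precision_recall.frame()` above; the trapezoidal integration is written out by hand to avoid depending on a particular NumPy version:

```python
import numpy as np
import pandas as pd

# Toy stand-in for precision_recall.frame(): per-split (threshold, precision,
# recall) triples, with recall decreasing as the threshold increases.
pr = pd.DataFrame(
    {
        "split": [0, 0, 0, 1, 1, 1],
        "threshold": [0.1, 0.5, 0.9, 0.1, 0.5, 0.9],
        "precision": [0.70, 0.80, 1.00, 0.72, 0.78, 0.95],
        "recall": [1.00, 0.60, 0.10, 1.00, 0.65, 0.12],
    }
)

def pr_auc(group: pd.DataFrame) -> float:
    """Trapezoidal area under one split's precision-recall curve."""
    g = group.sort_values("recall")
    y = g["precision"].to_numpy()
    x = g["recall"].to_numpy()
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

per_split_auc = {split: pr_auc(group) for split, group in pr.groupby("split")}
print(per_split_auc)  # split 0 -> 0.75
```

Averaging `per_split_auc` over splits then gives a cross-validated estimate of the PR AUC.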
.. GENERATED FROM PYTHON SOURCE LINES 187-188

As another example, we can plot the confusion matrix with the same consistent
API:

.. GENERATED FROM PYTHON SOURCE LINES 190-193

.. code-block:: Python

    confusion_matrix = simple_cv_report.metrics.confusion_matrix()
    confusion_matrix.plot()

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_002.png
   :alt: Confusion Matrix Decision threshold: 0.50 Data source: Test set
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_002.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/work/skore/skore/skore/venv/lib/python3.13/site-packages/skore/_sklearn/_plot/metrics/confusion_matrix.py:604: FutureWarning: The default of observed=False is deprecated and will be changed to True in a future version of pandas. Pass observed=False to retain current behavior or observed=True to adopt the future default and silence this warning.
      for _, group in self.confusion_matrix.groupby(["split"]):

.. GENERATED FROM PYTHON SOURCE LINES 194-196

Skore also provides utilities to inspect models. Since our model is a linear
model, we can study the importance that it gives to each feature:

.. GENERATED FROM PYTHON SOURCE LINES 198-201

.. code-block:: Python

    coefficients = simple_cv_report.inspection.coefficients()
    coefficients.frame()
.. rst-class:: sphx-glr-script-out

.. code-block:: none

        feature                                        coefficient_mean  coefficient_std
    0   Intercept                                              1.221919         0.145022
    1   checking_status_0<=X<200                              -0.356274         0.116078
    2   checking_status_<0                                    -0.712660         0.098075
    3   checking_status_>=200                                  0.216352         0.220024
    4   checking_status_no checking                            0.957280         0.117765
    5   duration                                              -0.299406         0.085176
    6   credit_history_all paid                               -0.456190         0.110763
    7   credit_history_critical/other existing credit          0.655752         0.152976
    8   credit_history_delayed previously                      0.104324         0.117658
    9   credit_history_existing paid                          -0.267659         0.095892
    10  credit_history_no credits/all paid                    -0.383078         0.178751
    11  purpose_business                                      -0.042625         0.146252
    12  purpose_domestic appliance                            -0.128409         0.223069
    13  purpose_education                                     -0.504801         0.218228
    14  purpose_furniture/equipment                            0.088990         0.063633
    15  purpose_new car                                       -0.410305         0.084358
    16  purpose_other                                          0.069206         0.138734
    17  purpose_radio/tv                                       0.138978         0.075132
    18  purpose_repairs                                       -0.271701         0.122305
    19  purpose_retraining                                     0.448025         0.236986
    20  purpose_used car                                       0.469388         0.116847
    21  credit_amount                                         -0.383071         0.145635
    22  savings_status_100<=X<500                             -0.010966         0.098748
    23  savings_status_500<=X<1000                             0.085306         0.085370
    24  savings_status_<100                                   -0.333553         0.109446
    25  savings_status_>=1000                                  0.370625         0.316363
    26  savings_status_no known savings                        0.186693         0.082206
    27  employment_1<=X<4                                     -0.062535         0.085583
    28  employment_4<=X<7                                      0.161965         0.094966
    29  employment_<1                                         -0.091905         0.094219
    30  employment_>=7                                         0.075963         0.109892
    31  employment_unemployed                                 -0.050175         0.131802
    32  installment_commitment                                -0.627063         0.178107
    33  personal_status_female div/dep/mar                    -0.166590         0.081708
    34  personal_status_male div/sep                          -0.220082         0.171560
    35  personal_status_male mar/wid                          -0.087428         0.061253
    36  personal_status_male single                            0.328973         0.116731
    37  other_parties_co applicant                            -0.122421         0.066232
    38  other_parties_guarantor                                0.276689         0.046478
    39  other_parties_none                                    -0.154268         0.064498
    40  residence_since                                       -0.070676         0.080168
    41  property_magnitude_car                                -0.012530         0.106496
    42  property_magnitude_life insurance                     -0.110668         0.086645
    43  property_magnitude_no known property                  -0.216614         0.087059
    44  property_magnitude_real estate                         0.184397         0.084662
    45  age                                                    0.445303         0.102326
    46  other_payment_plans_bank                              -0.128049         0.084283
    47  other_payment_plans_none                               0.200376         0.060104
    48  other_payment_plans_stores                            -0.072327         0.111137
    49  housing_for free                                      -0.108021         0.141465
    50  housing_own                                            0.149660         0.058071
    51  housing_rent                                          -0.175394         0.053475
    52  existing_credits                                      -0.293875         0.160032
    53  job_high qualif/self emp/mgmt                         -0.146369         0.138251
    54  job_skilled                                           -0.069226         0.146688
    55  job_unemp/unskilled non res                            0.245873         0.372472
    56  job_unskilled resident                                 0.031590         0.083569
    57  num_dependents                                        -0.097292         0.102585
    58  own_telephone_yes                                      0.326305         0.141762
    59  foreign_worker_yes                                    -0.680383         0.183700


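A common follow-up is to rank features by the magnitude of their mean coefficient, which is presumably what `select_k` does when picking the top features to plot. A minimal sketch on a hand-built frame containing a few of the rows from the table above:

```python
import pandas as pd

# Hand-built stand-in for coefficients.frame(): a few rows from the table
# above (feature name, cross-validated mean and std of the coefficient).
coefs = pd.DataFrame(
    {
        "feature": [
            "checking_status_no checking",
            "duration",
            "foreign_worker_yes",
            "savings_status_100<=X<500",
        ],
        "coefficient_mean": [0.957280, -0.299406, -0.680383, -0.010966],
        "coefficient_std": [0.117765, 0.085176, 0.183700, 0.098748],
    }
)

# Rank features by absolute mean coefficient, largest first.
ranked = coefs.reindex(
    coefs["coefficient_mean"].abs().sort_values(ascending=False).index
)
print(ranked["feature"].tolist())
```

Note that comparing raw coefficient magnitudes is only meaningful here because the preprocessing scales the features to comparable ranges.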
.. GENERATED FROM PYTHON SOURCE LINES 202-204

.. code-block:: Python

    coefficients.plot(select_k=15)

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_003.png
   :alt: Coefficients of LogisticRegression
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_003.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 205-212

Model no. 2: Random forest
--------------------------

Now, we cross-validate a more advanced model using
:class:`~sklearn.ensemble.RandomForestClassifier`. Again, we rely on
:func:`~skrub.tabular_pipeline` to perform the appropriate preprocessing for
this model.

.. GENERATED FROM PYTHON SOURCE LINES 214-219

.. code-block:: Python

    from sklearn.ensemble import RandomForestClassifier

    advanced_model = tabular_pipeline(RandomForestClassifier(random_state=0))
    advanced_model
.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Pipeline(steps=[('tablevectorizer',
                     TableVectorizer(low_cardinality=OrdinalEncoder(handle_unknown='use_encoded_value',
                                                                    unknown_value=-1))),
                    ('randomforestclassifier',
                     RandomForestClassifier(random_state=0))])
.. GENERATED FROM PYTHON SOURCE LINES 220-224

.. code-block:: Python

    advanced_cv_report = evaluate(
        advanced_model, X_experiment, y_experiment, pos_label="good", splitter=5
    )

.. GENERATED FROM PYTHON SOURCE LINES 225-226

We will now compare this new model with the previous one.

.. GENERATED FROM PYTHON SOURCE LINES 228-234

Comparing our models
====================

Now that we have our two models, we need to decide which one should go into
production. We can compare them with :func:`~skore.compare`, which returns a
:class:`~skore.ComparisonReport`.

.. GENERATED FROM PYTHON SOURCE LINES 236-245

.. code-block:: Python

    from skore import compare

    comparison = compare(
        {
            "Simple Linear Model": simple_cv_report,
            "Advanced Pipeline": advanced_cv_report,
        },
    )

.. GENERATED FROM PYTHON SOURCE LINES 246-247

This report follows the same API as :class:`~skore.CrossValidationReport`:

.. GENERATED FROM PYTHON SOURCE LINES 247-249

.. code-block:: Python

    comparison.help()


.. GENERATED FROM PYTHON SOURCE LINES 250-252

We have access to the same tools to perform statistical analysis and compare
both models:

.. GENERATED FROM PYTHON SOURCE LINES 252-255

.. code-block:: Python

    comparison_metrics = comparison.metrics.summarize()
    comparison_metrics.frame(favorability=True)
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                                  mean                                   std                       Favorability
    Estimator        Advanced Pipeline  Simple Linear Model  Advanced Pipeline  Simple Linear Model
    Metric
    Accuracy                  0.745333             0.729333           0.032796             0.050903          (↗︎)
    Precision                 0.779443             0.785632           0.018644             0.034982          (↗︎)
    Recall                    0.885037             0.840934           0.053558             0.050696          (↗︎)
    ROC AUC                   0.773334             0.750335           0.034190             0.056447          (↗︎)
    Brier score               0.169911             0.184294           0.010967             0.026786          (↘︎)
    Fit time (s)              0.205567             0.091420           0.000927             0.000669          (↘︎)
    Predict time (s)          0.051138             0.054091           0.000694             0.000392          (↘︎)


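The comparison frame can likewise be post-processed with pandas, for instance to look at the per-metric gap between the two models. A minimal sketch on a hand-built frame holding the cross-validated means from the table above, rather than the real report output:

```python
import pandas as pd

# Hand-built stand-in for the comparison table above: cross-validated means
# for each model (higher is better for all three metrics shown).
means = pd.DataFrame(
    {
        "Advanced Pipeline": [0.745333, 0.885037, 0.773334],
        "Simple Linear Model": [0.729333, 0.840934, 0.750335],
    },
    index=["Accuracy", "Recall", "ROC AUC"],
)

# Per-metric difference: positive values favor the random forest pipeline.
delta = means["Advanced Pipeline"] - means["Simple Linear Model"]
print(delta.round(4))
```

For a real decision, these gaps should be weighed against the fold-to-fold standard deviations shown in the table.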
.. GENERATED FROM PYTHON SOURCE LINES 256-258

.. code-block:: Python

    comparison.metrics.precision_recall().plot()

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_004.png
   :alt: Precision-Recall Curve Positive label: good Data source: Test set, estimator = Simple Linear Model, estimator = Advanced Pipeline
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_004.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 259-264

Based on the previous tables and plots, the
:class:`~sklearn.ensemble.RandomForestClassifier` model seems to perform
slightly better. For the purposes of this guide, however, we make the
arbitrary choice to deploy the linear model, to allow a comparison with the
coefficients study shown earlier.

.. GENERATED FROM PYTHON SOURCE LINES 266-273

Final model evaluation on held-out data
=======================================

Now that we have chosen to deploy the linear model, we train it on the full
experiment set and evaluate it on our held-out data: training on more data
should help performance, and we can also validate that our model generalizes
well to new data. This can be done in one step with
:meth:`~skore.ComparisonReport.create_estimator_report`.

.. GENERATED FROM PYTHON SOURCE LINES 275-280

.. code-block:: Python

    final_report = comparison.create_estimator_report(
        report_key="Simple Linear Model", X_test=X_holdout, y_test=y_holdout
    )

.. GENERATED FROM PYTHON SOURCE LINES 281-283

This returns a :class:`~skore.EstimatorReport`, which has a similar API to the
other report classes:

.. GENERATED FROM PYTHON SOURCE LINES 285-288

.. code-block:: Python

    final_metrics = final_report.metrics.summarize()
    final_metrics.frame()
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                     LogisticRegression
    Metric
    Accuracy                   0.764000
    Precision                  0.808290
    Recall                     0.876404
    ROC AUC                    0.809613
    Brier score                0.153900
    Fit time (s)               0.104981
    Predict time (s)           0.058107


.. GENERATED FROM PYTHON SOURCE LINES 289-291

.. code-block:: Python

    final_report.metrics.confusion_matrix().plot()

.. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_005.png
   :alt: Confusion Matrix Decision threshold: 0.50 Data source: Test set
   :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_005.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 292-296

We can easily combine the results of the previous cross-validation with the
evaluation on the held-out dataset, since both are accessible as dataframes.
This way, we can check whether our chosen model meets the expectations we set
during the experiment phase.

.. GENERATED FROM PYTHON SOURCE LINES 298-303

.. code-block:: Python

    pd.concat(
        [final_metrics.frame(), simple_cv_report.metrics.summarize().frame()],
        axis="columns",
    )
.. rst-class:: sphx-glr-script-out

.. code-block:: none

                     LogisticRegression  (LogisticRegression, mean)  (LogisticRegression, std)
    Metric
    Accuracy                   0.764000                    0.729333                   0.050903
    Precision                  0.808290                    0.785632                   0.034982
    Recall                     0.876404                    0.840934                   0.050696
    ROC AUC                    0.809613                    0.750335                   0.056447
    Brier score                0.153900                    0.184294                   0.026786
    Fit time (s)               0.104981                    0.091420                   0.000669
    Predict time (s)           0.058107                    0.054091                   0.000392


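Such a combined view also lends itself to a simple programmatic check: does the held-out score fall within, say, two standard deviations of the cross-validation mean? A minimal sketch on hand-built values taken from the table above; the two-sigma band is our own rule of thumb, not a skore feature:

```python
import pandas as pd

# Stand-ins for the combined table above: held-out score plus the
# cross-validation mean/std for two of the metrics.
combined = pd.DataFrame(
    {
        "holdout": [0.764000, 0.809613],
        "cv_mean": [0.729333, 0.750335],
        "cv_std": [0.050903, 0.056447],
    },
    index=["Accuracy", "ROC AUC"],
)

# Flag metrics where the held-out score falls outside mean +/- 2*std of the
# cross-validation distribution, a crude consistency check.
outside = (combined["holdout"] - combined["cv_mean"]).abs() > 2 * combined["cv_std"]
print(outside)
```

Here neither metric is flagged, which matches the qualitative reading of the table.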
.. GENERATED FROM PYTHON SOURCE LINES 304-306

As expected, our final model achieves better performance, likely thanks to the
larger training set.

.. GENERATED FROM PYTHON SOURCE LINES 308-310

Our final sanity check is to compare the features considered most impactful
between our final model and the cross-validation:

.. GENERATED FROM PYTHON SOURCE LINES 312-328

.. code-block:: Python

    final_coefficients = final_report.inspection.coefficients()
    cv_coefficients = simple_cv_report.inspection.coefficients()

    features_final_coefficients = final_coefficients.frame(select_k=15)["feature"]
    features_cv_coefficients = cv_coefficients.frame(select_k=15)["feature"]

    print(
        f"Most important features available in both models: "
        f"{set(features_final_coefficients).intersection(set(features_cv_coefficients))}"
    )
    print(
        f"Most important features available in final model but not in cross-validation: "
        f"{set(features_final_coefficients).difference(set(features_cv_coefficients))}"
    )

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    Most important features available in both models: {'purpose_education', 'credit_history_no credits/all paid', 'credit_history_all paid', 'checking_status_no checking', 'credit_amount', 'credit_history_critical/other existing credit', 'Intercept', 'purpose_new car', 'checking_status_<0', 'purpose_used car', 'installment_commitment', 'foreign_worker_yes', 'age', 'purpose_retraining'}
    Most important features available in final model but not in cross-validation: {'checking_status_0<=X<200'}

.. GENERATED FROM PYTHON SOURCE LINES 329-331

We can further check whether there is a drastic difference in the ordering by
plotting the features with the largest absolute coefficients:

.. GENERATED FROM PYTHON SOURCE LINES 333-336

.. code-block:: Python

    final_coefficients.plot(select_k=15, sorting_order="descending")
    cv_coefficients.plot(select_k=15, sorting_order="descending")

.. rst-class:: sphx-glr-horizontal


    *

      .. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_006.png
         :alt: Coefficients of LogisticRegression
         :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_006.png
         :class: sphx-glr-multi-img

    *

      .. image-sg:: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_007.png
         :alt: Coefficients of LogisticRegression
         :srcset: /auto_examples/getting_started/images/sphx_glr_plot_getting_started_007.png
         :class: sphx-glr-multi-img

.. GENERATED FROM PYTHON SOURCE LINES 337-338

They seem very similar, so we are done!

.. GENERATED FROM PYTHON SOURCE LINES 340-360

Tracking our work with a skore Project
======================================

Now that we have completed our modeling workflow, we should store our models
in a safe place for future work. Indeed, if this research notebook were
modified, we would no longer be able to relate the current production model to
the code that generated it.

We can use a :class:`skore.Project` to keep track of our experiments. This
makes it easy to organize, retrieve, and compare models over time. Usually
this would be done as you go along during model development, but in the
interest of simplicity we kept it until the end.

We are using Skore Hub (https://skore.probabl.ai/) to store and review our
reports.

.. note::

    Here, we are using Skore Hub to store and analyze the reports that we
    computed. Note that you can also store reports locally by using
    `mode="local"` when creating or loading projects via `skore.Project`.

.. GENERATED FROM PYTHON SOURCE LINES 360-367

.. code-block:: Python

    from skore import login

    login()

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    ╭───────────────────────────────── Login to Skore Hub ─────────────────────────────────╮
    │                                                                                      │
    │ Successfully logged in, using API key.                                               │
    │                                                                                      │
    ╰──────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 402-403

We load or create a hub project:

.. GENERATED FROM PYTHON SOURCE LINES 403-406

.. code-block:: Python

    from skore import Project

    project = Project(f"{WORKSPACE}/{PROJECT}", mode="hub")

.. GENERATED FROM PYTHON SOURCE LINES 407-408

We store our reports with descriptive keys:

.. GENERATED FROM PYTHON SOURCE LINES 408-411

.. code-block:: Python

    project.put("simple_linear_model_cv", simple_cv_report)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/work/skore/skore/skore/venv/lib/python3.13/site-packages/skore/_sklearn/_plot/metrics/confusion_matrix.py:604: FutureWarning: The default of observed=False is deprecated and will be changed to True in a future version of pandas. Pass observed=False to retain current behavior or observed=True to adopt the future default and silence this warning.
      for _, group in self.confusion_matrix.groupby(["split"]):
    Putting simple_linear_model_cv 0:03:01
    Consult your report at https://skore.probabl.ai/skore/example-getting-started-0.14/cross-validations/2365

.. GENERATED FROM PYTHON SOURCE LINES 412-414

.. code-block:: Python

    project.put("advanced_pipeline_cv", advanced_cv_report)

.. rst-class:: sphx-glr-script-out

.. code-block:: none

    /home/runner/work/skore/skore/skore/venv/lib/python3.13/site-packages/skore/_sklearn/_plot/metrics/confusion_matrix.py:604: FutureWarning: The default of observed=False is deprecated and will be changed to True in a future version of pandas. Pass observed=False to retain current behavior or observed=True to adopt the future default and silence this warning.
      for _, group in self.confusion_matrix.groupby(["split"]):
    Putting advanced_pipeline_cv 0:03:07
    Consult your report at https://skore.probabl.ai/skore/example-getting-started-0.14/cross-validations/2371

.. GENERATED FROM PYTHON SOURCE LINES 415-417

In this example, we created a read-only Skore Hub project that you can visit
by clicking on the links above to explore the reports.

.. GENERATED FROM PYTHON SOURCE LINES 419-420

Now we can retrieve a summary of our stored reports:

.. GENERATED FROM PYTHON SOURCE LINES 422-426

.. code-block:: Python

    summary = project.summarize()
    # Uncomment the next line to display the widget in an interactive environment:
    # summary

.. GENERATED FROM PYTHON SOURCE LINES 427-439

.. note::

    Calling `summary` in a Jupyter notebook cell shows the following parallel
    coordinate plot to help you select the models that you want to retrieve:

    .. image:: /_static/images/screenshot_getting_started.png
        :alt: Screenshot of the widget in a Jupyter notebook

    Each line represents a model, and we can select models by clicking on
    lines or dragging on metric axes to filter by performance. In the
    screenshot, we selected only the cross-validation reports; this allows us
    to retrieve exactly those reports programmatically.

.. GENERATED FROM PYTHON SOURCE LINES 441-445

Supposing you selected "Cross-validation" in the "Report type" tab, calling
:meth:`~skore.project._summary.Summary.reports` returns only the
:class:`~skore.CrossValidationReport` objects, which you can directly combine
into a :class:`~skore.ComparisonReport`:

.. GENERATED FROM PYTHON SOURCE LINES 447-452

.. code-block:: Python

    new_report = summary.reports(return_as="comparison")
    new_report.help()


.. GENERATED FROM PYTHON SOURCE LINES 457-471

.. admonition:: Stay tuned!

    This is only the beginning for skore. We welcome your feedback and ideas
    to make it the best tool for end-to-end data science.

    Key benefits of using skore in your ML workflow:

    * Standardized evaluation and comparison of models
    * Rich visualizations and diagnostics
    * Organized experiment tracking
    * Seamless integration with scikit-learn

    Feel free to join our community on Discord or create an issue on GitHub.

.. rst-class:: sphx-glr-timing

**Total running time of the script:** (6 minutes 32.484 seconds)

.. _sphx_glr_download_auto_examples_getting_started_plot_getting_started.py:

.. only:: html

    .. container:: sphx-glr-footer sphx-glr-footer-example

        .. container:: sphx-glr-download sphx-glr-download-jupyter

            :download:`Download Jupyter notebook: plot_getting_started.ipynb <plot_getting_started.ipynb>`

        .. container:: sphx-glr-download sphx-glr-download-python

            :download:`Download Python source code: plot_getting_started.py <plot_getting_started.py>`

        .. container:: sphx-glr-download sphx-glr-download-zip

            :download:`Download zipped: plot_getting_started.zip <plot_getting_started.zip>`

.. only:: html

    .. rst-class:: sphx-glr-signature

        `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_