.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/use_cases/plot_employee_salaries.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_use_cases_plot_employee_salaries.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_use_cases_plot_employee_salaries.py:

.. _example_use_case_employee_salaries:

===============================
Simplified experiment reporting
===============================

This example shows how to leverage skore for reporting model evaluation and
storing the results for further analysis.

.. GENERATED FROM PYTHON SOURCE LINES 13-15

We set a few environment variables to avoid spurious warnings related to
parallelism.

.. GENERATED FROM PYTHON SOURCE LINES 16-21

.. code-block:: Python

    import os

    os.environ["POLARS_ALLOW_FORKING_THREAD"] = "1"
    os.environ["TOKENIZERS_PARALLELISM"] = "true"

.. GENERATED FROM PYTHON SOURCE LINES 22-24

Creating a skore project and loading some data
==============================================

.. GENERATED FROM PYTHON SOURCE LINES 26-28

Let's open a skore project in which we will be able to store artifacts from
our experiments.

.. GENERATED FROM PYTHON SOURCE LINES 29-33

.. code-block:: Python

    import skore

    my_project = skore.Project("my_project")

.. GENERATED FROM PYTHON SOURCE LINES 43-44

We use the employee salaries dataset from skrub, which is a non-trivial,
heterogeneous dataset.

.. GENERATED FROM PYTHON SOURCE LINES 45-50

.. code-block:: Python

    from skrub.datasets import fetch_employee_salaries

    datasets = fetch_employee_salaries()
    df, y = datasets.X, datasets.y

.. GENERATED FROM PYTHON SOURCE LINES 51-53

Let's first have a condensed summary of the input data using a
:class:`skrub.TableReport`.

.. GENERATED FROM PYTHON SOURCE LINES 54-59

.. code-block:: Python

    from skrub import TableReport

    table_report = TableReport(df)
    table_report
.. (The interactive HTML output of the TableReport is rendered here in the
   built documentation.)
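One of the observations we will make below is that ``department`` and
``department_name`` encode exactly the same information. As a small aside (not
part of the original example), a quick pandas cross-check confirms that the two
columns are in one-to-one correspondence:

.. code-block:: Python

    # Each department code maps to exactly one department name, and vice versa.
    assert (df.groupby("department")["department_name"].nunique() == 1).all()
    assert (df.groupby("department_name")["department"].nunique() == 1).all()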
.. GENERATED FROM PYTHON SOURCE LINES 60-83

From the table report, we can make a few observations:

* The data types are heterogeneous: we mainly have categorical and
  date-related features.

* The year of the ``date_first_hired`` column is also present in the ``date``
  column. Hence, we should be careful not to create the same feature twice
  during feature engineering.

* By looking at the "Associations" tab of the table report, we observe that
  two features hold exactly the same information: ``department`` and
  ``department_name``. Hence, during our feature engineering, we could
  potentially drop one of them if the final predictive model is sensitive to
  collinearity.

* When looking at the "Stats" tab, we observe that the ``division`` and
  ``employee_position_title`` features contain a large number of categories.
  We should take this into account in our feature engineering.

We can store the report in the skore project so that we can easily retrieve it
later without having to reload the dataset and recompute the report.

.. GENERATED FROM PYTHON SOURCE LINES 83-85

.. code-block:: Python

    my_project.put("Input data summary", table_report)

.. GENERATED FROM PYTHON SOURCE LINES 86-89

Regarding the target, we are interested in predicting the salary of an
employee given the features above. We therefore have a regression task at
hand.

.. GENERATED FROM PYTHON SOURCE LINES 90-92

.. code-block:: Python

    y

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    0        69222.18
    1        97392.47
    2       104717.28
    3        52734.57
    4        93396.00
              ...
    9223     72094.53
    9224    169543.85
    9225    102736.52
    9226    153747.50
    9227     75484.08
    Name: current_annual_salary, Length: 9228, dtype: float64

.. GENERATED FROM PYTHON SOURCE LINES 93-95

Tree-based model
================

.. GENERATED FROM PYTHON SOURCE LINES 97-109

Let's start by creating a tree-based model using some out-of-the-box tools.

For feature engineering, we use skrub's :class:`~skrub.TableVectorizer`. To
deal with the high cardinality of the categorical features, we use a
:class:`~skrub.TextEncoder`, which relies on a pre-trained language model to
embed the categorical features. Finally, we use a
:class:`~sklearn.ensemble.HistGradientBoostingRegressor` as the base
estimator, which is a rather robust model.

Modelling
^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 109-120

.. code-block:: Python

    from skrub import TableVectorizer, TextEncoder
    from sklearn.ensemble import HistGradientBoostingRegressor
    from sklearn.pipeline import make_pipeline

    model = make_pipeline(
        TableVectorizer(high_cardinality=TextEncoder()),
        HistGradientBoostingRegressor(),
    )
    model
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Pipeline(steps=[('tablevectorizer',
                     TableVectorizer(high_cardinality=TextEncoder())),
                    ('histgradientboostingregressor',
                     HistGradientBoostingRegressor())])
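As an optional aside (not part of the original example), we can first fit the
pipeline on a small subsample to check that it runs end to end before paying
for the full cross-validation. Note that the :class:`~skrub.TextEncoder`
downloads a pre-trained language model on first use, so even this small fit
can take a moment:

.. code-block:: Python

    # Quick smoke test on 100 rows; the fitted result is discarded.
    _ = model.fit(df.head(100), y.head(100))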
.. GENERATED FROM PYTHON SOURCE LINES 121-125

Evaluation
^^^^^^^^^^

Let us compute the cross-validation report for this model using
:class:`skore.CrossValidationReport`:

.. GENERATED FROM PYTHON SOURCE LINES 125-130

.. code-block:: Python

    from skore import CrossValidationReport

    report = CrossValidationReport(estimator=model, X=df, y=y, cv_splitter=5, n_jobs=4)
    report.help()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    ╭───────────── Tools to diagnose estimator HistGradientBoostingRegressor ──────────────╮
    │ CrossValidationReport                                                                 │
    │ ├── .metrics                                                                          │
    │ │   ├── .prediction_error(...)      - Plot the prediction error of a regression      │
    │ │   │                                 model.                                         │
    │ │   ├── .r2(...)               (↗︎)  - Compute the R² score.                          │
    │ │   ├── .rmse(...)             (↘︎)  - Compute the root mean squared error.           │
    │ │   ├── .timings(...)               - Get all measured processing times related      │
    │ │   │                                 to the estimator.                              │
    │ │   ├── .custom_metric(...)         - Compute a custom metric.                       │
    │ │   └── .report_metrics(...)        - Report a set of metrics for our estimator.     │
    │ ├── .cache_predictions(...)         - Cache the predictions for sub-estimators       │
    │ │                                     reports.                                       │
    │ ├── .clear_cache(...)               - Clear the cache.                               │
    │ ├── .get_predictions(...)           - Get estimator's predictions.                   │
    │ └── Attributes                                                                       │
    │     ├── .X                          - The data to fit                                │
    │     ├── .y                          - The target variable to try to predict in      │
    │     │                                 the case of supervised learning               │
    │     ├── .estimator_                 - The cloned or copied estimator                 │
    │     ├── .estimator_name_            - The name of the estimator                      │
    │     ├── .estimator_reports_         - The estimator reports for each split           │
    │     └── .n_jobs                     - Number of jobs to run in parallel              │
    │                                                                                      │
    │                                                                                      │
    │ Legend:                                                                              │
    │ (↗︎) higher is better (↘︎) lower is better                                            │
    ╰──────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 131-132

We cache the predictions for later use.

.. GENERATED FROM PYTHON SOURCE LINES 133-135

.. code-block:: Python

    report.cache_predictions(n_jobs=4)

.. GENERATED FROM PYTHON SOURCE LINES 136-137

We store the report in our skore project.

.. GENERATED FROM PYTHON SOURCE LINES 138-140

.. code-block:: Python

    my_project.put("HGBT model report", report)

.. GENERATED FROM PYTHON SOURCE LINES 141-142

We can now have a look at the performance of the model with some standard
metrics.

.. GENERATED FROM PYTHON SOURCE LINES 143-146

.. code-block:: Python

    report.metrics.report_metrics()
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                 HistGradientBoostingRegressor
                                          mean          std
    Metric
    R²                                0.925649     0.015026
    RMSE                           7925.022613  1086.872675
    Fit time                         19.717581     6.015212
    Predict time                      6.735268     1.976639
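Individual metrics are also exposed as dedicated accessors on the report. As a
small aside (not part of the original example), the ``.rmse`` accessor listed
in the help output above can be called directly; the call is cheap because the
predictions were cached:

.. code-block:: Python

    # Compute a single metric without going through report_metrics().
    report.metrics.rmse()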
.. GENERATED FROM PYTHON SOURCE LINES 147-154

Linear model
============

Now that we have established a baseline with a tree-based model, let us define
a more elaborate linear model: a pipeline with complex feature engineering
that uses a linear model as the base estimator.

.. GENERATED FROM PYTHON SOURCE LINES 156-158

Modelling
^^^^^^^^^

.. GENERATED FROM PYTHON SOURCE LINES 158-207

.. code-block:: Python

    import numpy as np

    from sklearn.compose import make_column_transformer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder, SplineTransformer
    from sklearn.linear_model import RidgeCV

    from skrub import DatetimeEncoder, ToDatetime, DropCols, GapEncoder


    def periodic_spline_transformer(period, n_splines=None, degree=3):
        if n_splines is None:
            n_splines = period
        n_knots = n_splines + 1  # periodic and include_bias is True
        return SplineTransformer(
            degree=degree,
            n_knots=n_knots,
            knots=np.linspace(0, period, n_knots).reshape(n_knots, 1),
            extrapolation="periodic",
            include_bias=True,
        )


    one_hot_features = ["gender", "department_name", "assignment_category"]
    datetime_features = "date_first_hired"

    date_encoder = make_pipeline(
        ToDatetime(),
        DatetimeEncoder(resolution="day", add_weekday=True, add_total_seconds=False),
        DropCols("date_first_hired_year"),
    )

    date_engineering = make_column_transformer(
        (periodic_spline_transformer(12, n_splines=6), ["date_first_hired_month"]),
        (periodic_spline_transformer(31, n_splines=15), ["date_first_hired_day"]),
        (periodic_spline_transformer(7, n_splines=3), ["date_first_hired_weekday"]),
    )

    feature_engineering_date = make_pipeline(date_encoder, date_engineering)

    preprocessing = make_column_transformer(
        (feature_engineering_date, datetime_features),
        (OneHotEncoder(drop="if_binary", handle_unknown="ignore"), one_hot_features),
        (GapEncoder(n_components=100), "division"),
        (GapEncoder(n_components=100), "employee_position_title"),
    )

    model = make_pipeline(preprocessing, RidgeCV(alphas=np.logspace(-3, 3, 100)))
    model
.. (The interactive HTML diagram of the pipeline is rendered here in the
   built documentation.)
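Before walking through the pipeline, a brief aside (not part of the original
example) on the periodic spline transformer defined above: with
``extrapolation="periodic"``, it produces one smooth, period-repeating basis
function per spline, so encoding the twelve months with ``n_splines=6`` yields
six 12-periodic features.

.. code-block:: Python

    # Illustrative only: apply the month transformer to the values 1..12.
    import numpy as np

    months = np.arange(1, 13).reshape(-1, 1)
    spline_features = periodic_spline_transformer(12, n_splines=6).fit_transform(months)
    print(spline_features.shape)  # (12, 6): 6 periodic spline features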
.. GENERATED FROM PYTHON SOURCE LINES 208-220

In the pipeline diagram above, we can see how we performed our feature
engineering:

* For categorical features, we use two approaches: if the number of categories
  is relatively small, we use a `OneHotEncoder`; if the number of categories is
  large, we use a `GapEncoder`, which was designed to deal with
  high-cardinality categorical features.

* Then, we have another transformation to encode the date features. We first
  split the date into multiple features (month, day, and weekday; the redundant
  year is dropped). Then, we apply a periodic spline transformation to each of
  these features to capture their periodicity.

* Finally, we fit a :class:`~sklearn.linear_model.RidgeCV` model.

.. GENERATED FROM PYTHON SOURCE LINES 222-228

Evaluation
^^^^^^^^^^

Now, we want to evaluate this linear model via cross-validation (with 5
folds). For that, we use skore's :class:`~skore.CrossValidationReport` to
investigate the performance of our model.

.. GENERATED FROM PYTHON SOURCE LINES 228-231

.. code-block:: Python

    report = CrossValidationReport(estimator=model, X=df, y=y, cv_splitter=5, n_jobs=4)
    report.help()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    ╭──────────────────────── Tools to diagnose estimator RidgeCV ─────────────────────────╮
    │ CrossValidationReport                                                                 │
    │ ├── .metrics                                                                          │
    │ │   ├── .prediction_error(...)      - Plot the prediction error of a regression      │
    │ │   │                                 model.                                         │
    │ │   ├── .r2(...)               (↗︎)  - Compute the R² score.                          │
    │ │   ├── .rmse(...)             (↘︎)  - Compute the root mean squared error.           │
    │ │   ├── .timings(...)               - Get all measured processing times related      │
    │ │   │                                 to the estimator.                              │
    │ │   ├── .custom_metric(...)         - Compute a custom metric.                       │
    │ │   └── .report_metrics(...)        - Report a set of metrics for our estimator.     │
    │ ├── .cache_predictions(...)         - Cache the predictions for sub-estimators       │
    │ │                                     reports.                                       │
    │ ├── .clear_cache(...)               - Clear the cache.                               │
    │ ├── .get_predictions(...)           - Get estimator's predictions.                   │
    │ └── Attributes                                                                       │
    │     ├── .X                          - The data to fit                                │
    │     ├── .y                          - The target variable to try to predict in      │
    │     │                                 the case of supervised learning               │
    │     ├── .estimator_                 - The cloned or copied estimator                 │
    │     ├── .estimator_name_            - The name of the estimator                      │
    │     ├── .estimator_reports_         - The estimator reports for each split           │
    │     └── .n_jobs                     - Number of jobs to run in parallel              │
    │                                                                                      │
    │                                                                                      │
    │ Legend:                                                                              │
    │ (↗︎) higher is better (↘︎) lower is better                                            │
    ╰──────────────────────────────────────────────────────────────────────────────────────╯

.. GENERATED FROM PYTHON SOURCE LINES 232-240

We observe that the cross-validation report detected that we have a regression
task and provides us with metrics and plots that make sense for our specific
problem at hand.

To accelerate any future computation (e.g. of a metric), we cache the
predictions of our model once and for all. Note that we do not strictly need
to cache the predictions: the report would compute them on the fly (if not
cached) and cache them for us.

.. GENERATED FROM PYTHON SOURCE LINES 242-248

.. code-block:: Python

    import warnings

    with warnings.catch_warnings():
        warnings.simplefilter(action="ignore", category=FutureWarning)
        report.cache_predictions(n_jobs=4)

.. GENERATED FROM PYTHON SOURCE LINES 249-251

To ensure this cross-validation report is not lost, let us save it in our
skore project.

.. GENERATED FROM PYTHON SOURCE LINES 251-253

.. code-block:: Python

    my_project.put("Linear model report", report)

.. GENERATED FROM PYTHON SOURCE LINES 254-255

We can now have a look at the performance of the model with some standard
metrics.
.. GENERATED FROM PYTHON SOURCE LINES 255-257

.. code-block:: Python

    report.metrics.report_metrics(indicator_favorability=True)
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                       RidgeCV               Favorability
                          mean          std
    Metric
    R²                0.756305     0.027521          (↗︎)
    RMSE          14364.672774  1292.627990          (↘︎)
    Fit time         11.054011     2.608963          (↘︎)
    Predict time      1.177964     0.042182          (↘︎)
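As another small aside (not part of the original example), the report also
records how long fitting and predicting took on each split; the ``.timings``
accessor listed in the help output above returns all measured processing
times:

.. code-block:: Python

    # Inspect the processing times gathered during cross-validation.
    report.metrics.timings()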
.. GENERATED FROM PYTHON SOURCE LINES 258-266

Comparing the models
====================

At this point, we may not have been cautious and could already have
overwritten the report and model from our initial (tree-based model) attempt.
Fortunately, since we saved the reports in our skore project, we can easily
recover them. So, let us retrieve those reports.

.. GENERATED FROM PYTHON SOURCE LINES 266-270

.. code-block:: Python

    hgbt_model_report = my_project.get("HGBT model report")
    linear_model_report = my_project.get("Linear model report")

.. GENERATED FROM PYTHON SOURCE LINES 271-273

Now that we have retrieved the reports, we can compare them further, relying
on the usual pandas operations to concatenate the results.

.. GENERATED FROM PYTHON SOURCE LINES 274-285

.. code-block:: Python

    import pandas as pd

    results = pd.concat(
        [
            hgbt_model_report.metrics.report_metrics(),
            linear_model_report.metrics.report_metrics(),
        ],
        axis=1,
    )
    results
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

                 HistGradientBoostingRegressor                      RidgeCV
                                          mean          std          mean          std
    Metric
    R²                                0.925649     0.015026      0.756305     0.027521
    RMSE                           7925.022613  1086.872675  14364.672774  1292.627990
    Fit time                         19.717581     6.015212     11.054011     2.608963
    Predict time                      6.735268     1.976639      1.177964     0.042182
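Since ``results`` is a plain pandas dataframe with a two-level column index
(model, statistic), we can slice it as usual. As a purely illustrative aside
(not part of the original example), here is how to pull out the mean RMSE of
both models:

.. code-block:: Python

    # Cross-section over the second column level, then select the RMSE row.
    results.xs("mean", axis=1, level=1).loc["RMSE"]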
.. GENERATED FROM PYTHON SOURCE LINES 286-291

In addition, if we forgot to compute a specific metric (e.g.
:func:`~sklearn.metrics.mean_absolute_error`), we can easily add it to the
report, without re-training the model and even without re-computing the
predictions, since they are cached internally in the report. This can save a
potentially huge amount of computation time.

.. GENERATED FROM PYTHON SOURCE LINES 292-314

.. code-block:: Python

    from sklearn.metrics import mean_absolute_error

    scoring = ["r2", "rmse", mean_absolute_error]
    scoring_kwargs = {"response_method": "predict"}
    scoring_names = ["R2", "RMSE", "MAE"]
    results = pd.concat(
        [
            hgbt_model_report.metrics.report_metrics(
                scoring=scoring,
                scoring_kwargs=scoring_kwargs,
                scoring_names=scoring_names,
            ),
            linear_model_report.metrics.report_metrics(
                scoring=scoring,
                scoring_kwargs=scoring_kwargs,
                scoring_names=scoring_names,
            ),
        ],
        axis=1,
    )
    results
.. rst-class:: sphx-glr-script-out

 .. code-block:: none

            HistGradientBoostingRegressor                      RidgeCV
                                     mean          std          mean          std
    Metric
    R2                           0.925649     0.015026      0.756305     0.027521
    RMSE                      7925.022613  1086.872675  14364.672774  1292.627990
    MAE                       4407.990704   185.681370  10066.841959   427.165646
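Instead of concatenating dataframes by hand, skore also ships a dedicated
comparison helper. As a minimal, hedged sketch (assuming
:class:`skore.ComparisonReport` accepts a list of the reports to compare and
exposes the same ``metrics`` accessor as the other reports):

.. code-block:: Python

    from skore import ComparisonReport

    # Compare the two cross-validation reports side by side.
    comparison = ComparisonReport(reports=[hgbt_model_report, linear_model_report])
    comparison.metrics.report_metrics()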
.. GENERATED FROM PYTHON SOURCE LINES 315-319

.. note::
    We could also have used :class:`skore.ComparisonReport` to compare
    estimator reports, as demonstrated in :ref:`example_feature_importance`.

.. GENERATED FROM PYTHON SOURCE LINES 321-324

Finally, we can even get the individual :class:`~skore.EstimatorReport` for
each fold from the cross-validation to make further analysis. Here, we plot
the actual vs predicted values for each fold.

.. GENERATED FROM PYTHON SOURCE LINES 325-340

.. code-block:: Python

    from itertools import zip_longest

    import matplotlib.pyplot as plt

    fig, axs = plt.subplots(ncols=2, nrows=3, figsize=(12, 18))

    for split_idx, (ax, estimator_report) in enumerate(
        zip_longest(axs.flatten(), linear_model_report.estimator_reports_)
    ):
        if estimator_report is None:
            ax.axis("off")
            continue
        estimator_report.metrics.prediction_error().plot(kind="actual_vs_predicted", ax=ax)
        ax.set_title(f"Split #{split_idx + 1}")
        ax.legend(loc="lower right")

    plt.tight_layout()

.. image-sg:: /auto_examples/use_cases/images/sphx_glr_plot_employee_salaries_001.png
   :alt: Split #1, Split #2, Split #3, Split #4, Split #5
   :srcset: /auto_examples/use_cases/images/sphx_glr_plot_employee_salaries_001.png
   :class: sphx-glr-single-img

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (1 minute 51.009 seconds)


.. _sphx_glr_download_auto_examples_use_cases_plot_employee_salaries.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_employee_salaries.ipynb <plot_employee_salaries.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_employee_salaries.py <plot_employee_salaries.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_employee_salaries.zip <plot_employee_salaries.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_