.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/use_cases/plot_employee_salaries.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_use_cases_plot_employee_salaries.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_use_cases_plot_employee_salaries.py:

.. _example_use_case_employee_salaries:

===============================
Simplified experiment reporting
===============================

This example shows how to leverage skore for reporting model evaluation and
storing the results for further analysis.

.. GENERATED FROM PYTHON SOURCE LINES 13-15

We set some environment variables to avoid spurious warnings related to
parallelism.

.. GENERATED FROM PYTHON SOURCE LINES 16-21

.. code-block:: Python

    import os

    os.environ["POLARS_ALLOW_FORKING_THREAD"] = "1"
    os.environ["TOKENIZERS_PARALLELISM"] = "true"

.. GENERATED FROM PYTHON SOURCE LINES 22-24

Creating a skore project and loading some data
==============================================

.. GENERATED FROM PYTHON SOURCE LINES 26-28

Let's open a skore project in which we will be able to store artifacts from
our experiments.

.. GENERATED FROM PYTHON SOURCE LINES 29-33

.. code-block:: Python

    import skore

    project = skore.open("my_project", create=True)

.. GENERATED FROM PYTHON SOURCE LINES 34-35

We use a non-trivial dataset shipped with skrub.

.. GENERATED FROM PYTHON SOURCE LINES 36-41

.. code-block:: Python

    from skrub.datasets import fetch_employee_salaries

    datasets = fetch_employee_salaries()
    df, y = datasets.X, datasets.y

.. GENERATED FROM PYTHON SOURCE LINES 42-44

Let's first have a condensed summary of the input data using a
:class:`skrub.TableReport`.

.. GENERATED FROM PYTHON SOURCE LINES 45-50

.. code-block:: Python

    from skrub import TableReport

    table_report = TableReport(df)
    table_report

.. (the interactive skrub table report, which requires JavaScript to display,
   is rendered here)
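
For a quick, plain-pandas complement to the interactive report (a minimal
sanity check, independent of skrub), we can glance at the dimensions and the
first rows of the frame:

.. code-block:: Python

    # raw shape and a few rows of the input data
    print(df.shape)
    df.head()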

.. GENERATED FROM PYTHON SOURCE LINES 51-69

From the table report, we can make a few observations:

* The type of data is heterogeneous: we mainly have categorical and
  date-related features.

* The year in the ``date_first_hired`` column is also present in the
  ``year_first_hired`` column. Hence, we should beware of not creating the
  same feature twice during the feature engineering.

* By looking at the "Associations" tab of the table report, we observe that
  two features hold exactly the same information: ``department`` and
  ``department_name``. Hence, during our feature engineering, we could
  potentially drop one of them if the final predictive model is sensitive to
  collinearity.

We can store the report in the skore project so that we can easily retrieve
it later without having to reload the dataset and recompute the report.

.. GENERATED FROM PYTHON SOURCE LINES 69-71

.. code-block:: Python

    project.put("Input data summary", table_report)

.. GENERATED FROM PYTHON SOURCE LINES 72-75

Regarding the target, we are interested in predicting the salary of an
employee given the features above. We therefore have a regression task at
hand.

.. GENERATED FROM PYTHON SOURCE LINES 76-78

.. code-block:: Python

    y

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    0        69222.18
    1        97392.47
    2       104717.28
    3        52734.57
    4        93396.00
              ...
    9223     72094.53
    9224    169543.85
    9225    102736.52
    9226    153747.50
    9227     75484.08
    Name: current_annual_salary, Length: 9228, dtype: float64

.. GENERATED FROM PYTHON SOURCE LINES 79-81

Modelling
=========

.. GENERATED FROM PYTHON SOURCE LINES 83-85

In a first attempt, we define a rather complex predictive model that uses a
linear model as the final estimator.

.. GENERATED FROM PYTHON SOURCE LINES 86-139

.. code-block:: Python

    import numpy as np

    from sklearn.compose import make_column_transformer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder, SplineTransformer
    from sklearn.linear_model import RidgeCV
    from skrub import DatetimeEncoder, ToDatetime, DropCols


    def periodic_spline_transformer(period, n_splines=None, degree=3):
        if n_splines is None:
            n_splines = period
        n_knots = n_splines + 1  # periodic splines with include_bias=True
        return SplineTransformer(
            degree=degree,
            n_knots=n_knots,
            knots=np.linspace(0, period, n_knots).reshape(n_knots, 1),
            extrapolation="periodic",
            include_bias=True,
        )


    categorical_features = [
        "gender",
        "department_name",
        "division",
        "assignment_category",
        "employee_position_title",
        "year_first_hired",
    ]
    datetime_features = "date_first_hired"

    date_encoder = make_pipeline(
        ToDatetime(),
        DatetimeEncoder(resolution="day", add_weekday=True, add_total_seconds=False),
        DropCols("date_first_hired_year"),
    )

    date_engineering = make_column_transformer(
        (periodic_spline_transformer(12, n_splines=6), ["date_first_hired_month"]),
        (periodic_spline_transformer(31, n_splines=15), ["date_first_hired_day"]),
        (periodic_spline_transformer(7, n_splines=3), ["date_first_hired_weekday"]),
    )

    feature_engineering_date = make_pipeline(date_encoder, date_engineering)

    preprocessing = make_column_transformer(
        (feature_engineering_date, datetime_features),
        (OneHotEncoder(drop="if_binary", handle_unknown="ignore"), categorical_features),
    )

    model = make_pipeline(preprocessing, RidgeCV(alphas=np.logspace(-3, 3, 100)))
    model

.. (the interactive HTML diagram of the pipeline, i.e. the ColumnTransformer
   preprocessing step followed by the RidgeCV estimator, is rendered here)

.. GENERATED FROM PYTHON SOURCE LINES 140-155

In the diagram above, we can see how we performed our feature engineering:

* For categorical features, we use a
  :class:`~sklearn.preprocessing.OneHotEncoder`. From the previous data
  exploration with the :class:`~skrub.TableReport` ("Stats" tab), one may
  have looked at the number of unique values and observed that some features
  have a large cardinality. In such cases, one-hot encoding might not be the
  best choice, but this is our starting point to get the ball rolling.

* Then, we have another transformation to encode the date features. We first
  split the date into multiple features (month, day, and weekday; the year is
  dropped since it is already available in ``year_first_hired``). Then, we
  apply a periodic spline transformation to each of these features to capture
  their periodicity.

* Finally, we fit a :class:`~sklearn.linear_model.RidgeCV` model.

.. GENERATED FROM PYTHON SOURCE LINES 157-166

Model evaluation using :class:`skore.CrossValidationReport`
============================================================

First model
^^^^^^^^^^^

Now, we want to evaluate this complex model via cross-validation (with 5
folds). For that, we use skore's :class:`~skore.CrossValidationReport` to
investigate the performance of our model.

.. GENERATED FROM PYTHON SOURCE LINES 166-171

.. code-block:: Python

    from skore import CrossValidationReport

    report = CrossValidationReport(estimator=model, X=df, y=y, cv_splitter=5)
    report.help()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Processing cross-validation ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% for RidgeCV

    Tools to diagnose estimator RidgeCV
    -----------------------------------
    report
    ├── .metrics
    │   ├── .r2(...)                (↗︎)  - Compute the R² score.
    │   ├── .rmse(...)              (↘︎)  - Compute the root mean squared error.
    │   ├── .custom_metric(...)          - Compute a custom metric.
    │   ├── .report_metrics(...)         - Report a set of metrics for our estimator.
    │   └── .plot
    │       └── .prediction_error(...)   - Plot the prediction error of a
    │                                      regression model.
    ├── .cache_predictions(...)          - Cache the predictions for sub-estimators
    │                                      reports.
    ├── .clear_cache(...)                - Clear the cache.
    └── Attributes
        ├── .X
        ├── .y
        ├── .estimator_
        ├── .estimator_name_
        ├── .estimator_reports_
        └── .n_jobs

    Legend:
    (↗︎) higher is better    (↘︎) lower is better

.. GENERATED FROM PYTHON SOURCE LINES 172-180

We observe that the cross-validation report detected that we have a
regression task and provides us with metrics and plots that make sense for
our specific problem at hand.

To accelerate any future computation (e.g. of a metric), we cache the
predictions of our model once and for all. Note that we don't strictly need
to cache the predictions: the report would compute them on the fly (if not
cached) and cache them for us.

.. GENERATED FROM PYTHON SOURCE LINES 182-190

.. code-block:: Python

    import warnings

    with warnings.catch_warnings():
        # catch the warnings raised by the OneHotEncoder for seeing unknown
        # categories at transform time
        warnings.simplefilter(action="ignore", category=UserWarning)
        report.cache_predictions(n_jobs=3)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    /home/thomas/Documents/workspace/probabl/skore/.venv/lib/python3.12/site-packages/sklearn/pipeline.py:62: FutureWarning: This Pipeline instance is not fitted yet. Call 'fit' with appropriate arguments before using other methods such as transform, predict, etc. This will raise an error in 1.8 instead of the current warning.
    [... the same FutureWarning is repeated for each cross-validation fit ...]
    Cross-validation predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
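
Since the predictions are now cached, computing a single metric is cheap. As
a minimal sketch, using the ``.metrics.r2`` accessor listed in the help tree
above:

.. code-block:: Python

    # computed from the cached predictions; nothing is refitted
    report.metrics.r2()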

.. GENERATED FROM PYTHON SOURCE LINES 191-192

To avoid losing this cross-validation report, let's store it in our skore
project.

.. GENERATED FROM PYTHON SOURCE LINES 193-195

.. code-block:: Python

    project.put("Linear model report", report)

.. GENERATED FROM PYTHON SOURCE LINES 196-197

We can now have a look at the performance of the model with some standard
metrics.

.. GENERATED FROM PYTHON SOURCE LINES 197-199

.. code-block:: Python

    report.metrics.report_metrics(aggregate=["mean", "std"])

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Compute metric for each split ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%

    Metric          R² (↗︎)     RMSE (↘︎)
    RidgeCV  mean  0.897318  9293.149793
             std   0.026105  1478.534085
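
The ``aggregate`` argument is optional. As a hedged sketch, assuming that
leaving it out returns the raw per-split scores, we can inspect the
fold-to-fold variability directly:

.. code-block:: Python

    # per-split metrics instead of the mean/std aggregation
    report.metrics.report_metrics()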

.. GENERATED FROM PYTHON SOURCE LINES 200-211

Second model
^^^^^^^^^^^^

Now that we have our first baseline model, we can try an out-of-the-box
model: skrub's :class:`~skrub.TableVectorizer`, which does the feature
engineering for us. To deal with the high cardinality of the categorical
features, we use a :class:`~skrub.TextEncoder`, which relies on a pre-trained
language model to embed the categorical features. Finally, we use a
:class:`~sklearn.ensemble.HistGradientBoostingRegressor` as the predictor, a
rather robust model.

.. GENERATED FROM PYTHON SOURCE LINES 211-221

.. code-block:: Python

    from skrub import TableVectorizer, TextEncoder
    from sklearn.ensemble import HistGradientBoostingRegressor
    from sklearn.pipeline import make_pipeline

    model = make_pipeline(
        TableVectorizer(high_cardinality=TextEncoder()),
        HistGradientBoostingRegressor(),
    )
    model

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Pipeline(steps=[('tablevectorizer',
                     TableVectorizer(high_cardinality=TextEncoder())),
                    ('histgradientboostingregressor',
                     HistGradientBoostingRegressor())])
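
As a side note, if downloading and running a language model is too heavy for
your setup, a cheaper sketch of the same idea swaps the
:class:`~skrub.TextEncoder` for skrub's hash-based
:class:`~skrub.MinHashEncoder`:

.. code-block:: Python

    from skrub import MinHashEncoder, TableVectorizer
    from sklearn.ensemble import HistGradientBoostingRegressor
    from sklearn.pipeline import make_pipeline

    # same pipeline shape, but high-cardinality columns are encoded with
    # cheap min-hash signatures instead of language-model embeddings
    cheap_model = make_pipeline(
        TableVectorizer(high_cardinality=MinHashEncoder()),
        HistGradientBoostingRegressor(),
    )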

.. GENERATED FROM PYTHON SOURCE LINES 222-223

Let's compute the cross-validation report for this model.

.. GENERATED FROM PYTHON SOURCE LINES 224-227

.. code-block:: Python

    report = CrossValidationReport(estimator=model, X=df, y=y, cv_splitter=5, n_jobs=3)
    report.help()

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Processing cross-validation ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% for HistGradientBoostingRegressor

    Tools to diagnose estimator HistGradientBoostingRegressor
    ----------------------------------------------------------
    report
    ├── .metrics
    │   ├── .r2(...)                (↗︎)  - Compute the R² score.
    │   ├── .rmse(...)              (↘︎)  - Compute the root mean squared error.
    │   ├── .custom_metric(...)          - Compute a custom metric.
    │   ├── .report_metrics(...)         - Report a set of metrics for our estimator.
    │   └── .plot
    │       └── .prediction_error(...)   - Plot the prediction error of a
    │                                      regression model.
    ├── .cache_predictions(...)          - Cache the predictions for sub-estimators
    │                                      reports.
    ├── .clear_cache(...)                - Clear the cache.
    └── Attributes
        ├── .X
        ├── .y
        ├── .estimator_
        ├── .estimator_name_
        ├── .estimator_reports_
        └── .n_jobs

    Legend:
    (↗︎) higher is better    (↘︎) lower is better

.. GENERATED FROM PYTHON SOURCE LINES 228-229

We cache the predictions for later use.

.. GENERATED FROM PYTHON SOURCE LINES 230-232

.. code-block:: Python

    report.cache_predictions(n_jobs=3)

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Cross-validation predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Caching predictions ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%

.. GENERATED FROM PYTHON SOURCE LINES 233-234

We store the report in our skore project.

.. GENERATED FROM PYTHON SOURCE LINES 235-237

.. code-block:: Python

    project.put("HGBDT model report", report)

.. GENERATED FROM PYTHON SOURCE LINES 238-239

We can now have a look at the performance of the model with some standard
metrics.

.. GENERATED FROM PYTHON SOURCE LINES 240-242

.. code-block:: Python

    report.metrics.report_metrics(aggregate=["mean", "std"])

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Compute metric for each split ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%

    Metric                                R² (↗︎)     RMSE (↘︎)
    HistGradientBoostingRegressor  mean  0.920948  8175.659520
                                   std   0.014001  1005.191733
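
Since the predictions are cached, we can also visualize the prediction error
of this model directly from the cross-validation report. A minimal sketch
using the ``.metrics.plot.prediction_error`` accessor listed in the help tree
above:

.. code-block:: Python

    # actual vs. predicted salaries, computed from the cached predictions
    report.metrics.plot.prediction_error(kind="actual_vs_predicted")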

.. GENERATED FROM PYTHON SOURCE LINES 243-249

Investigating the models
^^^^^^^^^^^^^^^^^^^^^^^^

At this stage, we might not have been careful and may have already
overwritten the report and model from our first attempt. Fortunately, because
we stored the reports in our skore project, we can easily retrieve them. So
let's retrieve the reports.

.. GENERATED FROM PYTHON SOURCE LINES 249-252

.. code-block:: Python

    linear_model_report = project.get("Linear model report")
    hgbdt_model_report = project.get("HGBDT model report")

.. GENERATED FROM PYTHON SOURCE LINES 253-255

Now that we have retrieved the reports, we can make further comparisons using
standard pandas operations to concatenate the results.

.. GENERATED FROM PYTHON SOURCE LINES 256-266

.. code-block:: Python

    import pandas as pd

    results = pd.concat(
        [
            linear_model_report.metrics.report_metrics(aggregate=["mean", "std"]),
            hgbdt_model_report.metrics.report_metrics(aggregate=["mean", "std"]),
        ]
    )
    results

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Compute metric for each split ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Compute metric for each split ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%

    Metric                                R² (↗︎)     RMSE (↘︎)
    RidgeCV                        mean  0.897318  9293.149793
                                   std   0.026105  1478.534085
    HistGradientBoostingRegressor  mean  0.920948  8175.659520
                                   std   0.014001  1005.191733
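
If we only care about the point estimates, a small pandas sketch (assuming
the two-level row index ``(estimator, aggregation)`` shown above) keeps only
the mean rows:

.. code-block:: Python

    # select the "mean" rows from the aggregated results
    results.xs("mean", level=1)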

.. GENERATED FROM PYTHON SOURCE LINES 267-272

In addition, if we forgot to compute a specific metric (e.g.
:func:`~sklearn.metrics.mean_absolute_error`), we can easily add it to the
report, without re-training the model and even without re-computing the
predictions, since they are cached internally in the report. This saves
potentially huge computation time.

.. GENERATED FROM PYTHON SOURCE LINES 273-296

.. code-block:: Python

    from sklearn.metrics import mean_absolute_error

    scoring = ["r2", "rmse", mean_absolute_error]
    scoring_kwargs = {"response_method": "predict"}
    scoring_names = ["R2", "RMSE", "MAE"]
    results = pd.concat(
        [
            linear_model_report.metrics.report_metrics(
                scoring=scoring,
                scoring_kwargs=scoring_kwargs,
                scoring_names=scoring_names,
                aggregate=["mean", "std"],
            ),
            hgbdt_model_report.metrics.report_metrics(
                scoring=scoring,
                scoring_kwargs=scoring_kwargs,
                scoring_names=scoring_names,
                aggregate=["mean", "std"],
            ),
        ]
    )
    results

.. rst-class:: sphx-glr-script-out

 .. code-block:: none

    Compute metric for each split ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
    Compute metric for each split ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%

    Metric                                     R2         RMSE          MAE
    RidgeCV                        mean  0.897318  9293.149793  5022.762482
                                   std   0.026105  1478.534085   191.509546
    HistGradientBoostingRegressor  mean  0.920948  8175.659520  4692.369211
                                   std   0.014001  1005.191733   226.298663
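
The same pattern extends to any other scikit-learn metric function. For
instance, a sketch adding the outlier-robust
:func:`~sklearn.metrics.median_absolute_error`, still reusing the cached
predictions:

.. code-block:: Python

    from sklearn.metrics import median_absolute_error

    # one more metric, without refitting or re-computing predictions
    linear_model_report.metrics.report_metrics(
        scoring=[median_absolute_error],
        scoring_kwargs={"response_method": "predict"},
        aggregate=["mean", "std"],
    )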

.. GENERATED FROM PYTHON SOURCE LINES 297-300

Finally, we can even get the individual :class:`~skore.EstimatorReport` for
each fold from the cross-validation for further analysis. Here, we plot the
actual vs predicted values for each fold.

.. GENERATED FROM PYTHON SOURCE LINES 301-316

.. code-block:: Python

    from itertools import zip_longest
    import matplotlib.pyplot as plt

    fig, axs = plt.subplots(ncols=2, nrows=3, figsize=(12, 18))
    for split_idx, (ax, estimator_report) in enumerate(
        zip_longest(axs.flatten(), linear_model_report.estimator_reports_)
    ):
        if estimator_report is None:
            ax.axis("off")
            continue
        estimator_report.metrics.plot.prediction_error(kind="actual_vs_predicted", ax=ax)
        ax.set_title(f"Split #{split_idx + 1}")
        ax.legend(loc="lower right")
    plt.tight_layout()

.. image-sg:: /auto_examples/use_cases/images/sphx_glr_plot_employee_salaries_001.png
   :alt: Split #1, Split #2, Split #3, Split #4, Split #5
   :srcset: /auto_examples/use_cases/images/sphx_glr_plot_employee_salaries_001.png
   :class: sphx-glr-single-img

.. GENERATED FROM PYTHON SOURCE LINES 317-322

Cleaning up the project
-----------------------

Let's clear the skore project (to avoid any conflict with other documentation
examples).

.. GENERATED FROM PYTHON SOURCE LINES 324-325

.. code-block:: Python

    project.clear()

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (4 minutes 7.740 seconds)

.. _sphx_glr_download_auto_examples_use_cases_plot_employee_salaries.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_employee_salaries.ipynb <plot_employee_salaries.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_employee_salaries.py <plot_employee_salaries.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_employee_salaries.zip <plot_employee_salaries.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_