Note
Go to the end to download the full example code.
Skore: getting started#
This guide illustrates how to use skore through a complete machine learning workflow for binary classification:
- Set up a proper experiment with training and test data
- Develop and evaluate multiple models using cross-validation
- Compare models to select the best one
- Validate the final model on held-out data
- Track and organize your machine learning results
Throughout this guide, we will see how skore helps you:
- Avoid common pitfalls with smart diagnostics
- Quickly get rich insights into model performance
- Organize and track your experiments
Setting up our binary classification problem#
Let's start by loading the German credit dataset, a classic binary classification problem where we predict the customer's credit risk ("good" or "bad").
This dataset contains various features about credit applicants, including personal information, credit history, and loan details.
import pandas as pd
import skore
from sklearn.datasets import fetch_openml
from skrub import TableReport
german_credit = fetch_openml(data_id=31, as_frame=True, parser="pandas")
X, y = german_credit.data, german_credit.target
TableReport(german_credit.frame)
| | checking_status | duration | credit_history | purpose | credit_amount | savings_status | employment | installment_commitment | personal_status | other_parties | residence_since | property_magnitude | age | other_payment_plans | housing | existing_credits | job | num_dependents | own_telephone | foreign_worker | class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | <0 | 6 | critical/other existing credit | radio/tv | 1,169 | no known savings | >=7 | 4 | male single | none | 4 | real estate | 67 | none | own | 2 | skilled | 1 | yes | yes | good |
| 1 | 0<=X<200 | 48 | existing paid | radio/tv | 5,951 | <100 | 1<=X<4 | 2 | female div/dep/mar | none | 2 | real estate | 22 | none | own | 1 | skilled | 1 | none | yes | bad |
| 2 | no checking | 12 | critical/other existing credit | education | 2,096 | <100 | 4<=X<7 | 2 | male single | none | 3 | real estate | 49 | none | own | 1 | unskilled resident | 2 | none | yes | good |
| 3 | <0 | 42 | existing paid | furniture/equipment | 7,882 | <100 | 4<=X<7 | 2 | male single | guarantor | 4 | life insurance | 45 | none | for free | 1 | skilled | 2 | none | yes | good |
| 4 | <0 | 24 | delayed previously | new car | 4,870 | <100 | 1<=X<4 | 3 | male single | none | 4 | no known property | 53 | none | for free | 2 | skilled | 2 | none | yes | bad |
| 995 | no checking | 12 | existing paid | furniture/equipment | 1,736 | <100 | 4<=X<7 | 3 | female div/dep/mar | none | 4 | real estate | 31 | none | own | 1 | unskilled resident | 1 | none | yes | good |
| 996 | <0 | 30 | existing paid | used car | 3,857 | <100 | 1<=X<4 | 4 | male div/sep | none | 4 | life insurance | 40 | none | own | 1 | high qualif/self emp/mgmt | 1 | yes | yes | good |
| 997 | no checking | 12 | existing paid | radio/tv | 804 | <100 | >=7 | 4 | male single | none | 4 | car | 38 | none | own | 1 | skilled | 1 | none | yes | good |
| 998 | <0 | 45 | existing paid | radio/tv | 1,845 | <100 | 1<=X<4 | 4 | male single | none | 4 | no known property | 23 | none | for free | 1 | skilled | 1 | yes | yes | bad |
| 999 | 0<=X<200 | 45 | critical/other existing credit | used car | 4,576 | 100<=X<500 | unemployed | 3 | male single | none | 4 | car | 27 | none | own | 1 | skilled | 1 | none | yes | good |
In the interactive report, each column also gets a detailed card with its dtype, null and unique value counts, most frequent categories, and summary statistics. The column-level overview is condensed below:
| Column | Column name | dtype | Is sorted | Null values | Unique values | Mean | Std | Min | Median | Max |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | checking_status | CategoricalDtype | False | 0 (0.0%) | 4 (0.4%) | | | | | |
| 1 | duration | Int64DType | False | 0 (0.0%) | 33 (3.3%) | 20.9 | 12.1 | 4 | 18 | 72 |
| 2 | credit_history | CategoricalDtype | False | 0 (0.0%) | 5 (0.5%) | | | | | |
| 3 | purpose | CategoricalDtype | False | 0 (0.0%) | 10 (1.0%) | | | | | |
| 4 | credit_amount | Int64DType | False | 0 (0.0%) | 921 (92.1%) | 3.27e+03 | 2.82e+03 | 250 | 2,320 | 18,424 |
| 5 | savings_status | CategoricalDtype | False | 0 (0.0%) | 5 (0.5%) | | | | | |
| 6 | employment | CategoricalDtype | False | 0 (0.0%) | 5 (0.5%) | | | | | |
| 7 | installment_commitment | Int64DType | False | 0 (0.0%) | 4 (0.4%) | 2.97 | 1.12 | 1 | 3 | 4 |
| 8 | personal_status | CategoricalDtype | False | 0 (0.0%) | 4 (0.4%) | | | | | |
| 9 | other_parties | CategoricalDtype | False | 0 (0.0%) | 3 (0.3%) | | | | | |
| 10 | residence_since | Int64DType | False | 0 (0.0%) | 4 (0.4%) | 2.85 | 1.10 | 1 | 3 | 4 |
| 11 | property_magnitude | CategoricalDtype | False | 0 (0.0%) | 4 (0.4%) | | | | | |
| 12 | age | Int64DType | False | 0 (0.0%) | 53 (5.3%) | 35.5 | 11.4 | 19 | 33 | 75 |
| 13 | other_payment_plans | CategoricalDtype | False | 0 (0.0%) | 3 (0.3%) | | | | | |
| 14 | housing | CategoricalDtype | False | 0 (0.0%) | 3 (0.3%) | | | | | |
| 15 | existing_credits | Int64DType | False | 0 (0.0%) | 4 (0.4%) | 1.41 | 0.578 | 1 | 1 | 4 |
| 16 | job | CategoricalDtype | False | 0 (0.0%) | 4 (0.4%) | | | | | |
| 17 | num_dependents | Int64DType | False | 0 (0.0%) | 2 (0.2%) | 1.16 | 0.362 | 1 | 1 | 2 |
| 18 | own_telephone | CategoricalDtype | False | 0 (0.0%) | 2 (0.2%) | | | | | |
| 19 | foreign_worker | CategoricalDtype | False | 0 (0.0%) | 2 (0.2%) | | | | | |
| 20 | class | CategoricalDtype | False | 0 (0.0%) | 2 (0.2%) | | | | | |
The report also lists the strongest pairwise associations between columns:
| Column 1 | Column 2 | Cramér's V | Pearson's Correlation |
|---|---|---|---|
| property_magnitude | housing | 0.553 | |
| job | own_telephone | 0.426 | |
| credit_history | existing_credits | 0.378 | |
| checking_status | class | 0.352 | |
| employment | job | 0.311 | |
| age | num_dependents | 0.309 | 0.118 |
| personal_status | num_dependents | 0.284 | |
| duration | credit_amount | 0.281 | 0.625 |
| age | housing | 0.279 | |
| credit_amount | own_telephone | 0.278 | |
| employment | residence_since | 0.261 | |
| credit_history | class | 0.248 | |
| residence_since | housing | 0.237 | |
| employment | age | 0.236 | |
| credit_amount | job | 0.229 | |
| duration | class | 0.224 | |
| purpose | own_telephone | 0.221 | |
| duration | foreign_worker | 0.216 | |
| credit_amount | property_magnitude | 0.216 | |
| credit_history | other_payment_plans | 0.215 |
Creating our experiment and held-out sets#
We will use skore's enhanced train_test_split() function to create our experiment set
and a held-out test set. The experiment set will be used for model development and
cross-validation, while the held-out set will only be used at the end to validate
our final model.
Unlike scikit-learn's train_test_split(), skore's version provides helpful diagnostics
about potential issues with your data split, such as class imbalance.
X_experiment, X_holdout, y_experiment, y_holdout = skore.train_test_split(
X, y, random_state=42
)
HighClassImbalanceTooFewExamplesWarning:
It seems that you have a classification problem with at least one class with fewer than
100 examples in the test set. In this case, using train_test_split may not be a good
idea because of high variability in the scores obtained on the test set. We suggest
three options to tackle this challenge: you can increase test_size, collect more data,
or use skore's CrossValidationReport with the `splitter` parameter of your choice.

ShuffleTrueWarning:
We detected that the `shuffle` parameter is set to `True` either explicitly or from its
default value. In case of time-ordered events (even if they are independent), this will
result in inflated model performance evaluation because natural drift will not be taken
into account. We recommend setting the shuffle parameter to `False` in order to ensure
the evaluation process is really representative of your production release process.
skore tells us that we have a class-imbalance issue with our data, which we can confirm
with the TableReport above by clicking on the "class" column and looking at the
class distribution: there are only 300 examples where the target is "bad".
The second warning concerns time-ordered data; our data does not contain any time-ordered columns, so we can safely ignore it.
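Alternatively, as a quick programmatic check (this snippet is ours, not part of the original example), we can confirm the class distribution directly from the target series:
# Count examples per class: the German credit data contains 700 "good"
# and 300 "bad" examples, matching the imbalance flagged above.
y.value_counts()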
Model development with cross-validation#
We will investigate two different families of models using cross-validation.
- A LogisticRegression, which is a linear model
- A RandomForestClassifier, which is an ensemble of decision trees.
In both cases, we rely on skrub.tabular_pipeline() to choose the proper
preprocessing depending on the kind of model.
Cross-validation is necessary to get a more reliable estimate of model performance.
skore makes it easy through skore.CrossValidationReport.
Model no. 1: Logistic regression with preprocessing#
Our first model will be a linear model, with automatic preprocessing of non-numeric
data. Under the hood, skrub's TableVectorizer will adapt the
preprocessing based on our choice to use a linear model.
from sklearn.linear_model import LogisticRegression
from skrub import tabular_pipeline
simple_model = tabular_pipeline(LogisticRegression())
simple_model
Pipeline(steps=[('tablevectorizer',
                 TableVectorizer(datetime=DatetimeEncoder(periodic_encoding='spline'))),
                ('simpleimputer', SimpleImputer(add_indicator=True)),
                ('squashingscaler', SquashingScaler(max_absolute_value=5)),
                ('logisticregression', LogisticRegression())])
We now cross-validate the model with CrossValidationReport.
from skore import CrossValidationReport
simple_cv_report = CrossValidationReport(
simple_model,
X=X_experiment,
y=y_experiment,
pos_label="good",
splitter=5,
)
Skore reports allow us to structure the statistical information
we look for when experimenting with predictive models. First, the
help() method shows us all its available methods
and attributes, with the knowledge that our model was trained for classification:
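The listing below is the output of the report's help() method, presumably called as:
# Display the available methods and attributes of the cross-validation report.
simple_cv_report.help()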
Tools to diagnose estimator LogisticRegression
CrossValidationReport
├── .data
│   └── .analyze(...)             - Plot dataset statistics.
├── .metrics
│   ├── .accuracy(...)            (↗︎) - Compute the accuracy score.
│   ├── .brier_score(...)         (↘︎) - Compute the Brier score.
│   ├── .confusion_matrix(...)    - Plot the confusion matrix.
│   ├── .log_loss(...)            (↘︎) - Compute the log loss.
│   ├── .precision(...)           (↗︎) - Compute the precision score.
│   ├── .precision_recall(...)    - Plot the precision-recall curve.
│   ├── .recall(...)              (↗︎) - Compute the recall score.
│   ├── .roc(...)                 - Plot the ROC curve.
│   ├── .roc_auc(...)             (↗︎) - Compute the ROC AUC score.
│   ├── .timings(...)             - Get all measured processing times related to the estimator.
│   ├── .custom_metric(...)       - Compute a custom metric.
│   └── .summarize(...)           - Report a set of metrics for our estimator.
├── .inspection
│   └── .coefficients(...)        - Retrieve the coefficients across splits, including the intercept.
├── .cache_predictions(...)       - Cache the predictions for sub-estimators reports.
├── .clear_cache(...)             - Clear the cache.
├── .create_estimator_report(...) - Create an estimator report from the cross-validation report.
├── .get_predictions(...)         - Get estimator's predictions.
└── Attributes
    ├── .X                        - The data to fit
    ├── .y                        - The target variable to try to predict in the case of supervised learning
    ├── .estimator                - Estimator to make the cross-validation report from
    ├── .estimator_               - The cloned or copied estimator
    ├── .estimator_name_          - The name of the estimator
    ├── .estimator_reports_       - The estimator reports for each split
    ├── .ml_task                  - No description available
    ├── .n_jobs                   - Number of jobs to run in parallel
    ├── .pos_label                - For binary classification, the positive class
    ├── .split_indices            - No description available
    └── .splitter                 - Determines the cross-validation splitting strategy

Legend:
(↗︎) higher is better  (↘︎) lower is better
For example, we can examine the training data, which excludes the held-out data:
simple_cv_report.data.analyze()
| | checking_status | duration | credit_history | purpose | credit_amount | savings_status | employment | installment_commitment | personal_status | other_parties | residence_since | property_magnitude | age | other_payment_plans | housing | existing_credits | job | num_dependents | own_telephone | foreign_worker | class |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 82 | no checking | 18 | existing paid | business | 1,568 | 100<=X<500 | 1<=X<4 | 3 | female div/dep/mar | none | 4 | life insurance | 24 | none | rent | 1 | unskilled resident | 1 | none | yes | good |
| 991 | no checking | 15 | all paid | radio/tv | 1,569 | 100<=X<500 | >=7 | 4 | male single | none | 4 | car | 34 | bank | own | 1 | unskilled resident | 2 | none | yes | good |
| 789 | <0 | 40 | critical/other existing credit | education | 5,998 | <100 | 1<=X<4 | 4 | male single | none | 3 | no known property | 27 | bank | own | 1 | skilled | 1 | yes | yes | bad |
| 894 | no checking | 18 | critical/other existing credit | radio/tv | 1,169 | no known savings | 1<=X<4 | 4 | male single | none | 3 | life insurance | 29 | none | own | 2 | skilled | 1 | yes | yes | good |
| 398 | 0<=X<200 | 12 | existing paid | new car | 1,223 | <100 | >=7 | 1 | male div/sep | none | 1 | real estate | 46 | none | rent | 2 | skilled | 1 | none | yes | bad |
| 106 | no checking | 18 | all paid | new car | 6,458 | <100 | >=7 | 2 | male single | none | 4 | no known property | 39 | bank | own | 2 | high qualif/self emp/mgmt | 2 | yes | yes | bad |
| 270 | no checking | 18 | existing paid | new car | 2,662 | no known savings | 4<=X<7 | 4 | male single | none | 3 | life insurance | 32 | none | own | 1 | skilled | 1 | none | no | good |
| 860 | no checking | 24 | critical/other existing credit | used car | 5,804 | >=1000 | 1<=X<4 | 4 | male single | none | 2 | real estate | 27 | none | own | 2 | skilled | 1 | none | yes | good |
| 435 | 0<=X<200 | 12 | existing paid | radio/tv | 1,484 | no known savings | 1<=X<4 | 2 | male mar/wid | none | 1 | real estate | 25 | none | own | 1 | skilled | 1 | yes | yes | bad |
| 102 | no checking | 6 | delayed previously | radio/tv | 932 | <100 | 1<=X<4 | 3 | female div/dep/mar | none | 2 | real estate | 24 | none | own | 1 | skilled | 1 | none | yes | good |
As before, the interactive report provides detailed per-column cards (dtype, null and unique value counts, most frequent categories, distribution statistics); the condensed overview for the experiment set is:
| Column | Column name | dtype | Is sorted | Null values | Unique values | Mean | Std | Min | Median | Max |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | checking_status | CategoricalDtype | False | 0 (0.0%) | 4 (0.5%) | | | | | |
| 1 | duration | Int64DType | False | 0 (0.0%) | 31 (4.1%) | 21.2 | 11.8 | 4 | 18 | 60 |
| 2 | credit_history | CategoricalDtype | False | 0 (0.0%) | 5 (0.7%) | | | | | |
| 3 | purpose | CategoricalDtype | False | 0 (0.0%) | 10 (1.3%) | | | | | |
| 4 | credit_amount | Int64DType | False | 0 (0.0%) | 706 (94.1%) | 3.39e+03 | 2.94e+03 | 250 | 2,331 | 18,424 |
| 5 | savings_status | CategoricalDtype | False | 0 (0.0%) | 5 (0.7%) | | | | | |
| 6 | employment | CategoricalDtype | False | 0 (0.0%) | 5 (0.7%) | | | | | |
| 7 | installment_commitment | Int64DType | False | 0 (0.0%) | 4 (0.5%) | 2.98 | 1.12 | 1 | 3 | 4 |
| 8 | personal_status | CategoricalDtype | False | 0 (0.0%) | 4 (0.5%) | | | | | |
| 9 | other_parties | CategoricalDtype | False | 0 (0.0%) | 3 (0.4%) | | | | | |
| 10 | residence_since | Int64DType | False | 0 (0.0%) | 4 (0.5%) | 2.85 | 1.10 | 1 | 3 | 4 |
| 11 | property_magnitude | CategoricalDtype | False | 0 (0.0%) | 4 (0.5%) | | | | | |
| 12 | age | Int64DType | False | 0 (0.0%) | 53 (7.1%) | 35.4 | 11.2 | 19 | 33 | 75 |
| 13 | other_payment_plans | CategoricalDtype | False | 0 (0.0%) | 3 (0.4%) | | | | | |
| 14 | housing | CategoricalDtype | False | 0 (0.0%) | 3 (0.4%) | | | | | |
| 15 | existing_credits | Int64DType | False | 0 (0.0%) | 4 (0.5%) | 1.42 | 0.578 | 1 | 1 | 4 |
| 16 | job | CategoricalDtype | False | 0 (0.0%) | 4 (0.5%) | | | | | |
| 17 | num_dependents | Int64DType | False | 0 (0.0%) | 2 (0.3%) | 1.14 | 0.349 | 1 | 1 | 2 |
| 18 | own_telephone | CategoricalDtype | False | 0 (0.0%) | 2 (0.3%) | | | | | |
| 19 | foreign_worker | CategoricalDtype | False | 0 (0.0%) | 2 (0.3%) | | | | | |
| 20 | class | CategoricalDtype | False | 0 (0.0%) | 2 (0.3%) | | | | | |
The strongest pairwise associations between columns in the experiment set:
| Column 1 | Column 2 | Cramér's V | Pearson's Correlation |
|---|---|---|---|
| property_magnitude | housing | 0.558 | |
| job | own_telephone | 0.438 | |
| credit_history | existing_credits | 0.367 | |
| checking_status | class | 0.348 | |
| age | num_dependents | 0.322 | 0.132 |
| employment | job | 0.315 | |
| credit_amount | own_telephone | 0.309 | |
| personal_status | num_dependents | 0.304 | |
| duration | credit_amount | 0.291 | 0.626 |
| age | housing | 0.270 | |
| duration | own_telephone | 0.267 | |
| credit_history | class | 0.265 | |
| employment | residence_since | 0.257 | |
| purpose | own_telephone | 0.253 | |
| duration | foreign_worker | 0.253 | |
| credit_amount | job | 0.242 | |
| employment | age | 0.241 | |
| purpose | housing | 0.231 | |
| duration | class | 0.228 | |
| credit_amount | foreign_worker | 0.227 |
But we can also quickly get an overview of the performance of our model,
using summarize():
simple_metrics = simple_cv_report.metrics.summarize(favorability=True)
simple_metrics.frame()
| Metric | LogisticRegression (mean) | LogisticRegression (std) | Favorability |
|---|---|---|---|
| Accuracy | 0.729333 | 0.050903 | (↗︎) |
| Precision | 0.785632 | 0.034982 | (↗︎) |
| Recall | 0.840934 | 0.050696 | (↗︎) |
| ROC AUC | 0.750335 | 0.056447 | (↗︎) |
| Brier score | 0.184294 | 0.026786 | (↘︎) |
| Fit time (s) | 0.112525 | 0.009894 | (↘︎) |
| Predict time (s) | 0.054957 | 0.001311 | (↘︎) |
Note
favorability=True adds a column showing whether higher or lower metric values
are better.
In addition to the summary of metrics, skore provides more advanced statistical information such as the precision-recall curve:
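The display shown below is returned by the metrics accessor; a minimal sketch of the call (the variable name pr_curve is our own):
# Compute the precision-recall curve display across splits and list its API.
pr_curve = simple_cv_report.metrics.precision_recall()
pr_curve.help()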
PrecisionRecallCurveDisplay
display
├── Attributes
└── Methods
    ├── .frame(...)     - Get the data used to create the precision-recall curve plot.
    ├── .plot(...)      - Plot visualization.
    └── .set_style(...) - Set the style parameters for the display.
Note
The output of precision_recall() is a
Display object. This is a common pattern in skore which allows us
to access the information in several ways.
We can visualize the critical information as a plot, with only a few lines of code:
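Assuming the pr_curve variable introduced in the sketch above:
# Plot the precision-recall curves, one per cross-validation split.
pr_curve.plot()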

Or we can access the raw information as a dataframe if additional analysis is needed:
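Again with the same pr_curve display:
# Retrieve the underlying precision/recall values as a pandas DataFrame.
pr_curve.frame()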
| | split | threshold | precision | recall |
|---|---|---|---|---|
| 0 | 0 | 0.110277 | 0.700000 | 1.000000 |
| 1 | 0 | 0.215387 | 0.744681 | 1.000000 |
| 2 | 0 | 0.238563 | 0.742857 | 0.990476 |
| 3 | 0 | 0.248819 | 0.741007 | 0.980952 |
| 4 | 0 | 0.273961 | 0.739130 | 0.971429 |
| ... | ... | ... | ... | ... |
| 660 | 4 | 0.982595 | 1.000000 | 0.048077 |
| 661 | 4 | 0.988545 | 1.000000 | 0.038462 |
| 662 | 4 | 0.989817 | 1.000000 | 0.028846 |
| 663 | 4 | 0.994946 | 1.000000 | 0.019231 |
| 664 | 4 | 0.995636 | 1.000000 | 0.009615 |
665 rows × 4 columns
As another example, we can plot the confusion matrix with the same consistent API:
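A sketch of the call producing the plot below:
# Plot the confusion matrix from the cross-validation report's metrics accessor.
simple_cv_report.metrics.confusion_matrix().plot()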

Skore also provides utilities to inspect models. Since our model is a linear model, we can study the importance that it gives to each feature:
coefficients = simple_cv_report.inspection.coefficients()
coefficients.frame()
| | split | feature | coefficients |
|---|---|---|---|
| 0 | 0 | Intercept | 1.232482 |
| 1 | 0 | checking_status_0<=X<200 | -0.322232 |
| 2 | 0 | checking_status_<0 | -0.572662 |
| 3 | 0 | checking_status_>=200 | 0.196627 |
| 4 | 0 | checking_status_no checking | 0.791377 |
| ... | ... | ... | ... |
| 295 | 4 | job_unemp/unskilled non res | 0.272749 |
| 296 | 4 | job_unskilled resident | 0.118356 |
| 297 | 4 | num_dependents | -0.112250 |
| 298 | 4 | own_telephone_yes | 0.319237 |
| 299 | 4 | foreign_worker_yes | -0.660103 |
300 rows × 3 columns
coefficients.plot(select_k=15)
Model no. 2: Random forest#
Now, we cross-validate a more advanced model using RandomForestClassifier.
Again, we rely on tabular_pipeline() to perform the appropriate
preprocessing to use with this model.
from sklearn.ensemble import RandomForestClassifier
advanced_model = tabular_pipeline(RandomForestClassifier(random_state=0))
advanced_model
Pipeline(steps=[('tablevectorizer',
                 TableVectorizer(low_cardinality=OrdinalEncoder(handle_unknown='use_encoded_value',
                                                                unknown_value=-1))),
                ('randomforestclassifier',
                 RandomForestClassifier(random_state=0))])
advanced_cv_report = CrossValidationReport(
advanced_model, X=X_experiment, y=y_experiment, pos_label="good"
)
We will now compare this new model with the previous one.
Comparing our models#
Now that we have our two models, we need to decide which one should go into production.
We can compare them with a skore.ComparisonReport.
from skore import ComparisonReport
comparison = ComparisonReport(
{
"Simple Linear Model": simple_cv_report,
"Advanced Pipeline": advanced_cv_report,
},
)
This report follows the same API as CrossValidationReport:
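The output below presumably comes from the comparison report's help() method:
# List the tools available on the comparison report.
comparison.help()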
Tools to compare estimators
ComparisonReport
├── .metrics
│   ├── .accuracy(...)            (↗︎) - Compute the accuracy score.
│   ├── .brier_score(...)         (↘︎) - Compute the Brier score.
│   ├── .confusion_matrix(...)    - Plot the confusion matrix.
│   ├── .log_loss(...)            (↘︎) - Compute the log loss.
│   ├── .precision(...)           (↗︎) - Compute the precision score.
│   ├── .precision_recall(...)    - Plot the precision-recall curve.
│   ├── .recall(...)              (↗︎) - Compute the recall score.
│   ├── .roc(...)                 - Plot the ROC curve.
│   ├── .roc_auc(...)             (↗︎) - Compute the ROC AUC score.
│   ├── .timings(...)             - Get all measured processing times related to the different estimators.
│   ├── .custom_metric(...)       - Compute a custom metric.
│   └── .summarize(...)           - Report a set of metrics for the estimators.
├── .inspection
├── .cache_predictions(...)       - Cache the predictions for sub-estimators reports.
├── .clear_cache(...)             - Clear the cache.
├── .create_estimator_report(...) - Create an estimator report from one of the reports in the comparison.
├── .get_predictions(...)         - Get predictions from the underlying reports.
└── Attributes
    ├── .n_jobs                   - Number of jobs to run in parallel
    ├── .pos_label                - No description available
    └── .reports_                 - The compared reports

Legend:
(↗︎) higher is better  (↘︎) lower is better
We have access to the same tools to perform statistical analysis and compare both models:
comparison_metrics = comparison.metrics.summarize(favorability=True)
comparison_metrics.frame()
| Metric | mean (Simple Linear Model) | mean (Advanced Pipeline) | std (Simple Linear Model) | std (Advanced Pipeline) | Favorability |
|---|---|---|---|---|---|
| Accuracy | 0.729333 | 0.745333 | 0.050903 | 0.032796 | (↗︎) |
| Precision | 0.785632 | 0.779443 | 0.034982 | 0.018644 | (↗︎) |
| Recall | 0.840934 | 0.885037 | 0.050696 | 0.053558 | (↗︎) |
| ROC AUC | 0.750335 | 0.773334 | 0.056447 | 0.034190 | (↗︎) |
| Brier score | 0.184294 | 0.169911 | 0.026786 | 0.010967 | (↘︎) |
| Fit time (s) | 0.112525 | 0.206632 | 0.009894 | 0.000750 | (↘︎) |
| Predict time (s) | 0.054957 | 0.050649 | 0.001311 | 0.000303 | (↘︎) |
comparison.metrics.precision_recall().plot()

Based on the previous tables and plots, the RandomForestClassifier model seems to
perform slightly better. For the purposes of this guide, however, we make the arbitrary
choice to deploy the linear model, so that we can compare it with the coefficient study
shown earlier.
Final model evaluation on held-out data#
Now that we have chosen to deploy the linear model, we will train it on
the full experiment set and evaluate it on our held-out data: training on more data
should help performance and we can also validate that our model generalizes well to
new data. This can be done in one step with create_estimator_report().
final_report = comparison.create_estimator_report(
name="Simple Linear Model", X_test=X_holdout, y_test=y_holdout
)
This returns an EstimatorReport, which has a similar API to the other report classes:
final_metrics = final_report.metrics.summarize()
final_metrics.frame()
| Metric | LogisticRegression |
|---|---|
| Accuracy | 0.764000 |
| Precision | 0.808290 |
| Recall | 0.876404 |
| ROC AUC | 0.809613 |
| Brier score | 0.153900 |
| Fit time (s) | 0.094362 |
| Predict time (s) | 0.055522 |
final_report.metrics.confusion_matrix().plot()

We can easily combine the results of the previous cross-validation with the evaluation on the held-out dataset, since both are accessible as dataframes. This way, we can check whether our chosen model meets the expectations we set during the experiment phase.
pd.concat(
[final_metrics.frame(), simple_cv_report.metrics.summarize().frame()],
axis="columns",
)
| Metric | LogisticRegression | (LogisticRegression, mean) | (LogisticRegression, std) |
|---|---|---|---|
| Accuracy | 0.764000 | 0.729333 | 0.050903 |
| Precision | 0.808290 | 0.785632 | 0.034982 |
| Recall | 0.876404 | 0.840934 | 0.050696 |
| ROC AUC | 0.809613 | 0.750335 | 0.056447 |
| Brier score | 0.153900 | 0.184294 | 0.026786 |
| Fit time (s) | 0.094362 | 0.112525 | 0.009894 |
| Predict time (s) | 0.055522 | 0.054957 | 0.001311 |
As expected, our final model gets better performance, likely thanks to the larger training set.
Our final sanity check is to compare the features considered most impactful between our final model and the cross-validation:
final_coefficients = final_report.inspection.coefficients()
final_top_15_features = final_coefficients.frame(select_k=15)["feature"]
simple_coefficients = simple_cv_report.inspection.coefficients()
cv_top_15_features = (
simple_coefficients.frame(select_k=15)
.groupby("feature", sort=False)
.mean()
.drop(columns="split")
.reset_index()["feature"]
)
pd.concat(
[final_top_15_features, cv_top_15_features], axis="columns", ignore_index=True
)
| | 0 | 1 |
|---|---|---|
| 0 | Intercept | Intercept |
| 1 | checking_status_0<=X<200 | checking_status_<0 |
| 2 | checking_status_<0 | checking_status_no checking |
| 4 | checking_status_no checking | credit_history_critical/other existing credit |
| 6 | credit_history_all paid | purpose_education |
| 7 | credit_history_critical/other existing credit | purpose_new car |
| 10 | credit_history_no credits/all paid | credit_amount |
| 13 | purpose_education | age |
| 15 | purpose_new car | NaN |
| 19 | purpose_retraining | NaN |
| 20 | purpose_used car | NaN |
| 21 | credit_amount | NaN |
| 32 | installment_commitment | NaN |
| 45 | age | NaN |
| 59 | foreign_worker_yes | NaN |
| 3 | NaN | credit_history_all paid |
| 5 | NaN | credit_history_no credits/all paid |
| 8 | NaN | purpose_retraining |
| 9 | NaN | purpose_used car |
| 11 | NaN | savings_status_>=1000 |
| 12 | NaN | installment_commitment |
| 14 | NaN | foreign_worker_yes |
They seem very similar, so we are done!
Tracking our work with a skore Project#
Now that we have completed our modeling workflow, we should store our models in a safe place for future work. Indeed, if this research notebook were modified, we would no longer be able to relate the current production model to the code that generated it.
We can use a skore.Project to keep track of our experiments.
This makes it easy to organize, retrieve, and compare models over time.
Usually this would be done as you go along the model development, but in the interest of simplicity we kept this until the end.
We load or create a local project:
project = skore.Project("german_credit_classification")
We store our reports with descriptive keys:
project.put("simple_linear_model_cv", simple_cv_report)
project.put("advanced_pipeline_cv", advanced_cv_report)
project.put("final_model", final_report)
Now we can retrieve a summary of our stored reports:
summary = project.summarize()
# Uncomment the next line to display the widget in an interactive environment:
# summary
Note
Calling summary in a Jupyter notebook cell will show the following parallel
coordinate plot to help you select models that you want to retrieve:
Each line represents a model, and we can select models by clicking on lines or dragging on metric axes to filter by performance.
In the screenshot, we selected only the cross-validation reports; this allows us to retrieve exactly those reports programmatically.
Supposing you selected "Cross-validation" in the "Report type" tab, calling
reports() now returns only the
CrossValidationReport objects, which
you can directly combine into a ComparisonReport:
new_report = summary.reports(return_as="comparison")
new_report.help()
Tools to compare estimators
ComparisonReport
├── .metrics
│   ├── .accuracy(...)            (↗︎) - Compute the accuracy score.
│   ├── .brier_score(...)         (↘︎) - Compute the Brier score.
│   ├── .confusion_matrix(...)    - Plot the confusion matrix.
│   ├── .log_loss(...)            (↘︎) - Compute the log loss.
│   ├── .precision(...)           (↗︎) - Compute the precision score.
│   ├── .precision_recall(...)    - Plot the precision-recall curve.
│   ├── .recall(...)              (↗︎) - Compute the recall score.
│   ├── .roc(...)                 - Plot the ROC curve.
│   ├── .roc_auc(...)             (↗︎) - Compute the ROC AUC score.
│   ├── .timings(...)             - Get all measured processing times related to the different estimators.
│   ├── .custom_metric(...)       - Compute a custom metric.
│   └── .summarize(...)           - Report a set of metrics for the estimators.
├── .inspection
├── .cache_predictions(...)       - Cache the predictions for sub-estimators reports.
├── .clear_cache(...)             - Clear the cache.
├── .create_estimator_report(...) - Create an estimator report from one of the reports in the comparison.
├── .get_predictions(...)         - Get predictions from the underlying reports.
└── Attributes
    ├── .n_jobs                   - Number of jobs to run in parallel
    ├── .pos_label                - No description available
    └── .reports_                 - The compared reports

Legend:
(↗︎) higher is better  (↘︎) lower is better
Stay tuned!
This is only the beginning for skore. We welcome your feedback and ideas to make it the best tool for end-to-end data science.
Key benefits of using skore in your ML workflow:
Standardized evaluation and comparison of models
Rich visualizations and diagnostics
Organized experiment tracking
Seamless integration with scikit-learn
Feel free to join our community on Discord or create an issue.
Total running time of the script: (0 minutes 13.258 seconds)