CrossValidationReport.metrics.r2
- CrossValidationReport.metrics.r2(*, data_source='test', X=None, y=None, multioutput='raw_values', aggregate=('mean', 'std'))
Compute the R² score.
- Parameters:
- data_source : {"test", "train", "X_y"}, default="test"
The data source to use.
"test" : use the test set provided when creating the report.
"train" : use the train set provided when creating the report.
"X_y" : use the provided X and y to compute the metric.
- X : array-like of shape (n_samples, n_features), default=None
New data on which to compute the metric. By default, we use the validation set provided when creating the report.
- y : array-like of shape (n_samples,), default=None
New target on which to compute the metric. By default, we use the target provided when creating the report.
- multioutput : {"raw_values", "uniform_average"} or array-like of shape (n_outputs,), default="raw_values"
Defines how to aggregate multiple output values. An array-like value defines the weights used to average errors. The other possible values are:
"raw_values" : returns a full set of scores in case of multioutput input.
"uniform_average" : scores of all outputs are averaged with uniform weight.
By default, no averaging is done.
- aggregate : {"mean", "std"}, list of such str, or None, default=("mean", "std")
Function(s) used to aggregate the scores across the cross-validation splits. None will return the score for each split.
- Returns:
- pd.DataFrame
The R² score.
Examples
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from skore import CrossValidationReport
>>> X, y = load_diabetes(return_X_y=True)
>>> regressor = Ridge()
>>> report = CrossValidationReport(regressor, X=X, y=y, cv_splitter=2)
>>> report.metrics.r2()
            Ridge
             mean     std
Metric
R²        0.37...  0.02...
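For context on the multioutput parameter, a minimal standalone sketch using scikit-learn's r2_score, which computes the same metric (the toy y_true/y_pred arrays below are illustrative, not taken from the report above):

```python
# Sketch of the multioutput options for the R² metric, assuming they behave
# as in sklearn.metrics.r2_score.
import numpy as np
from sklearn.metrics import r2_score

# Two-output regression targets and predictions (illustrative values).
y_true = np.array([[0.5, 1.0], [-1.0, 1.0], [7.0, -6.0]])
y_pred = np.array([[0.0, 2.0], [-1.0, 2.0], [8.0, -5.0]])

# "raw_values": one R² score per output column (this method's default).
raw = r2_score(y_true, y_pred, multioutput="raw_values")

# "uniform_average": the per-output scores averaged with equal weight.
avg = r2_score(y_true, y_pred, multioutput="uniform_average")
```

With "raw_values", raw has shape (n_outputs,); "uniform_average" collapses those scores into a single float equal to their unweighted mean.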