CrossValidationReport.metrics.custom_metric#
- CrossValidationReport.metrics.custom_metric(metric_function, response_method, *, metric_name=None, data_source='test', X=None, y=None, aggregate=('mean', 'std'), **kwargs)[source]#
- Compute a custom metric.
- It brings flexibility to compute any desired metric. However, some rules must be followed:
  - `metric_function` should take `y_true` and `y_pred` as the first two positional arguments.
  - `response_method` corresponds to the estimator's method to be invoked to get the predictions. It can be a string, or a list of strings defining the order in which the methods should be invoked.
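For instance, a function matching this contract might look like the following sketch (the metric name and the optional weighting behavior here are illustrative, not part of skore):

```python
import numpy as np

def max_absolute_error(y_true, y_pred, sample_weight=None):
    """Illustrative custom metric: the largest (optionally weighted) absolute residual.

    It follows the required contract: ``y_true`` and ``y_pred`` are the first
    two positional arguments; extra keyword arguments (here ``sample_weight``)
    can be forwarded through ``custom_metric(..., **kwargs)``.
    """
    errors = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
    if sample_weight is not None:
        errors = errors * np.asarray(sample_weight, dtype=float)
    return float(errors.max())

max_absolute_error([1.0, 2.0, 3.0], [1.5, 2.0, 2.0])  # returns 1.0
```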
 - Parameters:
- `metric_function` : callable
- The metric function to be computed. The expected signature is `metric_function(y_true, y_pred, **kwargs)`.
- `response_method` : str or list of str
- The estimator's method to be invoked to get the predictions. The possible values are: `predict`, `predict_proba`, `predict_log_proba`, and `decision_function`.
- `metric_name` : str, default=None
- The name of the metric. If not provided, it will be inferred from the metric function.
- `data_source` : {"test", "train", "X_y"}, default="test"
- The data source to use.
  - "test" : use the test set provided when creating the report.
  - "train" : use the train set provided when creating the report.
  - "X_y" : use the provided `X` and `y` to compute the metric.

- `X` : array-like of shape (n_samples, n_features), default=None
- New data on which to compute the metric. By default, we use the validation set provided when creating the report.
- `y` : array-like of shape (n_samples,), default=None
- New target on which to compute the metric. By default, we use the target provided when creating the report.
- `aggregate` : {"mean", "std"}, list of such str, or None, default=("mean", "std")
- Function(s) used to aggregate the scores across the cross-validation splits. None will return the scores for each split.
- `**kwargs` : dict
- Any additional keyword arguments to be passed to the metric function.
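As a sketch of how these keyword arguments reach the metric: passing e.g. `sample_weight=...` to `custom_metric` amounts to calling the metric function directly with that keyword (assuming the metric accepts it):

```python
from sklearn.metrics import mean_absolute_error

y_true = [3.0, -0.5, 2.0]
y_pred = [2.5, 0.0, 2.0]

# custom_metric(..., sample_weight=[1.0, 2.0, 1.0]) forwards the keyword
# to the metric function on each split, i.e. it evaluates:
mean_absolute_error(y_true, y_pred, sample_weight=[1.0, 2.0, 1.0])  # returns 0.375
```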
 
- Returns:
- `pd.DataFrame`
- The custom metric, aggregated across splits according to `aggregate`.
 
- Examples

```python
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from sklearn.metrics import mean_absolute_error
>>> from skore import CrossValidationReport
>>> X, y = load_diabetes(return_X_y=True)
>>> regressor = Ridge()
>>> report = CrossValidationReport(regressor, X=X, y=y, cv_splitter=2)
>>> report.metrics.custom_metric(
...     metric_function=mean_absolute_error,
...     response_method="predict",
...     metric_name="MAE",
... )
          Ridge
           mean     std
Metric
MAE     51.4...  1.7...
```
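The default `aggregate=("mean", "std")` reduces the per-split scores to those two statistics, while `aggregate=None` keeps one column per split. The reduction can be sketched as follows (the per-split values below are hypothetical, not produced by skore):

```python
import pandas as pd

# hypothetical per-split scores, shaped like the aggregate=None output
scores = pd.DataFrame({"Split #0": [51.0], "Split #1": [53.0]}, index=["MAE"])

# aggregate=("mean", "std") collapses the split columns into these statistics
aggregated = pd.DataFrame({"mean": scores.mean(axis=1), "std": scores.std(axis=1)})
```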