CrossValidationReport.metrics.plot.prediction_error
- CrossValidationReport.metrics.plot.prediction_error(*, data_source='test', ax=None, kind='residual_vs_predicted', subsample=1000, random_state=None)
Plot the prediction error of a regression model.
Extra keyword arguments will be passed to matplotlib’s plot.
- Parameters:
- data_source : {“test”, “train”}, default=”test”
The data source to use.
“test” : use the test set provided when creating the report.
“train” : use the train set provided when creating the report.
- ax : matplotlib axes, default=None
Axes object to plot on. If None, a new figure and axes is created.
- kind : {“actual_vs_predicted”, “residual_vs_predicted”}, default=”residual_vs_predicted”
The type of plot to draw:
“actual_vs_predicted” draws the observed values (y-axis) vs. the predicted values (x-axis).
“residual_vs_predicted” draws the residuals, i.e. the difference between observed and predicted values (y-axis), vs. the predicted values (x-axis).
- subsample : float, int or None, default=1_000
Subsampling applied to the samples shown on the scatter plot. If a float, it should be between 0 and 1 and represents the proportion of the original dataset to display. If an int, it represents the number of samples displayed on the scatter plot. If None, no subsampling is applied. By default, at most 1,000 samples are displayed.
- random_state : int, default=None
The random state to use for the subsampling.
- Returns:
- PredictionErrorDisplay
The prediction error display.
Examples
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from skore import CrossValidationReport
>>> X, y = load_diabetes(return_X_y=True)
>>> regressor = Ridge()
>>> report = CrossValidationReport(regressor, X=X, y=y, cv_splitter=2)
Processing cross-validation ...
>>> display = report.metrics.plot.prediction_error(
...     kind="actual_vs_predicted"
... )
Computing predictions for display ...
>>> display.plot(line_kwargs={"color": "tab:red"})
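As a further illustration (a minimal sketch that is not part of the original reference: it assumes the report object from the example above is still in scope, and the progress output printed by skore is omitted), the data_source, subsample and random_state parameters documented above can be combined to plot reproducibly subsampled residuals on the train set:
>>> display = report.metrics.plot.prediction_error(
...     data_source="train",
...     kind="residual_vs_predicted",
...     subsample=500,
...     random_state=0,
... )
>>> display.plot()
Because “residual_vs_predicted” plots the difference between observed and predicted values against the predictions, a well-behaved model should show residuals scattered around zero with no visible trend.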