Comparing multiple reports#

The ComparisonReport class provides a report for comparing EstimatorReport or CrossValidationReport instances interactively. Its functionalities are exposed through accessors.

ComparisonReport(reports, *[, n_jobs])

Report for comparing several EstimatorReport or CrossValidationReport instances.
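
For orientation, here is a minimal construction sketch. The synthetic dataset, the two estimators, and EstimatorReport's keyword arguments are illustrative assumptions drawn from skore's EstimatorReport documentation, not from the listing above:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    from skore import ComparisonReport, EstimatorReport

    X, y = make_classification(n_samples=500, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One EstimatorReport per candidate model; ComparisonReport lines them up.
    reports = [
        EstimatorReport(
            LogisticRegression(max_iter=1000),
            X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test,
        ),
        EstimatorReport(
            RandomForestClassifier(random_state=0),
            X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test,
        ),
    ]
    comparison = ComparisonReport(reports)
    comparison.help()  # interactive overview of methods and accessors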

Methods

ComparisonReport.help()

Display report help using rich or HTML.

ComparisonReport.diagnose(*[, ignore])

Run checks and return a diagnostic with detected issues.

ComparisonReport.add_checks(checks)

Register additional diagnostic checks for this report.

ComparisonReport.cache_predictions()

Cache the predictions of the underlying estimator reports.

ComparisonReport.clear_cache()

Clear the cache.

ComparisonReport.create_estimator_report(*, ...)

Create an estimator report from one of the reports in the comparison.

ComparisonReport.get_predictions(*, data_source)

Get predictions from the underlying reports.
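
A short sketch of the prediction-caching workflow, reusing the comparison report built above. The data_source="test" value and the per-report shape of the return value are assumptions; the listing above only shows that data_source is keyword-only:

    comparison.cache_predictions()  # compute and store predictions once
    # Assumed: returns one set of predictions per compared report.
    predictions = comparison.get_predictions(data_source="test")
    comparison.clear_cache()        # drop cached results to free memory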

Accessors

ComparisonReport.inspection

Accessor for inspection-related operations.

ComparisonReport.metrics

Accessor for metrics-related operations.

Metrics#

The metrics accessor helps you evaluate the statistical performance of the compared estimators. Plotting methods such as roc(), precision_recall(), confusion_matrix(), and prediction_error() produce the common performance-metric visualizations; a usage sketch follows the list below.

ComparisonReport.metrics.help()

Display accessor help using rich or HTML.

ComparisonReport.metrics.summarize(*[, ...])

Report a set of metrics for the estimators.

ComparisonReport.metrics.accuracy(*[, ...])

Compute the accuracy score.

ComparisonReport.metrics.add(metric, *[, ...])

Add a custom metric to be included in summarize() by default.

ComparisonReport.metrics.available([report_name])

List available metric names in the registry.

ComparisonReport.metrics.brier_score(*[, ...])

Compute the Brier score.

ComparisonReport.metrics.confusion_matrix(*)

Plot the confusion matrix.

ComparisonReport.metrics.log_loss(*[, ...])

Compute the log loss.

ComparisonReport.metrics.mae(*[, ...])

Compute the mean absolute error.

ComparisonReport.metrics.mape(*[, ...])

Compute the mean absolute percentage error.

ComparisonReport.metrics.precision(*[, ...])

Compute the precision score.

ComparisonReport.metrics.precision_recall(*)

Plot the precision-recall curve.

ComparisonReport.metrics.prediction_error(*)

Plot the prediction error of a regression model.

ComparisonReport.metrics.r2(*[, ...])

Compute the R² score.

ComparisonReport.metrics.recall(*[, ...])

Compute the recall score.

ComparisonReport.metrics.remove(name)

Remove a metric from each underlying estimator report.

ComparisonReport.metrics.rmse(*[, ...])

Compute the root mean squared error.

ComparisonReport.metrics.roc(*[, data_source])

Plot the ROC curve.

ComparisonReport.metrics.roc_auc(*[, ...])

Compute the ROC AUC score.

ComparisonReport.metrics.timings([aggregate])

Get all measured processing times for the compared estimators.
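
A usage sketch for the metrics accessor, continuing the example above. The data_source="test" keyword and the plot() method on the returned display objects are assumptions modeled on skore's estimator-report displays:

    # Tabular summary of the default metrics for every compared estimator.
    summary = comparison.metrics.summarize()

    # Individual scores are available directly; data_source is assumed here.
    acc = comparison.metrics.accuracy(data_source="test")

    # Plot methods return display objects; plot() is assumed to render them.
    display = comparison.metrics.roc(data_source="test")
    display.plot()

    # Processing times measured for each compared estimator.
    timings = comparison.metrics.timings()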

Inspection#

The inspection accessor helps you inspect the compared models, e.g. by evaluating feature importance. A usage sketch follows the list below.

ComparisonReport.inspection.help()

Display accessor help using rich or HTML.

ComparisonReport.inspection.coefficients()

Retrieve the coefficients for each report, including the intercepts.

ComparisonReport.inspection.impurity_decrease()

Retrieve the Mean Decrease in Impurity (MDI) for each report.

ComparisonReport.inspection.permutation_importance(*)

Display the permutation feature importance.
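
Finally, a sketch of the inspection accessor. It assumes the compared estimators support the requested introspection (coefficients() requires linear models, impurity_decrease() tree-based ones), so a mixed comparison like the logistic-regression/forest sketch above may not support every method. The data_source keyword passed to permutation_importance() is an assumption, as the listing truncates its signature:

    # Coefficients and intercepts side by side.
    # Assumes every compared estimator is linear.
    coefs = comparison.inspection.coefficients()

    # Model-agnostic importance; the keyword below is assumed, not documented above.
    perm = comparison.inspection.permutation_importance(data_source="test")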