List of preformatted metrics

Evaluation metrics

If you use the classification.compute_metrics or regression.compute_metrics function, a set of preformatted metrics is available by name.

You can see the details of the code on the documentation page: transparentai.models

Here is the list (a short usage sketch follows the table):

Problem type     Metric name
--------------   -------------------------
classification   'accuracy'
classification   'balanced_accuracy'
classification   'average_precision'
classification   'brier_score'
classification   'f1'
classification   'f1_micro'
classification   'f1_macro'
classification   'f1_weighted'
classification   'f1_samples'
classification   'log_loss'
classification   'precision'
classification   'precision_micro'
classification   'recall'
classification   'recall_micro'
classification   'true_positive_rate'
classification   'false_positive_rate'
classification   'jaccard'
classification   'matthews_corrcoef'
classification   'roc_auc'
classification   'roc_auc_ovr'
classification   'roc_auc_ovo'
classification   'roc_auc_ovr_weighted'
classification   'roc_auc_ovo_weighted'
classification   'true_positives'
classification   'false_positives'
classification   'false_negatives'
classification   'true_negatives'
classification   'confusion_matrix'
regression       'max_error'
regression       'mean_absolute_error'
regression       'mean_squared_error'
regression       'root_mean_squared_error'
regression       'mean_squared_log_error'
regression       'median_absolute_error'
regression       'r2'
regression       'mean_poisson_deviance'
regression       'mean_gamma_deviance'
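As a quick illustration, here is a minimal sketch of passing these names to the functions above. The argument order (true values, predictions, then a list of metric names) and the example data are assumptions for illustration; check the transparentai.models documentation for the exact signature:

    # Minimal sketch -- the compute_metrics argument order (y_true, y_pred,
    # metrics) is assumed; confirm it in the transparentai.models docs.
    from transparentai.models import classification, regression

    # Made-up classification data.
    y_true = [0, 1, 1, 0, 1, 0]
    y_pred = [0, 1, 0, 0, 1, 1]

    # Pass any of the preformatted classification metric names from the table.
    clf_scores = classification.compute_metrics(
        y_true, y_pred, ['accuracy', 'f1', 'confusion_matrix'])

    # Made-up regression data.
    y_true_reg = [3.1, 0.5, 2.0, 7.4]
    y_pred_reg = [2.8, 0.4, 2.2, 7.9]

    # Same idea with the preformatted regression metric names.
    reg_scores = regression.compute_metrics(
        y_true_reg, y_pred_reg, ['mean_squared_error', 'r2'])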