OpenML

Precision is defined as the number of true positive (TP) predictions, divided by the sum of the number of true positives and false positives (TP+FP): $$\text{Precision}=\frac{tp}{tp+fp} \, $$ It is also known as the positive predictive value.

evaluation measure
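The formula above can be sketched directly in Python (a minimal illustration; the function name and the binary positive-label convention are assumptions, not an OpenML API):

```python
def precision(y_true, y_pred, positive=1):
    """Precision = TP / (TP + FP) for the given positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    return tp / (tp + fp)

# tp = 2, fp = 1  ->  2/3
print(precision([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
```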

The Predictive Accuracy is the percentage of instances that are classified correctly. It is 1 - ErrorRate.

evaluation measure
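A minimal sketch of the definition above (the function name is an assumption for illustration):

```python
def accuracy(y_true, y_pred):
    """Fraction of instances classified correctly; equals 1 - error rate."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# 3 of 4 labels match -> 0.75
print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))
```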

Entropy, in bits, of the prior class distribution. Calculated by taking the sum of -log2(priorProb) over all instances, where priorProb is the prior probability of the actual class for that instance.

evaluation measure
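The description says "sum"; normalizing that sum by the number of instances gives the usual per-instance entropy -Σ p·log2(p), since each class contributes -log2(p) once per instance. A sketch under that per-instance-average reading (the function name is an assumption):

```python
from collections import Counter
from math import log2

def prior_entropy(y_true):
    """Entropy, in bits, of the prior (empirical) class distribution."""
    counts = Counter(y_true)
    n = len(y_true)
    # Averaging -log2(priorProb) of each instance's actual class
    # reduces to the standard -sum(p * log2(p)) over classes.
    return sum(-log2(counts[y] / n) for y in y_true) / n

# Two balanced classes -> 1.0 bit
print(prior_entropy([0, 1, 0, 1]))
```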

Recall is defined as the number of true positive (TP) predictions, divided by the sum of the number of true positives and false negatives (TP+FN): $$\text{Recall}=\frac{tp}{tp+fn} \, $$ It is also known as sensitivity or the true positive rate.

evaluation measure
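The same style of sketch for the Recall formula (function name and positive-label convention assumed):

```python
def recall(y_true, y_pred, positive=1):
    """Recall = TP / (TP + FN) for the given positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    return tp / (tp + fn)

# tp = 2, fn = 1  ->  2/3
print(recall([1, 0, 1, 1, 0], [1, 1, 1, 0, 0]))
```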

The Relative Absolute Error (RAE) is the mean absolute error (MAE) divided by the mean prior absolute error (MPAE).

evaluation measure
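A sketch of the RAE ratio, assuming (as is standard for this measure) that the prior prediction is the mean of the target values; the function name is illustrative:

```python
def relative_absolute_error(y_true, y_pred):
    """MAE of the model divided by the MAE of always predicting the mean."""
    n = len(y_true)
    prior = sum(y_true) / n                      # prior prediction: mean target
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mpae = sum(abs(t - prior) for t in y_true) / n
    return mae / mpae

# A model no better than the prior scores 1.0
print(relative_absolute_error([1, 2, 3, 4], [2, 2, 2, 2]))
```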

The Root Mean Prior Squared Error (RMPSE) is the Root Mean Squared Error (RMSE) of the prior (e.g., the default class prediction).

evaluation measure

The Root Mean Squared Error (RMSE) measures how close the model's predictions are to the actual target values. It is the square root of the Mean Squared Error (MSE), the mean of the squared differences between the predicted and the actual values.

evaluation measure
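A minimal sketch of the RMSE definition (function name assumed for illustration):

```python
from math import sqrt

def rmse(y_true, y_pred):
    """Square root of the mean squared prediction error."""
    n = len(y_true)
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Perfect predictions -> 0.0
print(rmse([1, 2, 3], [1, 2, 3]))
```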

The Root Relative Squared Error (RRSE) is the Root Mean Squared Error (RMSE) divided by the Root Mean Prior Squared Error (RMPSE). See root_mean_squared_error and root_mean_prior_squared_error.

evaluation measure
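The RRSE ratio can be sketched as follows, again assuming the prior prediction is the mean target value (names are illustrative). An RRSE of 1.0 means the model does no better than the prior:

```python
from math import sqrt

def rrse(y_true, y_pred):
    """RMSE of the model divided by the RMSE of always predicting the mean."""
    n = len(y_true)
    prior = sum(y_true) / n
    rmse_model = sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    rmpse = sqrt(sum((t - prior) ** 2 for t in y_true) / n)
    return rmse_model / rmpse

# Predicting the mean everywhere -> 1.0
print(rrse([1, 3], [2, 2]))
```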

Runtime in seconds of the entire run. In the case of cross-validation runs, this will include all iterations.

evaluation measure

Amount of virtual memory, in bytes, used during the entire run.

evaluation measure

A benchmark tool which measures (single core) CPU performance on the JVM.

evaluation measure

Number of instances that were not classified by the model.

evaluation measure

The time in milliseconds to build and test a single model on all data.

evaluation measure

The time in milliseconds to test a single model on all data.

evaluation measure

The time in milliseconds to build a single model on all data.

evaluation measure

Bias component (squared) of the bias-variance decomposition as defined by Webb in: Geoffrey I. Webb (2000), MultiBoosting: A Technique for Combining Boosting and Wagging, Machine Learning, 40(2), pages 159-196.

evaluation measure

Intrinsic error component (squared) of the bias-variance decomposition as defined by Webb in: Geoffrey I. Webb (2000), MultiBoosting: A Technique for Combining Boosting and Wagging, Machine Learning, 40(2), pages 159-196.

evaluation measure

Variance component of the bias-variance decomposition as defined by Webb in: Geoffrey I. Webb (2000), MultiBoosting: A Technique for Combining Boosting and Wagging, Machine Learning, 40(2), pages 159-196.

evaluation measure

The macro unweighted (ignoring class size) average Recall. In macro-averaging, Recall is computed locally over each category first and then the unweighted mean over all categories is taken, so every class counts equally regardless of its size.

evaluation measure

The macro weighted (by class size) average Recall. In macro-averaging, Recall is computed locally over each category first and then the average over all categories is taken, weighted by the number of instances in each class.

evaluation measure
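The two macro-averaged variants above differ only in the final averaging step, which a short sketch makes concrete (function name and `weighted` flag are assumptions for illustration):

```python
def macro_recall(y_true, y_pred, weighted=False):
    """Per-class recall averaged over classes.

    weighted=False: unweighted mean (every class counts equally).
    weighted=True:  mean weighted by class size.
    """
    classes = sorted(set(y_true))
    recalls, sizes = {}, {}
    for c in classes:
        members = [i for i, t in enumerate(y_true) if t == c]
        sizes[c] = len(members)
        recalls[c] = sum(1 for i in members if y_pred[i] == c) / len(members)
    if weighted:
        n = len(y_true)
        return sum(recalls[c] * sizes[c] / n for c in classes)
    return sum(recalls.values()) / len(classes)

y_true = [0, 0, 0, 1]
y_pred = [0, 0, 1, 1]
# class 0 recall = 2/3, class 1 recall = 1.0
print(macro_recall(y_true, y_pred))                # unweighted: 5/6
print(macro_recall(y_true, y_pred, weighted=True)) # weighted:   0.75
```

With the imbalanced example above, the unweighted mean rewards the small class's perfect recall, while the weighted mean is dominated by the large class.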