Supervised Classification on ozone_level
Evaluation metrics computed by the server from the uploaded predictions (a sketch of reproducing a few of these locally follows the list):
area under roc curve
average cost
binomial test
build cpu time
build memory
c index
chi-squared
class complexity
class complexity gain
confusion matrix
correlation coefficient
cortana quality
coverage
f measure
information gain
jaccard
kappa
kb relative information score
kohavi wolpert bias squared
kohavi wolpert error
kohavi wolpert sigma squared
kohavi wolpert variance
kononenko bratko information score
matthews correlation coefficient
mean absolute error
mean class complexity
mean class complexity gain
mean f measure
mean kononenko bratko information score
mean precision
mean prior absolute error
mean prior class complexity
mean recall
mean weighted area under roc curve
mean weighted f measure
mean weighted precision
weighted recall
number of instances
os information
positives
precision
predictive accuracy
prior class complexity
prior entropy
probability
quality
ram hours
recall
relative absolute error
root mean prior squared error
root mean squared error
root relative squared error
run cpu time
run memory
run virtual memory
scimark benchmark
single point area under roc curve
total cost
unclassified instance count
usercpu time millis
usercpu time millis testing
usercpu time millis training
webb bias
webb error
webb variance
joint entropy
pattern team auroc10
wall clock time millis
wall clock time millis training
wall clock time millis testing
unweighted recall
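Many of the measures above have direct counterparts in common libraries. Below is a minimal sketch, assuming scikit-learn and purely illustrative arrays, of how a few of them (predictive accuracy, area under the ROC curve, kappa, F-measure, Matthews correlation coefficient) can be reproduced locally; the authoritative values remain the ones OpenML computes on the server.

```python
# Sketch: reproducing a few of the listed measures locally with scikit-learn.
# The arrays below are illustrative placeholders, not OpenML data.
import numpy as np
from sklearn.metrics import (
    accuracy_score,      # predictive accuracy
    roc_auc_score,       # area under roc curve
    cohen_kappa_score,   # kappa
    f1_score,            # f measure
    matthews_corrcoef,   # matthews correlation coefficient
)

y_true = np.array([0, 1, 1, 0, 1, 0, 0, 1])                   # true labels
y_score = np.array([0.1, 0.8, 0.7, 0.4, 0.9, 0.2, 0.6, 0.3])  # P(class = 1)
y_pred = (y_score >= 0.5).astype(int)                         # hard predictions

print("predictive accuracy:", accuracy_score(y_true, y_pred))
print("area under roc curve:", roc_auc_score(y_true, y_score))
print("kappa:", cohen_kappa_score(y_true, y_pred))
print("f measure:", f1_score(y_true, y_pred))
print("matthews correlation coefficient:", matthews_corrcoef(y_true, y_pred))
```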
Timeline
[Contribution timeline plot]
Leaderboard
Columns: Rank, Name, Top Score, Entries, Highest rank. No runs have been submitted yet.
Note: The leaderboard ignores resubmissions of previous solutions, as well as parameter variations that do not improve performance.
Challenge
In supervised classification, you are given an input dataset in which instances are labeled with a certain class. The goal is to build a model that predicts the class for future unlabeled instances. The model is evaluated using a train-test procedure, e.g. cross-validation.
To make results from different users comparable, you are given the exact train-test folds to be used, and you need to return at least the predictions generated by your model for each of the test instances. OpenML will use these predictions to calculate a range of evaluation measures on the server.
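To make the fold requirement concrete, here is a minimal sketch. It assumes a recent openml-python (which returns pandas dataframes) together with scikit-learn, uses a placeholder task ID, and picks a random forest purely for illustration; it retrieves the prescribed splits and generates test-set predictions for every fold.

```python
# Sketch: fetch the task, reuse its prescribed train-test folds, and predict.
# TASK_ID is a placeholder; look up the real ID of this task on OpenML.
import openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

TASK_ID = 0  # placeholder
task = openml.tasks.get_task(TASK_ID)

# Load the underlying dataset with the task's target attribute.
X, y, _, _ = task.get_dataset().get_data(target=task.target_name)

# Iterate over the exact splits defined by the task's estimation procedure.
n_repeats, n_folds, _ = task.get_split_dimensions()
for repeat in range(n_repeats):
    for fold in range(n_folds):
        train_idx, test_idx = task.get_train_test_split_indices(
            repeat=repeat, fold=fold
        )
        # Impute possible missing values, then fit an illustrative classifier.
        clf = make_pipeline(SimpleImputer(), RandomForestClassifier(random_state=0))
        clf.fit(X.iloc[train_idx], y.iloc[train_idx])
        fold_predictions = clf.predict(X.iloc[test_idx])
        # fold_predictions holds the per-instance predictions OpenML expects
        # for this (repeat, fold) combination.
```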
You can also upload your own evaluation measures, provided that the code for computing them is available in the implementation you used. For extremely large datasets, it may be infeasible to upload all predictions. In those cases, you need to compute and provide the evaluations yourself.
Optionally, you can upload the model trained on all the input data. There is no restriction on the file format, but please use a well-known format or PMML.
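As an illustration of saving the optional model in a well-known format, here is a minimal sketch that fits a classifier on all input data and serializes it with joblib; the chosen packages, the task ID, and the file name are placeholders, not requirements.

```python
# Sketch: train on all input data and save the model in a well-known format.
# TASK_ID and the output file name are placeholders.
import joblib
import openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

TASK_ID = 0  # placeholder
task = openml.tasks.get_task(TASK_ID)
X, y, _, _ = task.get_dataset().get_data(target=task.target_name)

model = make_pipeline(SimpleImputer(), RandomForestClassifier(random_state=0))
model.fit(X, y)                                 # model built on all input data
joblib.dump(model, "ozone_level_model.joblib")  # widely readable binary format
```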
Given inputs
Expected outputs
evaluations (KeyValue, optional): A list of user-defined evaluations of the task as key-value pairs.
model (File, optional): A file containing the model built on all the input data.
predictions (Predictions, optional): The predictions of your model, in the desired output format (a sketch of producing and uploading these follows).
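For the common case where only the predictions output is needed, openml-python can build and upload the whole run. A minimal sketch, assuming a registered OpenML account whose API key is set via openml.config.apikey, a placeholder task ID, and an illustrative classifier:

```python
# Sketch: let openml-python generate the predictions output and upload the run.
# TASK_ID and the API key are placeholders.
import openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline

openml.config.apikey = "YOUR_API_KEY"  # found in your OpenML account settings
TASK_ID = 0  # placeholder

task = openml.tasks.get_task(TASK_ID)
clf = make_pipeline(SimpleImputer(), RandomForestClassifier(random_state=0))

# Trains and evaluates the classifier on the task's prescribed folds and
# collects the per-instance test predictions the task requires.
run = openml.runs.run_model_on_task(clf, task)

# Uploads the run; the server then computes the evaluation measures.
run.publish()
print("Run uploaded with id:", run.run_id)
```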
How to submit runs
Using your favorite machine learning environment: download this task directly in your environment and automatically upload your results (see the OpenML bootcamp).
From your own software: use one of our APIs to download data from OpenML and upload your results (see the OpenML APIs; a REST sketch follows).
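If you work outside these environments, the REST API can be queried directly. A minimal sketch, assuming the requests package and a placeholder task ID, that fetches this task's description as XML (a JSON variant of the endpoint also exists):

```python
# Sketch: fetch the task description through the OpenML REST API (v1).
# TASK_ID is a placeholder for the real task ID.
import requests

TASK_ID = 0  # placeholder
url = f"https://www.openml.org/api/v1/xml/task/{TASK_ID}"
response = requests.get(url, timeout=30)
response.raise_for_status()
# XML description of the task: inputs such as the dataset and estimation procedure.
print(response.text)
```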