Task 233161: Supervised Classification on Test_vectors_1500_repaired

5 runs submitted. Downloaded by 1 person (2 total downloads). Visibility: Public.
5 Runs

All 5 runs: 0 likes, 0 downloads, 0 reach, no evaluations yet. Each run reports the same Evaluation Engine Exception: Attribute "confidence.like tweeting you like this wen yu aint talking to all yo hoes be like ill wait" not found among predictions.

    Leaderboard

    Note: The leaderboard ignores resubmissions of previous solutions, as well as parameter variations that do not improve performance.

    Challenge

    In supervised classification, you are given a dataset in which each instance is labeled with a class. The goal is to build a model that predicts the class of future, unlabeled instances. The model is evaluated with a train-test procedure such as cross-validation.
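The train-test evaluation described above can be sketched with scikit-learn. The dataset, model, and fold count here are illustrative choices, not part of this OpenML task's definition:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a labeled input dataset.
X, y = make_classification(n_samples=1500, n_features=20, random_state=0)
clf = DecisionTreeClassifier(random_state=0)

# 10-fold cross-validation: train on 9 folds, evaluate on the held-out fold,
# repeat so every instance is used as test data exactly once.
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```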

    To make results by different users comparable, you are given the exact train-test folds to be used, and you need to return at least the predictions generated by your model for each of the test instances. OpenML will use these predictions to calculate a range of evaluation measures on the server.
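In practice, making runs comparable means every submitter predicts the same held-out instances from the same folds. A minimal sketch, assuming a seeded fold assignment as a stand-in for the server-provided splits (the real splits come from the task itself):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict

X, y = make_classification(n_samples=1500, random_state=0)

# Fixed, shared fold assignment; OpenML supplies the actual folds with the task.
folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

clf = LogisticRegression(max_iter=1000)
# One out-of-fold prediction per test instance; these per-instance
# predictions are what gets uploaded for server-side evaluation.
predictions = cross_val_predict(clf, X, y, cv=folds)
assert len(predictions) == len(y)
```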

    You can also upload your own evaluation measures, provided that the code for computing them is available from the implementation you used. For extremely large datasets, it may be infeasible to upload all predictions; in that case, you must compute the evaluations yourself and provide them with the run.

    Optionally, you can upload the model trained on all the input data. There is no restriction on the file format, but please use a well-known format or PMML.

    Given inputs

    Expected outputs

    evaluations: A list of user-defined evaluations of the task, as key-value pairs. (KeyValue, optional)
    model: A file containing the model built on all the input data. (File, optional)
    predictions: The predictions generated by the model, in the required output format. (Predictions, optional)
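The exception shown in the run list above indicates the evaluation engine looks for one `confidence.<class>` column per class label alongside each prediction. A minimal sketch of assembling such a table; the column names follow that convention, but the exact schema (ARFF file with row/repeat/fold identifiers) is defined by the server, and the dataset and model here are illustrative:

```python
import csv
import io

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Per-instance class probabilities become the confidence.<class> columns.
proba = clf.predict_proba(X)
preds = clf.predict(X)

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["row_id", "prediction"] + [f"confidence.{c}" for c in clf.classes_])
for row_id, (pred, probs) in enumerate(zip(preds, proba)):
    writer.writerow([row_id, pred] + list(probs))
```

If a predictions file omits a `confidence.<class>` attribute for any class in the dataset, the engine raises exactly the "not found among predictions" error seen in this task's runs.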

    How to submit runs

    Using your favorite machine learning environment

    Download this task directly in your environment and automatically upload your results

    OpenML bootcamp

    From your own software

    Use one of our APIs to download data from OpenML and upload your results

    OpenML APIs