Task 167212: Supervised Classification on Run_or_walk_information

Task type: Supervised Classification
Dataset: Run_or_walk_information
Visibility: Public


Runs

1 run submitted, but no evaluations yet (or not applicable): the evaluation engine reported "Illegal combination of evaluation measure attributes (repeat, fold, sample)" for the per-repeat/per-fold measures (predictive_accuracy, kappa, root_mean_squared_error, root_relative_squared_error, usercpu_time_millis, usercpu_time_millis_training, usercpu_time_millis_testing); the full message was cut off due to excessive length.


    Leaderboard

    Rank | Name | Top Score | Entries | Highest rank

    Note: The leaderboard ignores resubmissions of previous solutions, as well as parameter variations that do not improve performance.

    Challenge

    In supervised classification, you are given an input dataset in which instances are labeled with a certain class. The goal is to build a model that predicts the class for future unlabeled instances. The model is evaluated using a train-test procedure, e.g. cross-validation.

    To make results by different users comparable, you are given the exact train-test folds to be used, and you need to return at least the predictions generated by your model for each of the test instances. OpenML will use these predictions to calculate a range of evaluation measures on the server.
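
    As a concrete illustration, here is a minimal sketch using the openml Python package (one of the APIs mentioned further down) together with scikit-learn; the task ID 167212 is taken from this page, and RandomForestClassifier is just a stand-in for whatever model you actually use.

```python
import openml
from sklearn.ensemble import RandomForestClassifier

# Download the task definition and the underlying dataset.
task = openml.tasks.get_task(167212)
dataset = task.get_dataset()
X, y, _, _ = dataset.get_data(target=task.target_name, dataset_format="dataframe")

# The server-defined estimation procedure fixes the exact splits;
# fetch the indices of the first repeat / first fold.
train_idx, test_idx = task.get_train_test_split_indices(repeat=0, fold=0)

# Train on the prescribed training indices, predict the test instances.
clf = RandomForestClassifier(random_state=0)
clf.fit(X.iloc[train_idx], y.iloc[train_idx])
preds = clf.predict(X.iloc[test_idx])
```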

    You can also upload your own evaluation measures, provided that the code for computing them is available from the implementation you used. For extremely large datasets, it may be infeasible to upload all predictions. In those cases, you need to compute and provide the evaluations yourself.
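
    Continuing the sketch above, a few of the server-side measures can be approximated locally with scikit-learn; note that mapping OpenML's measure names onto these metrics is an assumption here, not OpenML's reference implementation.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Evaluate the fold trained above (repeat 0, fold 0).
y_test = y.iloc[test_idx]
evaluations = {
    "predictive_accuracy": accuracy_score(y_test, preds),
    "kappa": cohen_kappa_score(y_test, preds),
}
print(evaluations)
```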

    Optionally, you can upload the model trained on all the input data. There is no restriction on the file format, but please use a well-known format or PMML.
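
    As a sketch, continuing the snippet above, the model refit on all the input data could be serialized with joblib, which is used here only as one example of a well-known format.

```python
import joblib

# Refit on the full dataset, then serialize the model to disk.
clf.fit(X, y)
joblib.dump(clf, "run_or_walk_model.joblib")
```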

    Given inputs

    Expected outputs

    evaluations: A list of user-defined evaluations of the task as key-value pairs. KeyValue (optional)
    model: A file containing the model built on all the input data. File (optional)
    predictions: The predictions of the model on the test instances, in the desired output format. Predictions (optional); a sketch of assembling such a table follows below.
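
    Continuing the snippets above, the per-instance predictions over all prescribed repeats and folds could be assembled roughly as follows; the column names (repeat, fold, row_id, prediction, per-class confidences) are illustrative, and the authoritative layout is the Predictions format the server expects.

```python
import pandas as pd

# Number of repeats/folds prescribed by the task's estimation procedure.
n_repeats, n_folds, _ = task.get_split_dimensions()

rows = []
for repeat in range(n_repeats):
    for fold in range(n_folds):
        tr, te = task.get_train_test_split_indices(repeat=repeat, fold=fold)
        clf.fit(X.iloc[tr], y.iloc[tr])
        proba = clf.predict_proba(X.iloc[te])
        fold_preds = clf.classes_[proba.argmax(axis=1)]
        for i, row_id in enumerate(te):
            row = {"repeat": repeat, "fold": fold, "row_id": int(row_id),
                   "prediction": fold_preds[i]}
            row.update({f"confidence.{c}": proba[i, j]
                        for j, c in enumerate(clf.classes_)})
            rows.append(row)

predictions = pd.DataFrame(rows)
```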

    How to submit runs

    Using your favorite machine learning environment

    Download this task directly in your environment and automatically upload your results

    OpenML bootcamp

    From your own software

    Use one of our APIs to download data from OpenML and upload your results

    OpenML APIs
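
    For example, with the openml Python package the whole procedure (download the task, run the model on the prescribed splits, upload the predictions) fits in a few lines; the API key below is a placeholder for the key from your OpenML account settings.

```python
import openml
from sklearn.ensemble import RandomForestClassifier

openml.config.apikey = "YOUR_API_KEY"  # placeholder; required for uploading

task = openml.tasks.get_task(167212)
clf = RandomForestClassifier(random_state=0)

# Runs the model locally on every prescribed repeat/fold and collects the predictions.
run = openml.runs.run_model_on_task(clf, task)
run.publish()  # uploads the predictions; the server computes the evaluation measures
print(run.run_id)
```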