Collaborative, reproducible benchmarking and analysis

Created 21-02-2019 by Jan van Rijn
Benchmarking in machine learning is often much harder than it seems, and results are difficult to reproduce. This study takes a new approach: collaborative, in-depth benchmarking of algorithms that anybody can reproduce and build on in many ways. It collects experiments from multiple researchers, run with different tools (mlr, scikit-learn, WEKA, ...), and compares them all on a benchmark set of 100 public datasets (the OpenML-100). All algorithms were run with optimized hyperparameters, using 200 iterations of random search.

A preliminary analysis of the results is available in the [associated GitHub repo](https://github.com/openml/Study-14). You can also run the notebooks of this study in the cloud with Everware: [![run at everware](https://img.shields.io/badge/run me-@everware-blue.svg)](https://everware.ysda.yandex.net/hub/oauth_login?repourl=https://github.com/openml/Study-14)
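
To give a flavour of how such an experiment can be reproduced, the sketch below uses the openml Python package together with scikit-learn to fetch the OpenML-100 benchmark suite, tune a classifier with 200 random-search iterations, and evaluate it on one of the suite's tasks. This is a minimal illustration, not the exact pipeline of the study: the choice of random forest and the hyperparameter space shown here are assumptions made for the example.

```python
import openml
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

# Fetch the OpenML-100 benchmark suite and pick one of its tasks.
suite = openml.study.get_suite("OpenML100")
task = openml.tasks.get_task(suite.tasks[0])

# 200 random-search iterations, as in the study; the search space itself
# is illustrative and not the one used by the study's participants.
clf = RandomizedSearchCV(
    RandomForestClassifier(),
    param_distributions={
        "n_estimators": range(10, 500),
        "max_features": ["sqrt", "log2", None],
        "min_samples_leaf": range(1, 20),
    },
    n_iter=200,
)

# Evaluate the tuned model on the task using the task's predefined splits.
run = openml.runs.run_model_on_task(clf, task)
print(run)
# run.publish()  # uploading the run to OpenML requires an API key
```

Because the runs are uploaded to OpenML, results produced with different tools (mlr, WEKA, scikit-learn, ...) on the same tasks can be compared directly, which is what makes the collaborative analysis in the GitHub repo possible.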