Run 9323379

Task 9954 (Supervised Classification) on one-hundred-plants-margin. Uploaded 10-10-2018 by Jan van Rijn.


Flow

sklearn.pipeline.Pipeline(columntransformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(missingindicator=sklearn.impute.MissingIndicator,imputer=sklearn.preprocessing.imputation.Imputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute.SimpleImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),bernoullinb=sklearn.naive_bayes.BernoulliNB) (1)
Automatically created scikit-learn flow.
Hyperparameters

sklearn.compose._column_transformer.ColumnTransformer (1):
    n_jobs: null
    remainder: "passthrough"
    sparse_threshold: 0.3
    transformer_weights: null

sklearn.pipeline.Pipeline, numeric branch (1):
    memory: null

sklearn.impute.MissingIndicator (1):
    error_on_new: false
    features: "missing-only"
    missing_values: NaN
    sparse: "auto"

sklearn.preprocessing.imputation.Imputer (29):
    axis: 0
    copy: true
    missing_values: "NaN"
    strategy: "most_frequent"
    verbose: 0

sklearn.preprocessing.data.StandardScaler (14):
    copy: true
    with_mean: true
    with_std: true

sklearn.pipeline.Pipeline, nominal branch (1):
    memory: null

sklearn.impute.SimpleImputer (1):
    copy: true
    fill_value: -1
    missing_values: NaN
    strategy: "constant"
    verbose: 0

sklearn.preprocessing._encoders.OneHotEncoder (3):
    categorical_features: null
    categories: null
    dtype: {"oml-python:serialized_object": "type", "value": "np.float64"}
    handle_unknown: "ignore"
    n_values: null
    sparse: true

sklearn.pipeline.Pipeline, top level (1):
    memory: null

sklearn.naive_bayes.BernoulliNB (5):
    alpha: 0.5444434156572212
    binarize: 0.0
    class_prior: null
    fit_prior: true

Result files

Description (xml): XML file describing the run, including user-defined evaluation measures.

Predictions (arff): ARFF file with instance-level predictions generated by the model.

15 Evaluation measures

All measures were computed with 10-fold Crossvalidation; the measure names were not captured in this export. Reported values:

0.4242 (per class)
-0.0038
10.6131 ± 0
0.0198 ± 0
0.0198
1600 (per class)
0.0063
6.6439
0.0063 (per class)
1.0003 ± 0
0.0995
0.0995 ± 0
1.0004