Run 10437789

Task 2275 (Supervised Classification) on dataset meta_instanceincremental.arff, uploaded 03-03-2020 by Fares Gaaloul.


Flow

sklearn.pipeline.Pipeline(columntransformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(missingindicator=sklearn.impute.MissingIndicator,imputer=sklearn.preprocessing.imputation.Imputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute.SimpleImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),adaboostclassifier=sklearn.ensemble.weight_boosting.AdaBoostClassifier(base_estimator=sklearn.tree.tree.DecisionTreeClassifier))(2)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to None.
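The '__' naming convention described above is how the hyperparameter listing below addresses nested steps. As a minimal sketch (a simplified two-step pipeline, not the full flow on this page), setting nested parameters looks like this:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import AdaBoostClassifier

# Step names mirror the flow's lower-cased class names.
pipe = Pipeline([
    ("standardscaler", StandardScaler()),
    ("adaboostclassifier", AdaBoostClassifier()),
])

# <step name>__<parameter> reaches into the nested estimator; the values here
# are the n_estimators and learning_rate recorded for this run.
pipe.set_params(
    adaboostclassifier__n_estimators=71,
    adaboostclassifier__learning_rate=0.01810332414475016,
)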
Hyperparameter settings

sklearn.pipeline.Pipeline(columntransformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(missingindicator=sklearn.impute.MissingIndicator,imputer=sklearn.preprocessing.imputation.Imputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute.SimpleImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),adaboostclassifier=sklearn.ensemble.weight_boosting.AdaBoostClassifier(base_estimator=sklearn.tree.tree.DecisionTreeClassifier))(2)
memory: null
steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "columntransformer", "step_name": "columntransformer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "adaboostclassifier", "step_name": "adaboostclassifier"}}]

sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(missingindicator=sklearn.impute.MissingIndicator,imputer=sklearn.preprocessing.imputation.Imputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute.SimpleImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder))(2)
n_jobs: null
remainder: "passthrough"
sparse_threshold: 0.3
transformer_weights: null
transformers: [{"oml-python:serialized_object": "component_reference", "value": {"key": "numeric", "step_name": "numeric", "argument_1": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "nominal", "step_name": "nominal", "argument_1": []}}]

sklearn.pipeline.Pipeline(missingindicator=sklearn.impute.MissingIndicator,imputer=sklearn.preprocessing.imputation.Imputer,standardscaler=sklearn.preprocessing.data.StandardScaler)(2)
memory: null
steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "missingindicator", "step_name": "missingindicator"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "imputer", "step_name": "imputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]

sklearn.impute.MissingIndicator(3)
error_on_new: false
features: "missing-only"
missing_values: NaN
sparse: "auto"

sklearn.preprocessing.imputation.Imputer(51)
axis: 0
copy: true
missing_values: "NaN"
strategy: "median"
verbose: 0

sklearn.preprocessing.data.StandardScaler(38)
copy: true
with_mean: true
with_std: true

sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute.SimpleImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)(6)
memory: null
steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "onehotencoder", "step_name": "onehotencoder"}}]

sklearn.impute.SimpleImputer(18)
copy: true
fill_value: -1
missing_values: NaN
strategy: "constant"
verbose: 0

sklearn.preprocessing._encoders.OneHotEncoder(20)
categorical_features: null
categories: null
dtype: {"oml-python:serialized_object": "type", "value": "np.float64"}
handle_unknown: "ignore"
n_values: null
sparse: true

sklearn.ensemble.weight_boosting.AdaBoostClassifier(base_estimator=sklearn.tree.tree.DecisionTreeClassifier)(14)
algorithm: "SAMME.R"
learning_rate: 0.01810332414475016
n_estimators: 71
random_state: 45255

sklearn.tree.tree.DecisionTreeClassifier(63)
class_weight: null
criterion: "gini"
max_depth: 9
max_features: null
max_leaf_nodes: null
min_impurity_decrease: 0.0
min_impurity_split: null
min_samples_leaf: 1
min_samples_split: 2
min_weight_fraction_leaf: 0.0
presort: false
random_state: 16546
splitter: "best"
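Taken together, the settings above fully specify the flow. The sketch below is an approximate reconstruction in a recent scikit-learn, not the serialized flow itself: the run was produced with an older release (the sklearn.preprocessing.imputation.Imputer, sklearn.preprocessing.data and sklearn.tree.tree module paths no longer exist), so the deprecated Imputer step is stood in for by a median SimpleImputer and the base estimator is passed positionally to sidestep the base_estimator/estimator rename.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import MissingIndicator, SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Numeric branch: missing-value indicator, median imputation, standardization.
# (In a plain Pipeline, MissingIndicator forwards only the indicator columns,
# so this is a literal transcription of the flow rather than a recommendation.)
numeric = Pipeline([
    ("missingindicator", MissingIndicator(features="missing-only", sparse="auto")),
    ("imputer", SimpleImputer(strategy="median")),  # stands in for the removed Imputer
    ("standardscaler", StandardScaler(with_mean=True, with_std=True)),
])

# Nominal branch: constant-fill imputation followed by one-hot encoding.
nominal = Pipeline([
    ("simpleimputer", SimpleImputer(strategy="constant", fill_value=-1)),
    ("onehotencoder", OneHotEncoder(handle_unknown="ignore", dtype=np.float64)),
])

preprocess = ColumnTransformer(
    transformers=[
        ("numeric", numeric, list(range(14))),  # columns 0-13, per the transformers entry
        ("nominal", nominal, []),               # no nominal columns in this dataset
    ],
    remainder="passthrough",
    sparse_threshold=0.3,
)

clf = Pipeline([
    ("columntransformer", preprocess),
    ("adaboostclassifier", AdaBoostClassifier(
        DecisionTreeClassifier(criterion="gini", max_depth=9, random_state=16546),
        algorithm="SAMME.R",  # as recorded in the flow; the newest releases drop SAMME.R
        n_estimators=71,
        learning_rate=0.01810332414475016,
        random_state=45255,
    )),
])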

Result files

Description (xml): XML file describing the run, including user-defined evaluation measures.
Predictions (arff): ARFF file with instance-level predictions generated by the model.
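Both files, together with the rest of the run metadata, can also be retrieved programmatically. A minimal sketch assuming the openml-python client (pip install openml):

import openml

# Fetch this run (ID 10437789) from the OpenML server.
run = openml.runs.get_run(10437789)
print(run.task_id)          # 2275
print(run.flow_name)        # the full pipeline flow name shown above
print(run.predictions_url)  # download location of the ARFF predictions file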

18 Evaluation measures (10-fold Crossvalidation)

0.7869 ± 0.142 (per class)
0.8308 ± 0.1441 (per class)
0.6076 ± 0.2731
0.605 ± 0.21
0.0811 ± 0.0463
0.2277 ± 0.0173
0.8378 ± 0.0926
74 (per class)
0.8293 ± 0.1502 (per class)
0.8378 ± 0.0926
1.2388 ± 0.1775
0.3562 ± 0.2008
0.3317 ± 0.0269
0.2847 ± 0.1101
0.8584 ± 0.3352
0.6863
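The server-computed measures above can be read back through the same client. A hedged sketch, assuming as before the openml-python package and that the fetched run exposes its evaluation measures as a dict:

import openml

run = openml.runs.get_run(10437789)
# Evaluation measures keyed by name (e.g. predictive_accuracy, area_under_roc_curve).
for measure, value in sorted(run.evaluations.items()):
    print(measure, value)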