Run 10560564
Task 146822 (Supervised Classification) on the segment dataset. Uploaded 14-08-2021 by Sergey Redyuk.


Flow

sklearn.pipeline.Pipeline(columntransformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(imputer=sklearn.preprocessing.imputation.Imputer,standardscaler=sklearn.preprocessing.data.StandardScaler),nominal=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute.SimpleImputer,onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),variancethreshold=sklearn.feature_selection.variance_threshold.VarianceThreshold,svc=sklearn.svm.classes.SVC) (version 2)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to None.
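A minimal sketch of the '__' parameter syntax mentioned above; the two-step pipeline and the parameter values here are illustrative only and are not taken from this run:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.svm import SVC

pipe = Pipeline([("standardscaler", StandardScaler()), ("svc", SVC())])

# Nested parameters are addressed as <step_name>__<parameter_name>.
pipe.set_params(svc__C=10.0, svc__kernel="poly")

# A step's estimator can also be swapped out entirely by name.
pipe.set_params(standardscaler=MinMaxScaler())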
Hyperparameter settings, grouped by pipeline component (OpenML flow version in parentheses):

sklearn.compose._column_transformer.ColumnTransformer (6)
  n_jobs: null
  remainder: "passthrough"
  sparse_threshold: 0.3
  transformer_weights: null
  transformers: numeric -> columns [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]; nominal -> columns []

sklearn.pipeline.Pipeline (9), "numeric" sub-pipeline
  memory: null
  steps: [imputer, standardscaler]

sklearn.preprocessing.imputation.Imputer (56)
  axis: 0
  copy: true
  missing_values: "NaN"
  strategy: "mean"
  verbose: 0

sklearn.preprocessing.data.StandardScaler (44)
  copy: true
  with_mean: true
  with_std: true

sklearn.pipeline.Pipeline (7), "nominal" sub-pipeline
  memory: null
  steps: [simpleimputer, onehotencoder]

sklearn.impute.SimpleImputer (19)
  copy: true
  fill_value: -1
  missing_values: NaN
  strategy: "constant"
  verbose: 0

sklearn.preprocessing._encoders.OneHotEncoder (28)
  categorical_features: null
  categories: null
  dtype: np.float64
  handle_unknown: "ignore"
  n_values: null
  sparse: true

sklearn.feature_selection.variance_threshold.VarianceThreshold (32)
  threshold: 0.0

sklearn.pipeline.Pipeline (2), top-level pipeline
  memory: null
  steps: [columntransformer, variancethreshold, svc]

sklearn.svm.classes.SVC (47)
  C: 143.11841724167172
  cache_size: 200
  class_weight: null
  coef0: 0.22921525238301776
  decision_function_shape: "ovr"
  degree: 2
  gamma: 0.0070192306738709525
  kernel: "poly"
  max_iter: -1
  probability: false
  random_state: 37904
  shrinking: false
  tol: 1.1571247532842242e-05
  verbose: false
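A sketch reconstructing this flow with current scikit-learn class names (the original flow references the since-removed sklearn.preprocessing.Imputer and the old sklearn.svm.classes / sklearn.preprocessing.data module paths). Hyperparameters are taken from the listing above; defaults and no-longer-existing parameters (axis, categorical_features, n_values) are left out. The column indices follow the transformers entry above.

from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.feature_selection import VarianceThreshold
from sklearn.svm import SVC

numeric_cols = list(range(16))   # columns 0-15, per the ColumnTransformer entry above
nominal_cols = []                # no nominal input columns for this task

# "numeric" branch: mean imputation (Imputer in the original flow) + standardization
numeric = Pipeline([
    ("imputer", SimpleImputer(strategy="mean")),
    ("standardscaler", StandardScaler()),
])

# "nominal" branch: constant-value imputation + one-hot encoding
nominal = Pipeline([
    ("simpleimputer", SimpleImputer(strategy="constant", fill_value=-1)),
    ("onehotencoder", OneHotEncoder(handle_unknown="ignore")),
])

clf = Pipeline([
    ("columntransformer", ColumnTransformer(
        [("numeric", numeric, numeric_cols), ("nominal", nominal, nominal_cols)],
        remainder="passthrough", sparse_threshold=0.3)),
    ("variancethreshold", VarianceThreshold(threshold=0.0)),
    ("svc", SVC(C=143.11841724167172, kernel="poly", degree=2,
                gamma=0.0070192306738709525, coef0=0.22921525238301776,
                shrinking=False, tol=1.1571247532842242e-05,
                probability=False, random_state=37904)),
])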

Result files

Description (xml): XML file describing the run, including user-defined evaluation measures.

Predictions (arff): ARFF file with instance-level predictions generated by the model.
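Both files can be retrieved programmatically with the openml-python package; a brief sketch, assuming the run id from this page and that the package is installed and configured:

import openml

run = openml.runs.get_run(10560564)       # fetch this run's metadata from the server
print(run.task_id, run.flow_id)           # the task and flow the run was produced from
print(run.predictions_url)                # URL of the ARFF file with instance-level predictions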

18 Evaluation measures

All measures below were estimated with 10-fold cross-validation; only the values (mean ± standard deviation across folds, where applicable) are included in this listing, not the measure names.

0.9391 ± 0.0133 (per-class values available)
0.8953 ± 0.0224 (per-class values available)
0.8783 ± 0.0266
0.8874 ± 0.0246
0.0298 ± 0.0065
0.2449
0.8957 ± 0.0228
2310 (per-class values available)
0.8967 ± 0.0221 (per-class values available)
0.8957 ± 0.0228
2.8074
0.1217 ± 0.0266
0.3499
0.1727 ± 0.0193
0.4934 ± 0.0551
0.8957 ± 0.0228
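The "value ± standard deviation" figures above come from scoring each of the 10 folds separately and aggregating. A rough local sketch of the same idea, reusing the clf pipeline reconstructed earlier and fetching the task data through openml-python; note that OpenML evaluates on the task's predefined fold splits, so locally computed numbers will differ slightly:

import openml
from sklearn.model_selection import cross_val_score

# Features and labels of the task this run was produced on (task 146822, dataset "segment").
task = openml.tasks.get_task(146822)
X, y = task.get_X_and_y()

# One score per fold; report mean +/- standard deviation, as in the listing above.
scores = cross_val_score(clf, X, y, cv=10)
print(f"{scores.mean():.4f} +/- {scores.std():.4f}")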