Run 10555384

Task 125920 (Supervised Classification) on the dresses-sales dataset. Uploaded 08-08-2020 by Heinrich Peters.
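This run and its task can be retrieved programmatically. A minimal sketch with the openml-python client, assuming the package is installed and both entities are public (IDs taken from the listing above):

    import openml

    run = openml.runs.get_run(10555384)      # this run
    task = openml.tasks.get_task(125920)     # the Supervised Classification task
    dataset = task.get_dataset()             # dresses-sales

    print(run.flow_name)
    print(dataset.name)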


Flow

sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer, columntransformer=sklearn.compose._column_transformer.ColumnTransformer(num=sklearn.pipeline.Pipeline(standardscaler=sklearn.preprocessing.data.StandardScaler), cat=sklearn.pipeline.Pipeline(onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)), logisticregression=sklearn.linear_model.logistic.LogisticRegression)(2)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement fit and transform methods. The final estimator only needs to implement fit. The transformers in the pipeline can be cached using the ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to 'passthrough' or ``None``.
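The '__' convention mentioned above can be illustrated with a small sketch. The step names are the lowercased class names, matching the keys in the flow name; the estimators and values here are only examples:

    from sklearn.pipeline import make_pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression

    pipe = make_pipeline(SimpleImputer(strategy="most_frequent"), LogisticRegression())

    # <step name>__<parameter name> addresses a parameter of a single step
    pipe.set_params(logisticregression__C=0.1)

    # a whole step can be swapped out or disabled by name
    pipe.set_params(simpleimputer="passthrough")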
Parameter settings

sklearn.preprocessing.data.StandardScaler(35)
    copy = true
    with_mean = true
    with_std = true

sklearn.impute._base.SimpleImputer(11)
    add_indicator = false
    copy = true
    fill_value = null
    missing_values = NaN
    strategy = "most_frequent"
    verbose = 0

sklearn.preprocessing._encoders.OneHotEncoder(16)
    categorical_features = null
    categories = null
    drop = null
    dtype = {"oml-python:serialized_object": "type", "value": "np.float64"}
    handle_unknown = "ignore"
    n_values = null
    sparse = true

sklearn.linear_model.logistic.LogisticRegression(33)
    C = 0.1
    class_weight = null
    dual = false
    fit_intercept = true
    intercept_scaling = 1
    l1_ratio = null
    max_iter = 10000
    multi_class = "warn"
    n_jobs = null
    penalty = "l2"
    random_state = 1
    solver = "lbfgs"
    tol = 0.0001
    verbose = 0
    warm_start = false

sklearn.compose._column_transformer.ColumnTransformer(num=sklearn.pipeline.Pipeline(standardscaler=sklearn.preprocessing.data.StandardScaler), cat=sklearn.pipeline.Pipeline(onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder))(6)
    n_jobs = null
    remainder = "drop"
    sparse_threshold = 0.3
    transformer_weights = null
    transformers = [{"oml-python:serialized_object": "component_reference", "value": {"key": "num", "step_name": "num", "argument_1": [false, false, true, false, false, false, false, false, false, false, false, false]}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "cat", "step_name": "cat", "argument_1": [true, true, false, true, true, true, true, true, true, true, true, true]}}]
    verbose = false

sklearn.pipeline.Pipeline(standardscaler=sklearn.preprocessing.data.StandardScaler)(6)
    memory = null
    steps = [{"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]
    verbose = false

sklearn.pipeline.Pipeline(onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)(7)
    memory = null
    steps = [{"oml-python:serialized_object": "component_reference", "value": {"key": "onehotencoder", "step_name": "onehotencoder"}}]
    verbose = false

sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer, columntransformer=sklearn.compose._column_transformer.ColumnTransformer(num=sklearn.pipeline.Pipeline(standardscaler=sklearn.preprocessing.data.StandardScaler), cat=sklearn.pipeline.Pipeline(onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)), logisticregression=sklearn.linear_model.logistic.LogisticRegression)(2)
    memory = null
    steps = [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "columntransformer", "step_name": "columntransformer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "logisticregression", "step_name": "logisticregression"}}]
    verbose = false
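Put together, the settings above describe an estimator along the following lines. This is a sketch only: it uses the current scikit-learn API (the flow was exported from an older release, whose default sparse=true output for OneHotEncoder is therefore not spelled out here), and the boolean column masks are copied from the ColumnTransformer's transformers entry.

    from sklearn.pipeline import Pipeline
    from sklearn.impute import SimpleImputer
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import StandardScaler, OneHotEncoder
    from sklearn.linear_model import LogisticRegression

    # Column masks taken from the `transformers` setting above
    # (True marks the columns routed to that branch).
    num_mask = [False, False, True, False, False, False, False, False, False, False, False, False]
    cat_mask = [True, True, False, True, True, True, True, True, True, True, True, True]

    preprocess = ColumnTransformer(
        transformers=[
            ("num", Pipeline([("standardscaler", StandardScaler())]), num_mask),
            ("cat", Pipeline([("onehotencoder", OneHotEncoder(handle_unknown="ignore"))]), cat_mask),
        ],
        remainder="drop",
        sparse_threshold=0.3,
    )

    clf = Pipeline([
        ("simpleimputer", SimpleImputer(strategy="most_frequent")),
        ("columntransformer", preprocess),
        ("logisticregression", LogisticRegression(
            C=0.1, penalty="l2", solver="lbfgs", max_iter=10000,
            tol=1e-4, random_state=1)),
    ])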

Result files

Description (xml): XML file describing the run, including user-defined evaluation measures.

Predictions (arff): ARFF file with instance-level predictions generated by the model.
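A sketch of inspecting the predictions file locally, assuming it has been downloaded next to the script; the filename is hypothetical:

    import pandas as pd
    from scipy.io import arff

    data, meta = arff.loadarff("predictions.arff")   # hypothetical local copy of the ARFF file
    df = pd.DataFrame(data)

    print(meta.names())   # attribute (column) names declared in the ARFF header
    print(df.head())      # first few instance-level predictions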

18 Evaluation measures (10-fold Crossvalidation)

0.6547 ± 0.0708 (per class)
0.6135 ± 0.0726 (per class)
0.2077 ± 0.142
0.0855 ± 0.0466
0.4517 ± 0.0184
0.4873
0.634 ± 0.0626
500 (per class)
0.6274 ± 0.0789 (per class)
0.634 ± 0.0626
0.9815
0.927 ± 0.0377
0.4936
0.4764 ± 0.0162
0.9652 ± 0.0328
0.5984 ± 0.0679
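The '±' values above are means and standard deviations over the ten cross-validation folds. A minimal sketch of how such fold-level aggregates are produced, using a simplified stand-in model rather than the exact flow; the dataset name and version passed to fetch_openml are assumptions, and the split and scoring here are not guaranteed to match OpenML's estimation procedure:

    from sklearn.datasets import fetch_openml
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Dataset name as shown above; version 1 is an assumption.
    X, y = fetch_openml("dresses-sales", version=1, as_frame=True, return_X_y=True)

    # Simplified stand-in for the flow: impute, one-hot encode everything, logistic regression.
    model = make_pipeline(
        SimpleImputer(strategy="most_frequent"),
        OneHotEncoder(handle_unknown="ignore"),
        LogisticRegression(C=0.1, max_iter=10000, random_state=1),
    )

    scores = cross_val_score(model, X, y,
                             cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=1))
    print(f"{scores.mean():.4f} ± {scores.std():.4f}")   # mean accuracy over 10 folds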