Run 10591846

Task 2295 (Supervised Regression) on the cholesterol dataset. Uploaded 10-02-2023 by Sharath Kumar Reddy Alijarla.
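A minimal sketch, assuming the openml-python client, of fetching Task 2295 and its underlying cholesterol dataset; the exact get_data signature varies between openml-python versions.

import openml

# Fetch the task this run was executed on (Supervised Regression on cholesterol).
task = openml.tasks.get_task(2295)
dataset = task.get_dataset()

# Load the data as a pandas DataFrame; the task's target_name is the regression target.
X, y, _, _ = dataset.get_data(target=task.target_name, dataset_format="dataframe")
print(X.shape, y.name)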


Flow

sklearn.pipeline.Pipeline(columntransformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler),nominal=sklearn.pipeline.Pipeline(onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),variancethreshold=sklearn.feature_selection._variance_threshold.VarianceThreshold,randomforestregressor=sklearn.ensemble._forest.RandomForestRegressor)(1)

Pipeline of transforms with a final estimator. Sequentially apply a list of transforms and a final estimator. Intermediate steps of the pipeline must be 'transforms', that is, they must implement `fit` and `transform` methods. The final estimator only needs to implement `fit`. The transformers in the pipeline can be cached using ``memory`` argument. The purpose of the pipeline is to assemble several steps that can be cross-validated together while setting different parameters. For this, it enables setting parameters of the various steps using their names and the parameter name separated by a `'__'`, as in the example below. A step's estimator may be replaced entirely by setting the parameter with its name to another estimator, or a transformer removed by setting it to `'passthrough'` or `None`.

sklearn.preprocessing._data.StandardScaler(11)
    copy: true
    with_mean: true
    with_std: true

sklearn.impute._base.SimpleImputer(30)
    add_indicator: false
    copy: true
    fill_value: null
    missing_values: NaN
    strategy: "mean"
    verbose: 0

sklearn.preprocessing._encoders.OneHotEncoder(31)
    categories: "auto"
    drop: null
    dtype: {"oml-python:serialized_object": "type", "value": "np.float64"}
    handle_unknown: "ignore"
    sparse: true

sklearn.feature_selection._variance_threshold.VarianceThreshold(7)
    threshold: 0.0

sklearn.ensemble._forest.RandomForestRegressor(3)
    bootstrap: true
    ccp_alpha: 0.0
    criterion: "friedman_mse"
    max_depth: null
    max_features: 0.46403148108369807
    max_leaf_nodes: null
    max_samples: null
    min_impurity_decrease: 0.0
    min_samples_leaf: 4
    min_samples_split: 14
    min_weight_fraction_leaf: 0.0
    n_estimators: 100
    n_jobs: null
    oob_score: false
    random_state: 55781
    verbose: 0
    warm_start: false

sklearn.pipeline.Pipeline(columntransformer=sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler),nominal=sklearn.pipeline.Pipeline(onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)),variancethreshold=sklearn.feature_selection._variance_threshold.VarianceThreshold,randomforestregressor=sklearn.ensemble._forest.RandomForestRegressor)(1)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "columntransformer", "step_name": "columntransformer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "variancethreshold", "step_name": "variancethreshold"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "randomforestregressor", "step_name": "randomforestregressor"}}]
    verbose: false

sklearn.compose._column_transformer.ColumnTransformer(numeric=sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler),nominal=sklearn.pipeline.Pipeline(onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder))(1)
    n_jobs: null
    remainder: "drop"
    sparse_threshold: 0.3
    transformer_weights: null
    transformers: [{"oml-python:serialized_object": "component_reference", "value": {"key": "numeric", "step_name": "numeric", "argument_1": [0, 3, 6, 8, 10, 12]}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "nominal", "step_name": "nominal", "argument_1": [1, 2, 4, 5, 7, 9, 11]}}]
    verbose: false
    verbose_feature_names_out: true

sklearn.pipeline.Pipeline(simpleimputer=sklearn.impute._base.SimpleImputer,standardscaler=sklearn.preprocessing._data.StandardScaler)(1)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "simpleimputer", "step_name": "simpleimputer"}}, {"oml-python:serialized_object": "component_reference", "value": {"key": "standardscaler", "step_name": "standardscaler"}}]
    verbose: false

sklearn.pipeline.Pipeline(onehotencoder=sklearn.preprocessing._encoders.OneHotEncoder)(10)
    memory: null
    steps: [{"oml-python:serialized_object": "component_reference", "value": {"key": "onehotencoder", "step_name": "onehotencoder"}}]
    verbose: false
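
A minimal sketch reconstructing this flow in scikit-learn from the settings above. The column indices are taken from the ColumnTransformer's transformers entries; legacy arguments such as the imputer's verbose and the encoder's sparse flag are left at their defaults, since the exact scikit-learn version used for this run is an assumption.

from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import VarianceThreshold
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = [0, 3, 6, 8, 10, 12]    # from the "numeric" transformer entry
nominal_cols = [1, 2, 4, 5, 7, 9, 11]  # from the "nominal" transformer entry

numeric = Pipeline([
    ("simpleimputer", SimpleImputer(strategy="mean")),
    ("standardscaler", StandardScaler()),
])
nominal = Pipeline([
    ("onehotencoder", OneHotEncoder(handle_unknown="ignore")),
])

pipe = Pipeline([
    ("columntransformer", ColumnTransformer(
        [("numeric", numeric, numeric_cols),
         ("nominal", nominal, nominal_cols)],
        remainder="drop", sparse_threshold=0.3)),
    ("variancethreshold", VarianceThreshold(threshold=0.0)),
    ("randomforestregressor", RandomForestRegressor(
        n_estimators=100,
        criterion="friedman_mse",
        max_features=0.46403148108369807,
        min_samples_leaf=4,
        min_samples_split=14,
        random_state=55781,
    )),
])

As the flow description notes, parameters of nested steps can be addressed with the '__' convention, e.g. pipe.set_params(randomforestregressor__min_samples_leaf=4).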

Result files

Description (xml): XML file describing the run, including user-defined evaluation measures.

Predictions (arff): ARFF file with instance-level predictions generated by the model.
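
A minimal sketch, assuming the openml-python client, of retrieving this run and the result files listed above; attribute names such as predictions_url and evaluations may differ between client versions.

import openml

run = openml.runs.get_run(10591846)
print(run.flow_name)        # the pipeline flow shown above
print(run.predictions_url)  # URL of the ARFF file with instance-level predictions
print(run.evaluations)      # server-side evaluation measures, if populated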

7 Evaluation measures (10-fold cross-validation)

39.3139 ± 6.092
303
51.6914 ± 10.5853
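
A minimal sketch of a local 10-fold cross-validation of the reconstructed pipeline; pipe, X, and y are assumed to come from the sketches above, and MAE/RMSE are assumptions, since the measure names behind the values listed here are not labelled.

from sklearn.model_selection import cross_val_score

# `pipe`, `X`, `y` are assumed to be defined as in the earlier sketches.
mae = -cross_val_score(pipe, X, y, cv=10, scoring="neg_mean_absolute_error")
rmse = -cross_val_score(pipe, X, y, cv=10, scoring="neg_root_mean_squared_error")
print(f"MAE  {mae.mean():.4f} ± {mae.std():.4f}")
print(f"RMSE {rmse.mean():.4f} ± {rmse.std():.4f}")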