Flow
sklearn.ensemble.forest.RandomForestClassifier

Visibility: public. Uploaded 13-08-2021 by Sergey Redyuk. Dependencies: sklearn==0.18, numpy>=1.6.1, scipy>=0.9. 42 runs.
Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.18
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is always the same as the original input sample size, but the samples are drawn with replacement if `bootstrap=True` (default).
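For illustration, here is a minimal sketch (not part of the flow itself) of fitting the classifier this flow wraps. The Iris dataset, the train/test split, and the seed are assumptions made for the example, not recorded in any run:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data purely for illustration; any classification dataset works.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators=10 and bootstrap=True match the defaults listed below.
clf = RandomForestClassifier(n_estimators=10, bootstrap=True, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on the held-out split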

Parameters

bootstrap: Whether bootstrap samples are used when building trees. default: true
class_weight: dict, list of dicts, "balanced", "balanced_subsample" or None, optional (default=None). Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))``. The "balanced_subsample" mode is the same as "balanced" except that weights are computed based on the bootstrap sample for every tree grown. For multi-output, the weights of each column of y will be multiplied. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. default: null
criterion: The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain. Note: this parameter is tree-specific. default: "gini"
max_depth: The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples. default: null
max_features: The number of features to consider when looking for the best split: if int, then consider `max_features` features at each split; if float, then `max_features` is a percentage and `int(max_features * n_features)` features are considered at each split; if "auto", then `max_features=sqrt(n_features)`; if "sqrt", then `max_features=sqrt(n_features)` (same as "auto"); if "log2", then `max_features=log2(n_features)`; if None, then `max_features=n_features`. Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than ``max_features`` features. default: "auto"
max_leaf_nodes: Grow trees with ``max_leaf_nodes`` in best-first fashion. Best nodes are defined by relative reduction in impurity. If None, the number of leaf nodes is unlimited. default: null
min_impurity_split: Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold; otherwise it is a leaf. (New in version 0.18.) default: 1e-07
min_samples_leaf: The minimum number of samples required to be at a leaf node: if int, then consider `min_samples_leaf` as the minimum number; if float, then `min_samples_leaf` is a percentage and `ceil(min_samples_leaf * n_samples)` is the minimum number of samples for each node. (Changed in version 0.18: added float values for percentages.) default: 1
min_samples_split: The minimum number of samples required to split an internal node: if int, then consider `min_samples_split` as the minimum number; if float, then `min_samples_split` is a percentage and `ceil(min_samples_split * n_samples)` is the minimum number of samples for each split. (Changed in version 0.18: added float values for percentages.) default: 2
min_weight_fraction_leaf: The minimum weighted fraction of the input samples required to be at a leaf node. default: 0.0
n_estimators: The number of trees in the forest. default: 10
n_jobs: The number of jobs to run in parallel for both `fit` and `predict`. If -1, then the number of jobs is set to the number of cores. default: 1
oob_score: Whether to use out-of-bag samples to estimate the generalization accuracy. default: false
random_state: If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by `np.random`. default: null
verbose: Controls the verbosity of the tree building process. default: 0
warm_start: When set to ``True``, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest. default: false
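As a sketch of how the parameters above map onto the constructor, here is one possible configuration. Except where a comment says "default", the values (enabling oob_score, the class_weight mode, the seed) are assumptions chosen for illustration, not settings recorded by this flow:

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(
    n_estimators=10,                    # default: 10 trees
    criterion="gini",                   # default split-quality measure
    max_features="auto",                # default: sqrt(n_features) per split
    bootstrap=True,                     # default: draw bootstrap samples per tree
    oob_score=True,                     # assumption: enable out-of-bag accuracy estimation
    class_weight="balanced_subsample",  # assumption: reweight classes per bootstrap sample
    n_jobs=-1,                          # use all cores for fit and predict
    random_state=0,                     # assumed seed, for reproducibility only
)
# After clf.fit(X, y), clf.oob_score_ holds the out-of-bag accuracy estimate.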
