bootstrap | Whether bootstrap samples are used when building trees. If False, the
whole dataset is used to build each tree | default: true |
ccp_alpha | Complexity parameter used for Minimal Cost-Complexity Pruning. The
subtree with the largest cost complexity that is smaller than
``ccp_alpha`` will be chosen. By default, no pruning is performed. See
:ref:`minimal_cost_complexity_pruning` for details
.. versionadded:: 0.22 | default: 0.0 |
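As an illustration only (toy data and an assumed alpha grid, not part of this reference), sweeping ``ccp_alpha`` might look like::

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(random_state=0)  # placeholder dataset
    for alpha in (0.0, 0.01, 0.05):  # assumed candidate pruning strengths
        clf = RandomForestClassifier(ccp_alpha=alpha, random_state=0).fit(X, y)
        print(alpha, clf.score(X, y))  # larger alpha prunes more aggressively

Larger values prune more of each tree; ``ccp_alpha=0.0`` (the default) disables pruning.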
class_weight | Weights associated with classes in the form ``{class_label: weight}``
If not given, all classes are supposed to have weight one. For
multi-output problems, a list of dicts can be provided in the same
order as the columns of y
Note that for multioutput (including multilabel) weights should be
defined for each class of every column in its own dict. For example,
for four-class multilabel classification weights should be
[{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of
[{1:1}, {2:5}, {3:1}, {4:1}]
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``. The "balanced_subsample"
mode is the same as "balanced" except that weights are computed based
on the bootstrap sample for every tree grown | default: null |
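A minimal sketch of the two forms described above (the labels and weights are illustrative assumptions)::

    from sklearn.ensemble import RandomForestClassifier

    # explicit per-class weights: class 1 counts five times as much as class 0
    clf = RandomForestClassifier(class_weight={0: 1, 1: 5})
    # or recompute "balanced" weights on each tree's bootstrap sample
    clf = RandomForestClassifier(class_weight="balanced_subsample")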
criterion | The function to measure the quality of a split. Supported criteria are
"gini" for the Gini impurity and "log_loss" and "entropy" both for the
Shannon information gain, see :ref:`tree_mathematical_formulation`
Note: This parameter is tree-specific | default: "gini" |
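For example, switching to information gain is a one-line change::

    from sklearn.ensemble import RandomForestClassifier

    clf = RandomForestClassifier(criterion="entropy")  # Shannon information gain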
max_depth | The maximum depth of the tree. If None, then nodes are expanded until
all leaves are pure or until all leaves contain less than
min_samples_split samples | default: null |
max_features | The number of features to consider when looking for the best split:
- If int, then consider `max_features` features at each split
- If float, then `max_features` is a fraction and
`max(1, int(max_features * n_features_in_))` features are considered at each
split
- If "auto", then `max_features=sqrt(n_features)`
- If "sqrt", then `max_features=sqrt(n_features)`
- If "log2", then `max_features=log2(n_features)`
- If None, then `max_features=n_features`
.. versionchanged:: 1.1
The default of `max_features` changed from `"auto"` to `"sqrt"`
.. deprecated:: 1.1
The `"auto"` option was deprecated in 1.1 and will be removed
in 1.3
Note: the search for a split does not stop until at least one valid
partition of the node samples is found, even if it requires to
effectively inspect more than ``max_features`` features | default: "sqrt" |
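To make the options above concrete, a quick sketch (``n_features = 100`` is an assumed value, not from this document)::

    import math

    n_features = 100                            # assumed feature count
    print(max(1, int(0.3 * n_features)))        # float 0.3 -> 30 features per split
    print(max(1, int(math.sqrt(n_features))))   # "sqrt"    -> 10 features per split
    print(max(1, int(math.log2(n_features))))   # "log2"    -> 6 features per split
    print(n_features)                           # None      -> all 100 features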
max_leaf_nodes | Grow trees with ``max_leaf_nodes`` in best-first fashion
Best nodes are defined as relative reduction in impurity
If None then unlimited number of leaf nodes | default: null |
max_samples | If bootstrap is True, the number of samples to draw from X
to train each base estimator
- If None (default), then draw `X.shape[0]` samples
- If int, then draw `max_samples` samples
- If float, then draw `max_samples * X.shape[0]` samples. Thus,
`max_samples` should be in the interval `(0.0, 1.0]`
.. versionadded:: 0.22 | default: null |
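For illustration, both spellings described above (the values are arbitrary)::

    from sklearn.ensemble import RandomForestClassifier

    # a float draws that fraction of X's rows for each tree
    clf = RandomForestClassifier(bootstrap=True, max_samples=0.8)
    # an int draws that absolute number of rows instead
    clf = RandomForestClassifier(bootstrap=True, max_samples=200)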
min_impurity_decrease | A node will be split if this split induces a decrease of the impurity
greater than or equal to this value
The weighted impurity decrease equation is the following::

    N_t / N * (impurity - N_t_R / N_t * right_impurity
                        - N_t_L / N_t * left_impurity)
where ``N`` is the total number of samples, ``N_t`` is the number of
samples at the current node, ``N_t_L`` is the number of samples in the
left child, and ``N_t_R`` is the number of samples in the right child
``N``, ``N_t``, ``N_t_R`` and ``N_t_L`` all refer to the weighted sum,
if ``sample_weight`` is passed
.. versionadded:: 0.19 | default: 0.0 |
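The equation above translates directly into code; this sketch (with made-up node counts and impurities) shows the quantity a candidate split must reach::

    def weighted_impurity_decrease(N, N_t, N_t_L, N_t_R,
                                   impurity, left_impurity, right_impurity):
        # direct transcription of the equation above
        return (N_t / N) * (impurity
                            - (N_t_R / N_t) * right_impurity
                            - (N_t_L / N_t) * left_impurity)

    # e.g. a node holding 500 of 1000 samples split into two pure halves
    print(weighted_impurity_decrease(1000, 500, 250, 250, 0.5, 0.0, 0.0))  # 0.25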
min_samples_leaf | The minimum number of samples required to be at a leaf node
A split point at any depth will only be considered if it leaves at
least ``min_samples_leaf`` training samples in each of the left and
right branches. This may have the effect of smoothing the model,
especially in regression
- If int, then consider `min_samples_leaf` as the minimum number
- If float, then `min_samples_leaf` is a fraction and
`ceil(min_samples_leaf * n_samples)` are the minimum
number of samples for each node
.. versionchanged:: 0.18
Added float values for fractions | default: 1 |
min_samples_split | The minimum number of samples required to split an internal node:
- If int, then consider `min_samples_split` as the minimum number
- If float, then `min_samples_split` is a fraction and
`ceil(min_samples_split * n_samples)` are the minimum
number of samples for each split
.. versionchanged:: 0.18
Added float values for fractions | default: 2 |
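The fractional forms of ``min_samples_leaf`` and ``min_samples_split`` resolve to sample counts via ``ceil``; a quick check with an assumed training-set size::

    from math import ceil

    n_samples = 150                 # assumed number of training samples
    print(ceil(0.10 * n_samples))   # min_samples_split=0.10 -> 15 samples
    print(ceil(0.05 * n_samples))   # min_samples_leaf=0.05  -> 8 samples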
min_weight_fraction_leaf | The minimum weighted fraction of the sum total of weights (of all
the input samples) required to be at a leaf node. Samples have
equal weight when sample_weight is not provided | default: 0.0 |
n_estimators | The number of trees in the forest
.. versionchanged:: 0.22
The default value of ``n_estimators`` changed from 10 to 100
in 0.22 | default: 100 |
n_jobs | The number of jobs to run in parallel. :meth:`fit`, :meth:`predict`,
:meth:`decision_path` and :meth:`apply` are all parallelized over the
trees. ``None`` means 1 unless in a :obj:`joblib.parallel_backend`
context. ``-1`` means using all processors. See :term:`Glossary
<n_jobs>` for more details | default: null |
oob_score | Whether to use out-of-bag samples to estimate the generalization score
Only available if bootstrap=True | default: false |
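A hedged sketch of reading the out-of-bag estimate (toy data, illustrative only)::

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=500, random_state=0)  # placeholder data
    clf = RandomForestClassifier(oob_score=True, bootstrap=True,
                                 random_state=0).fit(X, y)
    print(clf.oob_score_)  # generalization score estimated on out-of-bag rows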
random_state | Controls both the randomness of the bootstrapping of the samples used
when building trees (if ``bootstrap=True``) and the sampling of the
features to consider when looking for the best split at each node
(if ``max_features < n_features``)
See :term:`Glossary <random_state>` for details | default: null |
verbose | Controls the verbosity when fitting and predicting | default: 0 |
warm_start | When set to ``True``, reuse the solution of the previous call to fit
and add more estimators to the ensemble, otherwise, just fit a whole
new forest. See :term:`Glossary <warm_start>` and
:ref:`gradient_boosting_warm_start` for details | default: false |
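A minimal sketch of growing an existing forest with ``warm_start`` (toy data; the tree counts are arbitrary)::

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(random_state=0)  # placeholder dataset
    clf = RandomForestClassifier(n_estimators=100, warm_start=True,
                                 random_state=0).fit(X, y)  # first 100 trees
    clf.set_params(n_estimators=150)
    clf.fit(X, y)  # adds 50 more trees instead of refitting from scratch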