Parameter | Description | Default |
---|---|---|
l2_regularization | The L2 regularization parameter. Use 0 for no regularization | default: 0.0 |
learning_rate | The learning rate, also known as *shrinkage*. This is used as a multiplicative factor for the leaf values. Use ``1`` for no shrinkage | default: 0.1 |
loss | The loss function to use in the boosting process | default: "auto" |
max_bins | The maximum number of bins to use for non-missing values. Before training, each feature of the input array `X` is binned into integer-valued bins, which allows for a much faster training stage. Features with a small number of unique values may use less than ``max_bins`` bins. In addition to the ``max_bins`` bins, one more bin is always reserved for missing values. Must be no larger than 255 | default: 255 |
max_depth | The maximum depth of each tree. The depth of a tree is the number of nodes to go from the root to the deepest leaf. Must be strictly greater than 1. Depth isn't constrained by default | default: null |
max_iter | The maximum number of iterations of the boosting process, i.e. the maximum number of trees for binary classification. For multiclass classification, `n_classes` trees per iteration are built | default: 100 |
max_leaf_nodes | The maximum number of leaves for each tree. Must be strictly greater than 1. If None, there is no maximum limit | default: 31 |
min_samples_leaf | The minimum number of samples per leaf. For small datasets with less than a few hundred samples, it is recommended to lower this value since only very shallow trees would be built | default: 20 |
n_iter_no_change | Used to determine when to "early stop". The fitting process is stopped when none of the last ``n_iter_no_change`` scores are better than the ``n_iter_no_change - 1`` -th-to-last one, up to some tolerance. If None or 0, no early-stopping is done | default: null |
random_state | Pseudo-random number generator to control the subsampling in the binning process, and the train/validation data split if early stopping is enabled. See :term:`random_state`. | default: null |
scoring | Scoring parameter to use for early stopping. It can be a single string (see :ref:`scoring_parameter`) or a callable (see :ref:`scoring`). If None, the estimator's default scorer is used. If ``scoring='loss'``, early stopping is checked w.r.t. the loss value. Only used if ``n_iter_no_change`` is not None | default: null |
tol | The absolute tolerance to use when comparing scores. The higher the tolerance, the more likely we are to early stop: higher tolerance means that it will be harder for subsequent iterations to be considered an improvement upon the reference score | default: 1e-07 |
validation_fraction | Proportion (or absolute size) of training data to set aside as validation data for early stopping. If None, early stopping is done on the training data | default: 0.1 |
verbose | The verbosity level. If not zero, print some information about the fitting process | default: 0 |
warm_start | When set to ``True``, reuse the solution of the previous call to fit and add more estimators to the ensemble. For results to be valid, the estimator should be re-trained on the same data only. See :term:`the Glossary <warm_start>` | default: false |
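
The parameters above correspond to scikit-learn's ``HistGradientBoostingClassifier``. The sketch below is a minimal, illustrative example (not part of the reference table): it shows how the early-stopping parameters (``n_iter_no_change``, ``scoring``, ``validation_fraction``, ``tol``) and ``warm_start`` fit together. The dataset and the specific values are placeholders, and depending on the scikit-learn release an additional ``early_stopping`` flag may also control whether early stopping is active.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
# On scikit-learn < 1.0 the estimator is experimental and needs:
# from sklearn.experimental import enable_hist_gradient_boosting  # noqa
from sklearn.ensemble import HistGradientBoostingClassifier

# Placeholder data; any binary classification dataset works the same way.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = HistGradientBoostingClassifier(
    learning_rate=0.1,        # shrinkage applied to the leaf values
    max_iter=100,             # maximum number of boosting iterations (trees)
    max_leaf_nodes=31,        # at most 31 leaves per tree
    min_samples_leaf=20,      # each leaf needs at least 20 samples
    l2_regularization=0.0,    # 0 means no L2 regularization
    max_bins=255,             # features are binned into at most 255 bins
    n_iter_no_change=10,      # used to decide when to early stop (see table)
    scoring="loss",           # early stopping checked w.r.t. the loss value
    validation_fraction=0.1,  # 10% of the training data held out for early stopping
    tol=1e-7,                 # absolute tolerance when comparing scores
    random_state=0,
    verbose=0,
)
clf.fit(X_train, y_train)
print("iterations actually run:", clf.n_iter_)
print("test accuracy:", clf.score(X_test, y_test))

# warm_start: grow the same ensemble further by raising max_iter and
# calling fit again on the same training data.
clf.set_params(warm_start=True, max_iter=150)
clf.fit(X_train, y_train)
```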