aggressive_elimination | This is only relevant in cases where there aren't enough resources to
eliminate enough candidates at the last iteration. If ``True``, then
the search process will 'replay' the first iteration for as long as
needed until the number of candidates is small enough. This is
``False`` by default, which means that the last iteration may evaluate
more than ``ratio`` candidates | default: false |
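A minimal sketch of enabling aggressive elimination. Note this assumes a released scikit-learn, where the class must be enabled via the experimental flag and the halving proportion documented here as ``ratio`` is exposed as ``factor``; the estimator and parameter lists are illustrative only.

```python
# Sketch: HalvingRandomSearchCV with aggressive_elimination=True.
# Assumes released scikit-learn, where ``ratio`` (as documented above)
# is named ``factor``.
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingRandomSearchCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, random_state=0)
search = HalvingRandomSearchCV(
    RandomForestClassifier(n_estimators=10, random_state=0),
    param_distributions={"max_depth": [2, 3, 4, None],
                         "min_samples_split": [2, 5, 10]},
    factor=3,                     # the ``ratio`` of the text above
    aggressive_elimination=True,  # replay the first iteration until the
                                  # candidate pool is small enough
    random_state=0,
).fit(X, y)
print(search.best_params_)
```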
cv | Determines the cross-validation splitting strategy
Possible inputs for cv are:
- integer, to specify the number of folds in a `(Stratified)KFold`,
- :term:`CV splitter`,
- An iterable yielding (train, test) splits as arrays of indices
For integer/None inputs, if the estimator is a classifier and ``y`` is
either binary or multiclass, :class:`StratifiedKFold` is used. In all
other cases, :class:`KFold` is used
Refer to the :ref:`User Guide ` for the various
cross-validation strategies that can be used here | default: 5 |
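The first two accepted forms of ``cv`` can be sketched as follows (shown with ``cross_val_score`` purely for illustration; an iterable of ``(train, test)`` index arrays is also accepted):

```python
# Sketch of the integer and CV-splitter forms of ``cv``.
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
est = LogisticRegression(max_iter=1000)

# int with a classifier and multiclass y -> StratifiedKFold under the hood
scores_int = cross_val_score(est, X, y, cv=5)
# explicit CV splitter object
scores_split = cross_val_score(est, X, y, cv=KFold(n_splits=5))
```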
error_score | Value to assign to the score if an error occurs in estimator fitting
If set to 'raise', the error is raised. If a numeric value is given,
FitFailedWarning is raised. This parameter does not affect the refit
step, which will always raise the error. Default is ``np.nan`` | default: NaN |
estimator | This is assumed to implement the scikit-learn estimator interface
Either estimator needs to provide a ``score`` function,
or ``scoring`` must be passed | default: {"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": null}} |
force_exhaust_resources | If True, then ``min_resources`` is set to a specific value such that
the last iteration uses as many resources as possible. Namely, the
last iteration uses the highest value smaller than ``max_resources``
that is a multiple of both ``min_resources`` and ``ratio``. | default: false |
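The "highest value smaller than ``max_resources``" above can be illustrated with a small computation. In successive halving the budget grows as ``min_resources * ratio**k``, so the sketch below (with illustrative numbers) searches for the largest such value not exceeding ``max_resources``:

```python
# Illustrative: largest budget of the form min_resources * ratio**k
# that does not exceed max_resources.
min_resources, ratio, max_resources = 20, 3, 500

last = min_resources
while last * ratio <= max_resources:
    last *= ratio
print(last)  # 20 -> 60 -> 180; 540 would exceed 500
```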
max_resources | The maximum number of resources that any candidate is allowed to use
for a given iteration. By default, this is set to ``n_samples`` when
``resource='n_samples'`` (default); otherwise an error is raised | default: "auto" |
min_resources | The minimum amount of resource that any candidate is allowed to use for
a given iteration. Equivalently, this defines the amount of resources
that are allocated for each candidate at the first iteration. By
default, this is set to:
- ``n_splits * 2`` when ``resource='n_samples'`` for a regression
problem
- ``n_classes * n_splits * 2`` when ``resource='n_samples'`` for a
classification problem
- The highest possible value satisfying the constraint
``force_exhaust_resources=True``
- ``1`` when ``resource!='n_samples'``
Note that the amount of resources used at each iteration is always a
multiple of ``min_resources`` | default: "auto" |
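The default ``min_resources`` values above work out as simple arithmetic; the numbers below are illustrative:

```python
# Illustrative arithmetic for the default ``min_resources`` values.
n_splits = 5    # e.g. cv=5
n_classes = 3   # classification problem

min_resources_regression = n_splits * 2                   # regression default
min_resources_classification = n_classes * n_splits * 2   # classification default
print(min_resources_regression, min_resources_classification)
```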
n_candidates | The number of candidate parameters to sample at the first
iteration. By default this will sample enough candidates so that the
last iteration uses as many resources as possible. Note that
``force_exhaust_resources`` has no effect in this case | default: 3 |
n_jobs | Number of jobs to run in parallel
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context
``-1`` means using all processors. See :term:`Glossary `
for more details | default: null |
param_distributions | Dictionary with parameter names (string) as keys and distributions
or lists of parameters to try. Distributions must provide a ``rvs``
method for sampling (such as those from scipy.stats.distributions).
If a list is given, it is sampled uniformly | default: {"l2_regularization": [0, 0.01, 0.1], "learning_rate": [0.01, 0.1, 1], "max_depth": [5, 6, 7, 8, 9, 1000], "max_leaf_nodes": [30, 31, 32, 33, 34, 35, 36, 37, 38, 39], "min_samples_leaf": [2, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]} |
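A sketch of the two value forms: a scipy distribution object (which has an ``rvs`` method) and a plain list, which is sampled uniformly. The parameter names here are illustrative:

```python
# Sketch: a distribution (sampled via .rvs()) mixed with a plain list
# (sampled uniformly).
from scipy.stats import loguniform

param_distributions = {
    "learning_rate": loguniform(1e-3, 1e0),  # has an ``rvs`` method
    "max_depth": [3, 5, 7, None],            # list: uniform sampling
}
sample = param_distributions["learning_rate"].rvs(random_state=0)
```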
pre_dispatch | Controls the number of jobs that get dispatched during parallel
execution. Reducing this number can be useful to avoid an
explosion of memory consumption when more jobs get dispatched
than CPUs can process. This parameter can be:
- None, in which case all the jobs are immediately
created and spawned. Use this for lightweight and
fast-running jobs, to avoid delays due to on-demand
spawning of the jobs
- An int, giving the exact number of total jobs that are
spawned
- A string, giving an expression as a function of n_jobs,
as in '2*n_jobs' (default) | default: "2*n_jobs" |
random_state | Pseudo-random number generator state used for subsampling the dataset
when ``resource != 'n_samples'``, and for sampling uniformly from lists
of possible parameter values | default: 0 |
ratio | The 'halving' parameter, which determines the proportion of candidates
that are selected for the next iteration. For example, ``ratio=3``
means that only one third of the candidates are selected | default: 3 |
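The elimination schedule implied by ``ratio`` can be sketched as a short computation (candidate counts are illustrative; surviving candidates are rounded up, as a count must be an integer):

```python
# Illustrative candidate counts under halving with ratio=3:
# each iteration keeps roughly one third of the candidates.
import math

ratio = 3
counts = [27]  # initial number of candidates (illustrative)
while counts[-1] > 1:
    counts.append(math.ceil(counts[-1] / ratio))
print(counts)
```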
refit | If True, refit an estimator using the best found parameters on the
whole dataset
The refitted estimator is made available at the ``best_estimator_``
attribute and permits using ``predict`` directly on this search
instance
resource | Defines the resource that increases with each iteration. By default,
the resource is the number of samples. It can also be set to any
parameter of the base estimator that accepts positive integer
values, e.g. 'n_iterations' or 'n_estimators' for a gradient
boosting estimator | default: "n_samples" |
return_train_score | If ``False``, the ``cv_results_`` attribute will not include training
scores
Computing training scores is used to get insights on how different
parameter settings impact the overfitting/underfitting trade-off
However, computing the scores on the training set can be computationally
expensive and is not strictly required to select the parameters that
yield the best generalization performance | default: true |
scoring | A single string (see :ref:`scoring_parameter`) or a callable
(see :ref:`scoring`) to evaluate the predictions on the test set
If None, the estimator's score method is used | default: null |
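The two non-None forms of ``scoring`` can be sketched as follows (shown with ``cross_val_score`` purely for illustration; the metric names and ``make_scorer`` helper are standard scikit-learn):

```python
# Sketch: ``scoring`` as a metric-name string vs. as a callable.
from sklearn.metrics import make_scorer, f1_score
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
est = LogisticRegression(max_iter=1000)

by_name = cross_val_score(est, X, y, cv=3, scoring="accuracy")
by_callable = cross_val_score(est, X, y, cv=3,
                              scoring=make_scorer(f1_score, average="macro"))
```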
verbose | Controls the verbosity: the higher, the more messages | default: 0 |