Flow
sklearn.model_selection._search_successive_halving.HalvingRandomSearchCV(estimator=sklearn.ensemble._hist_gradient_boosting.gradient_boosting.HistGradientBoostingClassifier)

Visibility: public. Uploaded 16-11-2019 by Nicolas Hug. Dependencies: sklearn==0.23.dev0, numpy>=1.6.1, scipy>=0.9. 771 runs.
Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.23.dev0


Randomized search on hyperparameters. The search strategy starts by evaluating all candidates with a small amount of resources and iteratively selects the best ones, using more and more resources. Candidates are sampled at random from the parameter space; the number of sampled candidates is determined by ``n_candidates``.

Components

estimator: sklearn.ensemble._hist_gradient_boosting.gradient_boosting.HistGradientBoostingClassifier (6). This is assumed to implement the scikit-learn estimator interface. Either the estimator needs to provide a ``score`` function, or ``scoring`` must be passed.

Parameters

aggressive_elimination: Only relevant when there are not enough resources to eliminate enough candidates by the last iteration. If ``True``, the search process will 'replay' the first iteration for as long as needed until the number of candidates is small enough. This is ``False`` by default, which means that the last iteration may evaluate more than ``ratio`` candidates. (default: false)
cv: Determines the cross-validation splitting strategy. Possible inputs for cv are: an integer, to specify the number of folds in a ``(Stratified)KFold``; a CV splitter; or an iterable yielding (train, test) splits as arrays of indices. For integer/None inputs, if the estimator is a classifier and ``y`` is either binary or multiclass, ``StratifiedKFold`` is used; in all other cases, ``KFold`` is used. Refer to the User Guide for the various cross-validation strategies that can be used here. (default: 5)
error_score: Value to assign to the score if an error occurs in estimator fitting. If set to 'raise', the error is raised. If a numeric value is given, FitFailedWarning is raised. This parameter does not affect the refit step, which will always raise the error. Default is ``np.nan``. (default: NaN)
estimator: This is assumed to implement the scikit-learn estimator interface. Either the estimator needs to provide a ``score`` function, or ``scoring`` must be passed. (default: {"oml-python:serialized_object": "component_reference", "value": {"key": "estimator", "step_name": null}})
force_exhaust_resources: If True, ``min_resources`` is set to a specific value such that the last iteration uses as many resources as possible. Namely, the last iteration uses the highest value smaller than ``max_resources`` that is a multiple of both ``min_resources`` and ``ratio``. (default: false)
max_resources: The maximum number of resources that any candidate is allowed to use for a given iteration. By default, this is set to ``n_samples`` when ``resource='n_samples'`` (the default); otherwise an error is raised. (default: "auto")
min_resources: The minimum amount of resources that any candidate is allowed to use for a given iteration. Equivalently, this defines the amount of resources allocated to each candidate at the first iteration. By default, this is set to: ``n_splits * 2`` when ``resource='n_samples'`` for a regression problem; ``n_classes * n_splits * 2`` when ``resource='n_samples'`` for a classification problem; the highest possible value satisfying the constraint when ``force_exhaust_resources=True``; or ``1`` when ``resource!='n_samples'``. Note that the amount of resources used at each iteration is always a multiple of ``min_resources``. (default: "auto")
n_candidates: The number of candidate parameters to sample at the first iteration. When set to 'auto', enough candidates are sampled so that the last iteration uses as many resources as possible; ``force_exhaust_resources`` has no effect in that case. (default: 3)
n_jobs: Number of jobs to run in parallel. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context; ``-1`` means using all processors. See the Glossary for more details. (default: null)
param_distributions: Dictionary with parameter names (string) as keys and distributions or lists of parameters to try. Distributions must provide an ``rvs`` method for sampling (such as those from scipy.stats.distributions). If a list is given, it is sampled uniformly. (default: {"l2_regularization": [0, 0.01, 0.1], "learning_rate": [0.01, 0.1, 1], "max_depth": [5, 6, 7, 8, 9, 1000], "max_leaf_nodes": [30, 31, 32, 33, 34, 35, 36, 37, 38, 39], "min_samples_leaf": [2, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]})
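Distributions and lists can be mixed freely in the same dictionary. A small hedged sketch, assuming scipy >= 1.4 for ``scipy.stats.loguniform``; the parameter values here are illustrative, not part of the uploaded flow:

```python
# Hedged sketch: any object with an `rvs` method works as a distribution;
# plain lists are sampled uniformly.
from scipy.stats import loguniform, randint

param_distributions = {
    "learning_rate": loguniform(1e-3, 1e0),   # continuous, log-uniform
    "max_leaf_nodes": randint(16, 64),        # discrete, uniform on [16, 64)
    "l2_regularization": [0, 0.01, 0.1],      # list: sampled uniformly
}

# The search calls `rvs` on each distribution to draw a candidate value.
sample = param_distributions["learning_rate"].rvs(random_state=0)
print(sample)  # a float in [1e-3, 1]
```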
pre_dispatch: Controls the number of jobs that get dispatched during parallel execution. Reducing this number can be useful to avoid an explosion of memory consumption when more jobs get dispatched than CPUs can process. This parameter can be: None, in which case all the jobs are immediately created and spawned (use this for lightweight and fast-running jobs, to avoid delays due to on-demand spawning); an int, giving the exact number of total jobs that are spawned; or a string, giving an expression as a function of n_jobs, as in '2*n_jobs'. (default: "2*n_jobs")
random_state: (default: 0)
ratio: The 'halving' parameter, which determines the proportion of candidates that are selected for the next iteration. For example, ``ratio=3`` means that only one third of the candidates are selected. (default: 3)
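The arithmetic behind the parameter can be sketched with a few lines of plain Python. This is a simplified model of the schedule (it uses floor division; scikit-learn's internal bookkeeping rounds slightly differently), with ``halving_schedule`` being a hypothetical helper, not a library function:

```python
# Hedged sketch: simplified arithmetic of the halving schedule. With
# ratio=3, each iteration keeps roughly one third of the candidates and
# triples the per-candidate resource budget.
def halving_schedule(n_candidates, min_resources, max_resources, ratio=3):
    """Return [(candidates, resources), ...] per iteration (simplified)."""
    schedule = []
    resources = min_resources
    while n_candidates >= 1 and resources <= max_resources:
        schedule.append((n_candidates, resources))
        n_candidates = n_candidates // ratio  # keep the best third
        resources *= ratio                    # triple the budget
    return schedule

print(halving_schedule(27, 20, 500))  # [(27, 20), (9, 60), (3, 180)]
```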
refit: If True, refit an estimator using the best found parameters on the whole dataset. The refitted estimator is made available at the ``best_estimator_`` attribute and permits using ``predict`` directly on this search instance. (default: {"oml-python:serialized_object": "function", "value": "sklearn.model_selection._search_successive_halving._refit_callable"})
resource: Defines the resource that increases with each iteration; either ``'n_samples'`` or a str. By default, the resource is the number of samples. It can also be set to any parameter of the base estimator that accepts positive integer values, e.g. 'n_iterations' or 'n_estimators' for a gradient boosting estimator. (default: "n_samples")
return_train_score: If ``False``, the ``cv_results_`` attribute will not include training scores. Computing training scores is used to get insights on how different parameter settings impact the overfitting/underfitting trade-off. However, computing the scores on the training set can be computationally expensive and is not strictly required to select the parameters that yield the best generalization performance. (default: true)
scoring: A single string or a callable to evaluate the predictions on the test set. If None, the estimator's score method is used. (default: null)
verbose: Controls the verbosity: the higher, the more messages. (default: 0)
