Flow
xgboost.sklearn.XGBClassifier

Visibility: public · Uploaded 23-02-2021 by Fabrice Normandin
Dependencies: sklearn==0.21.2, numpy>=1.6.1, scipy>=0.9
0 runs · 0 likes · downloaded by 0 people · 0 issues · 0 downvotes · 0 total downloads
  • Tags: openml-python, python, scikit-learn, sklearn, sklearn_0.21.2


Implementation of the scikit-learn API for XGBoost classification.
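
This flow wraps XGBoost's scikit-learn-compatible estimator, so it follows the usual fit/predict contract. Below is a minimal usage sketch, not part of the flow itself; it assumes xgboost and scikit-learn are installed, and the synthetic dataset is purely illustrative:

    # Minimal sketch of the scikit-learn API exposed by XGBClassifier.
    # Assumes xgboost and scikit-learn are installed; the data is synthetic.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from xgboost import XGBClassifier

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = XGBClassifier(n_estimators=100, objective="binary:logistic",
                        use_label_encoder=False)  # False is the recommended setting
    clf.fit(X_train, y_train)
    print(clf.score(X_test, y_test))      # mean accuracy on held-out data
    print(clf.predict_proba(X_test)[:5])  # per-class probabilities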

Parameters

base_score: The initial prediction score of all instances, global bias (default: null)
booster: Specify which booster to use: gbtree, gblinear or dart (default: "gblinear")
colsample_bylevel: Subsample ratio of columns for each level (default: 0.10236229212682288)
colsample_bynode: Subsample ratio of columns for each split (default: null)
colsample_bytree: Subsample ratio of columns when constructing each tree (default: 0.9133756966924026)
gamma: Minimum loss reduction required to make a further partition on a leaf node of the tree (default: null)
gpu_id (default: null)
importance_type: The feature importance type for the feature_importances_ property: either "gain", "weight", "cover", "total_gain" or "total_cover" (default: "gain")
interaction_constraints: Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See the tutorial for more information (default: null)
learning_rate: Boosting learning rate (xgb's "eta") (default: 0.7046906880432579)
max_delta_step: Maximum delta step we allow each tree's weight estimation to be (default: null)
max_depth: Maximum tree depth for base learners (default: 14)
min_child_weight: Minimum sum of instance weight (hessian) needed in a child (default: 93.9943569048485)
missing: Value in the data which needs to be present as a missing value (default: NaN)
monotone_constraints: Constraint of variable monotonicity. See the tutorial for more information (default: null)
n_estimators: Number of boosting rounds (default: 100)
n_jobs: Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms (default: null)
num_parallel_tree: Used for boosting random forest (default: null)
objective: Specify the learning task and the corresponding learning objective or a custom objective function to be used (default: "binary:logistic")
random_state: Random number seed. Note: using the gblinear booster with the shotgun updater is nondeterministic, as it uses the Hogwild algorithm (default: null)
reg_alpha: L1 regularization term on weights (default: 827.8481363932908)
reg_lambda: L2 regularization term on weights (default: 769.6869249686737)
scale_pos_weight: Balancing of positive and negative weights (default: null)
subsample: Subsample ratio of the training instances (default: 0.8189293035021832)
tree_method: Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It's recommended to study this option from the parameters document (default: null)
use_label_encoder: (Deprecated) Use the label encoder from scikit-learn to encode the labels. For new code, we recommend that you set this parameter to False (default: true)
validate_parameters (default: null)
verbosity: The degree of verbosity. Valid values are 0 (silent) to 3 (debug) (default: null)
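
For reproducibility, the values recorded above can be passed straight to the constructor. A sketch under the assumption that "null" maps to Python's None (such entries are simply omitted here, so XGBoost falls back to its own internal defaults) and that NaN is numpy's nan:

    # Reconstruction of this flow's recorded hyperparameter settings (values
    # copied from the list above; "null" entries omitted so XGBoost applies
    # its own defaults). Note: with booster="gblinear", tree-specific
    # settings such as max_depth and subsample are ignored by XGBoost.
    import numpy as np
    from xgboost import XGBClassifier

    clf = XGBClassifier(
        booster="gblinear",
        colsample_bylevel=0.10236229212682288,
        colsample_bytree=0.9133756966924026,
        importance_type="gain",
        learning_rate=0.7046906880432579,
        max_depth=14,
        min_child_weight=93.9943569048485,
        missing=np.nan,
        n_estimators=100,
        objective="binary:logistic",
        reg_alpha=827.8481363932908,
        reg_lambda=769.6869249686737,
        subsample=0.8189293035021832,
        use_label_encoder=True,  # flow default; False is recommended for new code
    )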
