base_score | The initial prediction score of all instances, global bias | default: null |
booster | Specify which booster to use: gbtree, gblinear or dart | default: null |
colsample_bylevel | Subsample ratio of columns for each level | default: null |
colsample_bynode | Subsample ratio of columns for each split | default: null |
colsample_bytree | Subsample ratio of columns when constructing each tree | default: null |
gamma | Minimum loss reduction required to make a further partition on a leaf node of the tree | default: null |
gpu_id | | default: null |
importance_type | The feature importance type for the feature_importances_ property: either "gain", "weight", "cover", "total_gain" or "total_cover" | default: "gain" |
interaction_constraints | Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See tutorial for more information | default: null |
learning_rate | Boosting learning rate (xgb's "eta") | default: null |
max_delta_step | Maximum delta step we allow each tree's weight estimation to be | default: null |
max_depth | Maximum tree depth for base learners | default: null |
min_child_weight | Minimum sum of instance weight(hessian) needed in a child | default: null |
missing | Value in the data which needs to be present as a missing value. If None, defaults to np.nan | default: NaN |
monotone_constraints | Constraint of variable monotonicity. See tutorial for more information | default: null |
n_estimators | | default: 100 |
n_jobs | Number of parallel threads used to run xgboost | default: null |
num_parallel_tree | Used for boosting random forest | default: null |
objective | Specify the learning task and the corresponding learning objective or a custom objective function to be used (see note below) | default: "binary:logistic" |
random_state | Random number seed. Note: using the gblinear booster with the shotgun updater is nondeterministic as it uses the Hogwild algorithm | default: null |
reg_alpha | L1 regularization term on weights | default: null |
reg_lambda | L2 regularization term on weights | default: null |
scale_pos_weight | Balancing of positive and negative weights | default: null |
subsample | Subsample ratio of the training instance | default: null |
tree_method | Specify which tree method to use. Defaults to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It's recommended to study this option from the parameters document | default: null |
validate_parameters | | default: false |
verbosity | The degree of verbosity. Valid values are 0 (silent) - 3 (debug) | default: null |
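The parameters above can be passed as keyword arguments to the scikit-learn wrapper. Below is a minimal sketch, assuming `xgboost.XGBClassifier` as the target estimator; the dict is built without importing xgboost so the snippet stays dependency-free, and the `learning_rate`/`max_depth` values are illustrative, not the library defaults (which the table lists as null, meaning library-chosen).

```python
# Illustrative parameter dict mirroring rows from the table above.
# Values marked "illustrative" are assumptions, not documented defaults.
params = {
    "n_estimators": 100,                 # default from the table
    "objective": "binary:logistic",      # default from the table
    "importance_type": "gain",           # default from the table
    "learning_rate": 0.1,                # illustrative; xgb's "eta"
    "max_depth": 6,                      # illustrative; max depth of base learners
    "interaction_constraints": [[0, 1], [2, 3, 4]],  # nested-list form from the table
}

# Usage (requires xgboost installed):
# from xgboost import XGBClassifier
# model = XGBClassifier(**params)
# model.fit(X_train, y_train)

print(sorted(params))
```

Keeping the settings in a plain dict and unpacking with `**params` makes it easy to log or grid-search the same configuration later.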