base_score | The initial prediction score of all instances, global bias | default: null |
booster | Specify which booster to use: gbtree, gblinear or dart | default: "gblinear" |
colsample_bylevel | Subsample ratio of columns for each level | default: 0.10236229212682288 |
colsample_bynode | Subsample ratio of columns for each split | default: null |
colsample_bytree | Subsample ratio of columns when constructing each tree | default: 0.9133756966924026 |
gamma | Minimum loss reduction required to make a further partition on a leaf node of the tree | default: null |
gpu_id | | default: null |
importance_type | The feature importance type for the feature_importances_ property: one of "gain", "weight", "cover", "total_gain" or "total_cover" | default: "gain" |
interaction_constraints | Constraints for interaction representing permitted interactions. The constraints must be specified in the form of a nested list, e.g. [[0, 1], [2, 3, 4]], where each inner list is a group of indices of features that are allowed to interact with each other. See the tutorial for more information | default: null |
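As a sketch of how such a nested list is built (the five-column feature layout here is hypothetical), the same structure can also be serialized to the JSON-style string form that XGBoost's native interface uses:

```python
import json

# Hypothetical training matrix with five feature columns, indexed 0-4.
# Features 0 and 1 may interact with each other; features 2, 3 and 4
# form a second group. No split path may mix the two groups.
interaction_constraints = [[0, 1], [2, 3, 4]]

# The same constraint in its JSON string form.
as_string = json.dumps(interaction_constraints)
print(as_string)  # [[0, 1], [2, 3, 4]]
```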
learning_rate | Boosting learning rate (xgb's "eta") | default: 0.7046906880432579 |
max_delta_step | Maximum delta step we allow each tree's weight estimation to be | default: null |
max_depth | Maximum tree depth for base learners | default: 14 |
min_child_weight | Minimum sum of instance weight(hessian) needed in a child | default: 93.9943569048485 |
missing | Value in the data which is to be treated as missing | default: NaN |
monotone_constraints | Constraint of variable monotonicity. See the tutorial for more information | default: null |
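A minimal sketch of how a monotonicity constraint is expressed, assuming a hypothetical three-feature training matrix: one entry per feature, where 1 forces the prediction to be non-decreasing in that feature, -1 non-increasing, and 0 leaves it unconstrained.

```python
# One entry per feature column (0-2 in this hypothetical layout).
monotone_constraints = (1, -1, 0)

# The same constraint in XGBoost's native string notation.
as_string = "(" + ",".join(str(c) for c in monotone_constraints) + ")"
print(as_string)  # (1,-1,0)
```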
n_estimators | Number of boosting rounds | default: 100 |
n_jobs | Number of parallel threads used to run xgboost. When used with other Scikit-Learn algorithms like grid search, you may choose which algorithm to parallelize and balance the threads. Creating thread contention will significantly slow down both algorithms | default: null |
num_parallel_tree | Used for boosting random forest | default: null |
objective | Specify the learning task and the corresponding learning objective or a custom objective function to be used | default: "binary:logistic" |
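A custom objective is a callable returning the per-row gradient and hessian of the loss with respect to the raw margin scores. The sketch below writes out binary log loss by hand (the function name is illustrative, not part of the library):

```python
import numpy as np

def binary_logistic_obj(preds, labels):
    """Sketch of a custom objective: given raw margin scores and
    0/1 labels, return the gradient and hessian of the log loss."""
    p = 1.0 / (1.0 + np.exp(-preds))  # sigmoid of the raw margin
    grad = p - labels                 # d(logloss)/d(margin)
    hess = p * (1.0 - p)              # d2(logloss)/d(margin)^2
    return grad, hess

# At margin 0 the predicted probability is 0.5 for every row.
grad, hess = binary_logistic_obj(np.zeros(3), np.array([1.0, 0.0, 1.0]))
print(grad, hess)  # [-0.5  0.5 -0.5] [0.25 0.25 0.25]
```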
random_state | Random number seed. Note: using the gblinear booster with the shotgun updater is nondeterministic, as it uses the Hogwild algorithm | default: null |
reg_alpha | L1 regularization term on weights | default: 827.8481363932908 |
reg_lambda | L2 regularization term on weights | default: 769.6869249686737 |
scale_pos_weight | Balancing of positive and negative weights | default: null |
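For imbalanced binary problems, a common heuristic for this value is the ratio of negative to positive instances. A toy computation on a hypothetical label vector:

```python
# Toy 9:1 imbalanced binary label vector (hypothetical data).
labels = [0] * 900 + [1] * 100

neg = labels.count(0)
pos = labels.count(1)
scale_pos_weight = neg / pos
print(scale_pos_weight)  # 9.0
```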
subsample | Subsample ratio of the training instance | default: 0.8189293035021832 |
tree_method | Specify which tree method to use. Default to auto. If this parameter is set to default, XGBoost will choose the most conservative option available. It's recommended to study this option from the parameters document | default: null |
use_label_encoder | (Deprecated) Use the label encoder from scikit-learn to encode the labels. For new code, we recommend that you set this parameter to False | default: true |
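With the built-in encoder deprecated, string targets can be mapped to the contiguous integers 0..n-1 before fitting. A minimal pure-Python sketch (the class names here are made up):

```python
# Hypothetical string targets.
raw = ["cat", "dog", "cat", "bird"]

# Build a stable class -> integer mapping, then encode.
classes = sorted(set(raw))
to_int = {c: i for i, c in enumerate(classes)}
y = [to_int[c] for c in raw]
print(classes, y)  # ['bird', 'cat', 'dog'] [1, 2, 1, 0]
```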
validate_parameters | | default: null |
verbosity | The degree of verbosity. Valid values are 0 (silent) to 3 (debug) | default: null |