Flow
sklearn.tree.tree.DecisionTreeClassifier

Visibility: public Uploaded 13-08-2021 by Sergey Redyuk sklearn==0.18 numpy>=1.6.1 scipy>=0.9 39 runs
  • openml-python python scikit-learn sklearn sklearn_0.18


A decision tree classifier.

Parameters

class_weight (default: null): Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are assumed to have weight one. For multi-output problems, a list of dicts can be provided in the same order as the columns of y. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))``. For multi-output, the weights of each column of y will be multiplied. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.
criterion (default: "gini"): The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "entropy" for the information gain.
max_depth (default: null): The maximum depth of the tree. If None, nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples.
max_features (default: null): The number of features to consider when looking for the best split. If int, consider `max_features` features at each split; if float, `max_features` is a percentage and `int(max_features * n_features)` features are considered at each split; if "auto", `max_features=sqrt(n_features)`; if "sqrt", `max_features=sqrt(n_features)`; if "log2", `max_features=log2(n_features)`; if None, `max_features=n_features`. Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than ``max_features`` features.
max_leaf_nodes (default: null): Grow a tree with ``max_leaf_nodes`` in best-first fashion. Best nodes are defined by relative reduction in impurity. If None, the number of leaf nodes is unlimited.
min_impurity_split (default: 1e-07): Threshold for early stopping in tree growth. A node will split if its impurity is above the threshold; otherwise it is a leaf. .. versionadded:: 0.18
min_samples_leaf (default: 1): The minimum number of samples required to be at a leaf node. If int, `min_samples_leaf` is the minimum number; if float, `min_samples_leaf` is a percentage and `ceil(min_samples_leaf * n_samples)` is the minimum number of samples for each node. .. versionchanged:: 0.18 Added float values for percentages.
min_samples_split (default: 2): The minimum number of samples required to split an internal node. If int, `min_samples_split` is the minimum number; if float, `min_samples_split` is a percentage and `ceil(min_samples_split * n_samples)` is the minimum number of samples for each split. .. versionchanged:: 0.18 Added float values for percentages.
min_weight_fraction_leaf (default: 0.0): The minimum weighted fraction of the input samples required to be at a leaf node.
presort (default: false): Whether to presort the data to speed up the finding of best splits during fitting. With the default settings of a decision tree on large datasets, setting this to true may slow down training. With a smaller dataset or a restricted depth, it may speed up training.
random_state (default: null): If int, random_state is the seed used by the random number generator; if a RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by `np.random`.
splitter (default: "best"): The strategy used to choose the split at each node. Supported strategies are "best" to choose the best split and "random" to choose the best random split.
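The parameters above can be combined as in the following minimal sketch (not part of the original flow description; it assumes scikit-learn is installed, and the specific values chosen here are illustrative, not defaults of this flow):

```python
# Minimal usage sketch for DecisionTreeClassifier with a few of the
# parameters documented above. Values are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(
    criterion="gini",      # split quality measure: "gini" or "entropy"
    splitter="best",       # choose the best split at each node
    max_depth=3,           # cap tree depth to limit overfitting
    min_samples_split=2,   # minimum samples needed to split a node
    min_samples_leaf=1,    # minimum samples required at a leaf
    class_weight=None,     # all classes weighted equally
    random_state=0,        # reproducibility when randomness is involved
)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
```

Note that passing `class_weight="balanced"` instead would reweight classes inversely to their frequencies, as described in the parameter list above.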
