Cs | Each of the values in Cs describes the inverse of regularization
strength. If Cs is an int, then a grid of Cs values is chosen
on a logarithmic scale between 1e-4 and 1e4
Like in support vector machines, smaller values specify stronger
regularization | default: 10 |
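When Cs is given as an integer, the implied grid can be reproduced with NumPy's `logspace` (a sketch of the grid construction, not scikit-learn's internal code):

```python
import numpy as np

# With Cs=10 (the default), the candidates are 10 values of C spaced
# logarithmically between 1e-4 and 1e4.
Cs = np.logspace(-4, 4, num=10)
print(Cs[0], Cs[-1])  # smallest C (strongest regularization) and largest C
```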
class_weight | Weights associated with classes in the form ``{class_label: weight}``
If not given, all classes are supposed to have weight one
The "balanced" mode uses the values of y to automatically adjust
weights inversely proportional to class frequencies in the input data
as ``n_samples / (n_classes * np.bincount(y))``
Note that these weights will be multiplied with sample_weight (passed
through the fit method) if sample_weight is specified
.. versionadded:: 0.17
class_weight == 'balanced' | default: null |
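The "balanced" formula above can be checked directly with NumPy (a minimal sketch; the label array `y` is an illustrative example, not from the source):

```python
import numpy as np

# "balanced" class weights: n_samples / (n_classes * np.bincount(y))
y = np.array([0, 0, 0, 0, 1, 1])  # imbalanced labels: four 0s, two 1s
n_samples, n_classes = y.size, np.unique(y).size
weights = n_samples / (n_classes * np.bincount(y))
print(weights)  # the minority class gets the larger weight
```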
cv | The default cross-validation generator used is Stratified K-Folds
If an integer is provided, then it is the number of folds used
See the :mod:`sklearn.model_selection` module for the
list of possible cross-validation objects
.. versionchanged:: 0.22
``cv`` default value if None changed from 3-fold to 5-fold | default: null |
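A short sketch, assuming scikit-learn is available, showing that an integer ``cv`` and an explicit `StratifiedKFold` select the same folds for a classifier, since Stratified K-Folds is the default generator:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=60, random_state=0)

# cv=5 is equivalent to passing StratifiedKFold(5) for classification.
clf_int = LogisticRegressionCV(cv=5, max_iter=1000).fit(X, y)
clf_gen = LogisticRegressionCV(cv=StratifiedKFold(5), max_iter=1000).fit(X, y)
print(clf_int.C_, clf_gen.C_)  # both searches pick the same C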
dual | Dual or primal formulation. Dual formulation is only implemented for
l2 penalty with liblinear solver. Prefer dual=False when
n_samples > n_features | default: false |
fit_intercept | Specifies if a constant (a.k.a. bias or intercept) should be
added to the decision function | default: true |
intercept_scaling | Useful only when the solver 'liblinear' is used
and self.fit_intercept is set to True. In this case, x becomes
``[x, self.intercept_scaling]``,
i.e. a "synthetic" feature with constant value equal to
intercept_scaling is appended to the instance vector
The intercept becomes ``intercept_scaling * synthetic_feature_weight``
Note! the synthetic feature weight is subject to l1/l2 regularization,
as are all other features
To lessen the effect of regularization on the synthetic feature weight
(and therefore on the intercept), intercept_scaling has to be increased | default: 1.0 |
l1_ratios | The list of Elastic-Net mixing parameters, with ``0 <= l1_ratio <= 1``
Only used if ``penalty='elasticnet'``. A value of 0 is equivalent to
using ``penalty='l2'``, while 1 is equivalent to using
``penalty='l1'``. For ``0 < l1_ratio < 1``, the penalty is a combination
of L1 and L2 | default: null |
max_iter | Maximum number of iterations of the optimization algorithm | default: 100 |
multi_class | If the option chosen is 'ovr', then a binary problem is fit for each
label. For 'multinomial' the loss minimised is the multinomial loss fit
across the entire probability distribution, *even when the data is
binary*. 'multinomial' is unavailable when solver='liblinear'
'auto' selects 'ovr' if the data is binary, or if solver='liblinear',
and otherwise selects 'multinomial' | default: "auto" |
n_jobs | Number of CPU cores used during the cross-validation loop
``None`` means 1 unless in a :obj:`joblib.parallel_backend` context
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details | default: null |
penalty | Specify the norm of the penalty:
- ``'l2'``: add an L2 penalty term (used by default);
- ``'l1'``: add an L1 penalty term;
- ``'elasticnet'``: both L1 and L2 penalty terms are added
.. warning::
Some penalties may not work with some solvers. See the parameter
`solver` below to know the compatibility between the penalty and
solver | default: "l2" |
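A hedged sketch of combining the parameters above, assuming scikit-learn is available: ``penalty='elasticnet'`` requires the 'saga' solver, and ``l1_ratios`` supplies the L1/L2 mixing values searched by cross-validation:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=100, random_state=0)

# elasticnet is only supported by the 'saga' solver; l1_ratios lists the
# mixing values to try (0.0 = pure l2, 1.0 = pure l1).
clf = LogisticRegressionCV(
    penalty="elasticnet",
    solver="saga",
    l1_ratios=[0.0, 0.5, 1.0],
    Cs=5,
    max_iter=5000,
).fit(X, y)
print(clf.l1_ratio_)  # best mixing value found by cross-validation
```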
random_state | Used when `solver` is 'sag', 'saga' or 'liblinear' to shuffle the data
Note that this only applies to the solver and not the cross-validation
generator. See :term:`Glossary <random_state>` for details | default: null |
refit | If set to True, the scores are averaged across all folds, and the
coefs and the C that correspond to the best score are taken, and a
final refit is done using these parameters
Otherwise the coefs, intercepts and C that correspond to the
best scores across folds are averaged | default: true |
scoring | A string (see model evaluation documentation) or
a scorer callable object / function with signature
``scorer(estimator, X, y)``. For a list of scoring functions
that can be used, look at :mod:`sklearn.metrics`. The
default scoring option used is 'accuracy' | default: null |
solver | Algorithm to use in the optimization problem: one of 'lbfgs',
'liblinear', 'newton-cg', 'newton-cholesky', 'sag' or 'saga'
To choose a solver, you might want to consider the following aspects:
- For small datasets, 'liblinear' is a good choice, whereas 'sag'
and 'saga' are faster for large ones;
- For multiclass problems, only 'newton-cg', 'sag', 'saga' and
'lbfgs' handle multinomial loss;
- 'liblinear' might be slower in :class:`LogisticRegressionCV`
because it does not handle warm-starting. 'liblinear' is
limited to one-versus-rest schemes;
- 'newton-cholesky' is a good choice for `n_samples` >> `n_features` | default: "lbfgs" |
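As a sketch of the scorer signature described above, any callable taking ``(estimator, X, y)`` can replace the default 'accuracy' (the `f1_scorer` helper below is hypothetical, not part of scikit-learn's API):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=80, random_state=0)

# A scorer is any callable with signature scorer(estimator, X, y)
# returning a single number; higher is assumed to be better.
def f1_scorer(estimator, X, y):
    return f1_score(y, estimator.predict(X))

clf = LogisticRegressionCV(scoring=f1_scorer, cv=3, max_iter=1000).fit(X, y)
scores = list(clf.scores_.values())[0]
print(scores.shape)  # a (n_folds, n_Cs) grid of F1 scores per class
```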
tol | Tolerance for stopping criteria | default: 0.0001 |
verbose | For the 'liblinear', 'sag' and 'lbfgs' solvers, set verbose to any
positive number for verbosity | default: 0 |
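Putting several of the parameters above together, a minimal end-to-end sketch on synthetic data (all values shown are the documented defaults, made explicit for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=100, n_features=10, random_state=0)

# Explicitly spelling out the defaults: 10 Cs on a log grid, 5-fold
# stratified CV, lbfgs solver, l2 penalty, tol=1e-4, max_iter=100.
clf = LogisticRegressionCV(
    Cs=10, cv=5, solver="lbfgs", penalty="l2", tol=1e-4, max_iter=100
).fit(X, y)
print(clf.C_)          # best C found per class
print(clf.score(X, y)) # mean accuracy on the training data
```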