Flow
sklearn.linear_model._logistic.LogisticRegressionCV

Visibility: public · Uploaded 05-04-2023 by Takeaki Sakabe
Dependencies: sklearn==1.2.2, numpy>=1.17.3, scipy>=1.3.2, joblib>=1.1.1, threadpoolctl>=2.0.0
Tags: openml-python, python, scikit-learn, sklearn, sklearn_1.2.2


Logistic Regression CV (aka logit, MaxEnt) classifier. See glossary entry for :term:`cross-validation estimator`. This class implements logistic regression using the liblinear, newton-cg, sag or lbfgs optimizers. The newton-cg, sag and lbfgs solvers support only L2 regularization with primal formulation. The liblinear solver supports both L1 and L2 regularization, with a dual formulation only for the L2 penalty. Elastic-Net penalty is only supported by the saga solver. For the grid of `Cs` values and `l1_ratios` values, the best hyperparameter is selected by the cross-validator :class:`~sklearn.model_selection.StratifiedKFold`, but it can be changed using the :term:`cv` parameter. The 'newton-cg', 'sag', 'saga' and 'lbfgs' solvers can warm-start the coefficients (see :term:`Glossary`).
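The description above maps directly onto basic usage. The following is a minimal sketch; the dataset and parameter values are arbitrary choices for demonstration, not settings stored in this flow.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)  # helps the lbfgs solver converge

# Cs=10 expands to 10 values spaced logarithmically between 1e-4 and 1e4;
# cv=5 uses the default StratifiedKFold cross-validator.
clf = LogisticRegressionCV(Cs=10, cv=5, solver="lbfgs", max_iter=1000)
clf.fit(X, y)

print(clf.C_)           # the C selected by cross-validation
print(clf.score(X, y))  # training accuracy
```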

Parameters

Cs: Each of the values in Cs describes the inverse of regularization strength. If Cs is an int, then a grid of Cs values is chosen on a logarithmic scale between 1e-4 and 1e4. As in support vector machines, smaller values specify stronger regularization. default: 10
class_weight: Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))`` (a small sketch after this parameter list illustrates the computation). Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. .. versionadded:: 0.17 class_weight == 'balanced'. default: null
cv: The default cross-validation generator used is Stratified K-Folds. If an integer is provided, then it is the number of folds used. See the :mod:`sklearn.model_selection` module for the list of possible cross-validation objects. .. versionchanged:: 0.22 ``cv`` default value if None changed from 3-fold to 5-fold. default: null
dual: Dual or primal formulation. Dual formulation is only implemented for the l2 penalty with the liblinear solver. Prefer dual=False when n_samples > n_features. default: false
fit_intercept: Specifies if a constant (a.k.a. bias or intercept) should be added to the decision function. default: true
intercept_scaling: Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes ``intercept_scaling * synthetic_feature_weight``. Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on the synthetic feature weight (and therefore on the intercept), intercept_scaling has to be increased. default: 1.0
l1_ratios: The list of Elastic-Net mixing parameters, with ``0 <= l1_ratio <= 1``. Only used if ``penalty='elasticnet'``. A value of 0 is equivalent to using ``penalty='l2'``, while 1 is equivalent to using ``penalty='l1'``. For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2 (see the elastic-net sketch after this parameter list). default: null
max_iter: Maximum number of iterations of the optimization algorithm. default: 100
multi_class: {'auto', 'ovr', 'multinomial'}. If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, *even when the data is binary*. 'multinomial' is unavailable when solver='liblinear'. 'auto' selects 'ovr' if the data is binary, or if solver='liblinear', and otherwise selects 'multinomial'. default: "auto"
n_jobs: Number of CPU cores used during the cross-validation loop. ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context. ``-1`` means using all processors. See :term:`Glossary` for more details. default: null
penalty: {'l1', 'l2', 'elasticnet'}. Specify the norm of the penalty: `'l2'` adds an L2 penalty term (used by default); `'l1'` adds an L1 penalty term; `'elasticnet'` adds both L1 and L2 penalty terms. .. warning:: Some penalties may not work with some solvers. See the parameter `solver` below to know the compatibility between the penalty and solver. default: "l2"
random_state: Used when `solver='sag'`, 'saga' or 'liblinear' to shuffle the data. Note that this only applies to the solver and not the cross-validation generator. See :term:`Glossary` for details. default: null
refit: If set to True, the scores are averaged across all folds, the coefs and the C that correspond to the best score are taken, and a final refit is done using these parameters. Otherwise the coefs, intercepts and C that correspond to the best scores across folds are averaged. default: true
scoring: A string (see model evaluation documentation) or a scorer callable object / function with signature ``scorer(estimator, X, y)``. For a list of scoring functions that can be used, look at :mod:`sklearn.metrics`. The default scoring option used is 'accuracy'. default: null
solver: {'lbfgs', 'liblinear', 'newton-cg', 'newton-cholesky', 'sag', 'saga'}. Algorithm to use in the optimization problem. Default is 'lbfgs'. To choose a solver, you might want to consider the following aspects: for small datasets, 'liblinear' is a good choice, whereas 'sag' and 'saga' are faster for large ones; for multiclass problems, only 'newton-cg', 'sag', 'saga' and 'lbfgs' handle multinomial loss; 'liblinear' might be slower in :class:`LogisticRegressionCV` because it does not handle warm-starting, and is limited to one-versus-rest schemes; 'newton-cholesky' is a good choice for `n_samples` >> `n_features`. default: "lbfgs"
tol: Tolerance for stopping criteria. default: 0.0001
verbose: For the 'liblinear', 'sag' and 'lbfgs' solvers, set verbose to any positive number for verbosity. default: 0
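As a concrete illustration of the ``class_weight='balanced'`` formula quoted above, here is a minimal sketch; the label vector is a made-up example, not data from this flow.

```python
import numpy as np

# Hypothetical labels: 4 samples of class 0, 2 samples of class 1.
y = np.array([0, 0, 0, 0, 1, 1])

n_samples = len(y)             # 6
n_classes = len(np.unique(y))  # 2

# The "balanced" heuristic: n_samples / (n_classes * np.bincount(y))
weights = n_samples / (n_classes * np.bincount(y))
print(weights)  # [0.75 1.5] -- the rarer class gets the larger weight
```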
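Putting the ``penalty``, ``solver``, ``Cs`` and ``l1_ratios`` parameters together, here is a minimal elastic-net sketch; the dataset and grid values are arbitrary choices for demonstration, not settings stored in this flow.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X = StandardScaler().fit_transform(X)  # saga converges best on scaled data

# penalty='elasticnet' is only supported by the 'saga' solver; the
# cross-validator searches the grid of Cs x l1_ratios combinations.
clf = LogisticRegressionCV(
    Cs=np.logspace(-2, 2, 5),      # explicit grid instead of an int
    penalty="elasticnet",
    solver="saga",
    l1_ratios=[0.1, 0.5, 0.9],     # 0 -> pure L2, 1 -> pure L1
    max_iter=5000,
    random_state=0,
)
clf.fit(X, y)
print(clf.C_, clf.l1_ratio_)       # selected hyperparameters
```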
