Author: 手机用户2502939543 | Source: Internet | 2022-12-08 16:55
This post is about the differences between LogisticRegressionCV, GridSearchCV and cross_val_score. Consider the following setup:
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.model_selection import train_test_split, GridSearchCV, \
StratifiedKFold, cross_val_score
from sklearn.metrics import confusion_matrix
read = load_digits()
X, y = read.data, read.target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
In penalized logistic regression, we need to set the parameter C that controls regularization. There are three ways in scikit-learn to find the best C via cross-validation.
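Before the three methods, a minimal sketch of what C does here (smaller C means stronger regularization, and with an L1 penalty that drives more coefficients to exactly zero). The scaling step, the particular C values, and the solver settings are my own choices for a quick illustration, not part of the question's setup:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]; saga converges faster on scaled features

# Smaller C = stronger L1 regularization = more coefficients forced to zero.
sparsity = {}
for C in (0.01, 10.0):
    clf = LogisticRegression(C=C, penalty="l1", solver="saga",
                             max_iter=500, tol=1e-3).fit(X, y)
    sparsity[C] = int(np.sum(clf.coef_ == 0))
print(sparsity)  # the count of zero coefficients is larger for C = 0.01
```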
LogisticRegressionCV
clf = LogisticRegressionCV(Cs=10, penalty="l1",
                           solver="saga", scoring="f1_macro")
clf.fit(X_train, y_train)
confusion_matrix(y_test, clf.predict(X_test))
Side note: the documentation states that SAGA and LIBLINEAR are the only optimizers for the L1 penalty, and that SAGA is faster for large datasets. Unfortunately, warm starting is only available for Newton-CG and LBFGS.
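For reference, a small sketch of what the fitted LogisticRegressionCV exposes: the chosen C in its C_ attribute and the per-fold score grid in scores_. The coarser grid (Cs=3), looser tolerance, feature scaling, and random_state are my additions to keep the sketch fast; they are not from the question:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scaling helps saga converge
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Cs=3 candidates, 5-fold CV by default
clf = LogisticRegressionCV(Cs=3, penalty="l1", solver="saga",
                           scoring="f1_macro", max_iter=500, tol=1e-3)
clf.fit(X_train, y_train)

print(clf.C_)                # chosen C value(s), one entry per class
print(clf.scores_[0].shape)  # (n_folds, n_Cs) grid of f1_macro scores
```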
GridSearchCV
clf = LogisticRegression(penalty="l1", solver="saga", warm_start=True)
clf = GridSearchCV(clf, param_grid={"C": np.logspace(-4, 4, 10)}, scoring="f1_macro")
clf.fit(X_train, y_train)
confusion_matrix(y_test, clf.predict(X_test))
result = clf.cv_results_
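Besides cv_results_, the fitted GridSearchCV also exposes the winning parameters and score directly via best_params_ and best_score_ — a minimal sketch with a coarser grid, looser solver settings, and a fixed random_state (my choices, for speed):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scaling helps saga converge
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

base = LogisticRegression(penalty="l1", solver="saga", max_iter=500, tol=1e-3)
clf = GridSearchCV(base, param_grid={"C": np.logspace(-2, 2, 3)},
                   scoring="f1_macro")
clf.fit(X_train, y_train)

print(clf.best_params_)  # the C with the best mean CV score
print(clf.best_score_)   # its mean cross-validated f1_macro
print("mean_test_score" in clf.cv_results_)  # cv_results_ holds the full grid
```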
cross_val_score
cv_scores = {}
for val in np.logspace(-4, 4, 10):
    clf = LogisticRegression(C=val, penalty="l1",
                             solver="saga", warm_start=True)
    cv_scores[val] = cross_val_score(clf, X_train, y_train,
                                     cv=StratifiedKFold(), scoring="f1_macro").mean()
clf = LogisticRegression(C=max(cv_scores, key=cv_scores.get),
                         penalty="l1", solver="saga", warm_start=True)
clf.fit(X_train, y_train)
confusion_matrix(y_test, clf.predict(X_test))
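One detail worth noting about the loop above: cross_val_score returns one score per fold, so the .mean() is collapsing a per-fold array, and the spread across folds is available too. A minimal self-contained sketch (the scaling and solver settings are my additions for speed):

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scaling helps saga converge

clf = LogisticRegression(C=1.0, penalty="l1", solver="saga",
                         max_iter=500, tol=1e-3)
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(), scoring="f1_macro")
print(scores.shape)  # one f1_macro score per fold: (5,) with the default 5-fold CV
print(scores.mean(), scores.std())
```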
Questions
Am I performing cross-validation correctly in all three ways?
Are the three ways equivalent? If not, can they be made equivalent by changing the code?
Which way is best in terms of elegance, speed, or any other criterion? (In other words, why does scikit-learn provide three ways of doing cross-validation?)
Answers to any single question are welcome; I realize the questions are somewhat long, but I hope they make a good summary of hyperparameter selection in scikit-learn.