I've been trying to implement logistic regression for a classification problem, but it gives me very strange results. I already got decent results with gradient boosting and random forests, so I thought I'd go back to basics and see how well I could do. Can you help me spot what I'm doing wrong that causes this overfitting? You can get the data from https://www.kaggle.com/c/santander-customer-satisfaction/data
Here is my code:
import pandas as pd
import numpy as np

train = pd.read_csv("path")  # path to train.csv
test = pd.read_csv("path")   # path to test.csv
test["TARGET"] = 0  # placeholder label for the unlabeled test set
fullData = pd.concat([train, test], ignore_index=True)

# Drop constant columns (zero standard deviation carries no signal)
remove1 = []
for col in fullData.columns:
    if fullData[col].std() == 0:
        remove1.append(col)
fullData.drop(remove1, axis=1, inplace=True)
# Drop duplicated columns (identical values under different names)
remove = []
cols = fullData.columns
for i in range(len(cols) - 1):
    v = fullData[cols[i]].values
    for j in range(i + 1, len(cols)):
        if np.array_equal(v, fullData[cols[j]].values):
            remove.append(cols[j])
fullData.drop(remove, axis=1, inplace=True)
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed

X_train, X_test = train_test_split(fullData, test_size=0.20, random_state=1729)
print(X_train.shape, X_test.shape)
y_train = X_train["TARGET"].values
X = X_train.drop(["TARGET", "ID"], axis=1, inplace=False)
# Feature selection: keep features whose ExtraTrees importance exceeds the mean
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

clf = ExtraTreesClassifier(random_state=1729)
selector = clf.fit(X, y_train)
fs = SelectFromModel(selector, prefit=True)

X_t = X_test.drop(["TARGET", "ID"], axis=1, inplace=False)
X_t = fs.transform(X_t)
X_tr = X_train.drop(["TARGET", "ID"], axis=1, inplace=False)
X_tr = fs.transform(X_tr)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score  # replaces the old cross_validation module

log = LogisticRegression(penalty="l2", C=1, random_state=1)
scores = cross_val_score(log, X_tr, y_train, cv=10)  # 10-fold CV, accuracy by default
print(scores.mean())

log.fit(X_tr, y_train)
predictions = log.predict(X_t)
predictions = predictions.astype(int)
print(predictions.mean())
You haven't actually tuned the C parameter — technically you have set it, but only to its default value — and that is one of the usual suspects for overfitting. You could look at GridSearchCV and try several values of C (say, from 10^-5 to 10^5) to see whether that alleviates your problem. Changing the penalty to 'l1' might also help.
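For instance, the search could look like the sketch below. This is a minimal illustration, not the asker's code: it reuses X_tr and y_train from the question, and scoring="roc_auc" is my assumption (this competition is judged on AUC).

# Sketch: search C over a log-spaced grid, trying both penalties
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression

param_grid = {"C": [10 ** k for k in range(-5, 6)],   # 10^-5 .. 10^5
              "penalty": ["l1", "l2"]}
grid = GridSearchCV(LogisticRegression(solver="liblinear", random_state=1),
                    param_grid, scoring="roc_auc", cv=5)
grid.fit(X_tr, y_train)
print(grid.best_params_, grid.best_score_)

The liblinear solver is used because it supports both the 'l1' and 'l2' penalties.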
On top of that, this competition comes with challenges of its own: it is an imbalanced dataset, and the distribution differs slightly between the training set and the private LB. All of this works against you, especially when you use a simple algorithm like LR.
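For the imbalance specifically, one option is to weight the classes inversely to their frequencies and score with AUC rather than plain accuracy, which is misleading on skewed data. Again a sketch reusing X_tr and y_train; class_weight="balanced" and the C value here are my additions, not part of the original answer.

# Sketch: class weights to counter the imbalance, AUC instead of accuracy
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

log_bal = LogisticRegression(C=0.1, class_weight="balanced",
                             solver="liblinear", random_state=1)
scores = cross_val_score(log_bal, X_tr, y_train, cv=10, scoring="roc_auc")
print(scores.mean())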
https://stackoverflow.com/questions/38621685