I am working with a DataFrame df in pandas. I am performing a classification task and have two imbalanced classes, df['White'] and df['Non-white']. Because of this, I have built a pipeline that includes SMOTE and RandomUnderSampler.
My pipeline looks like this:
model = Pipeline([
    ('preprocessor', preprocessor),
    ('smote', over),
    ('random_under_sampler', under),
    ('classification', knn)
])
These are the steps in detail:
Pipeline(steps=[('preprocessor',
                 ColumnTransformer(remainder='passthrough',
                                   transformers=[('knnimputer', KNNImputer(),
                                                  ['policePrecinct']),
                                                 ('onehotencoder-1',
                                                  OneHotEncoder(), ['gender']),
                                                 ('standardscaler',
                                                  StandardScaler(),
                                                  ['long', 'lat']),
                                                 ('onehotencoder-2',
                                                  OneHotEncoder(),
                                                  ['neighborhood',
                                                   'problem'])])),
                ('smote', SMOTE()),
                ('random_under_sampler', RandomUnderSampler()),
                ('classification', KNeighborsClassifier())])
I want to evaluate different values of sampling_strategy for SMOTE and RandomUnderSampler. Can I do this directly in GridSearch while tuning the parameters? For now, I have written the for loop below. This loop does not work (ValueError: too many values to unpack (expected 2)).
strategy_sm = [0.1, 0.3, 0.5]
strategy_un = [0.15, 0.30, 0.50]
best_strat = []
for k, n in strategy_sm, strategy_un:
    over = SMOTE(sampling_strategy=k)
    under = RandomUnderSampler(sampling_strategy=n)
    model = Pipeline([
        ('preprocessor', preprocessor),
        ('smote', over),
        ('random_under_sampler', under),
        ('classification', knn)
    ])
    model.fit(X_train, y_train)
    best_strat.append(model.score(X_train, y_train))
I am not very proficient in Python, and I suspect there is a better way to do this. Also, I would like the for loop (if that is indeed the way to do it) to visualize the performance of the different sampling_strategy combinations. Any ideas?
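A note on the ValueError itself: `for k, n in strategy_sm, strategy_un:` builds a tuple of the two whole lists and then tries to unpack each three-element list into `(k, n)`, which fails. Pairing the lists with `zip()` (or taking every combination with `itertools.product()`, which is what a grid search does) is a minimal fix, sketched here without the pipeline details:

```python
from itertools import product

strategy_sm = [0.1, 0.3, 0.5]
strategy_un = [0.15, 0.30, 0.50]

# zip() walks the two lists in lockstep, yielding one pair per position:
# (0.1, 0.15), (0.3, 0.30), (0.5, 0.50)
paired = list(zip(strategy_sm, strategy_un))

# product() yields all 3 x 3 = 9 combinations, matching a full grid search
combinations = list(product(strategy_sm, strategy_un))

print(len(paired))        # 3
print(len(combinations))  # 9
```

Whether you want `zip` (3 matched pairs) or `product` (all 9 combinations) depends on the experiment; the grid-search approach in the answer below corresponds to `product`.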
Posted on 2021-12-18 16:16:49
Here is an example of how to compare the classifier's accuracy across different parameter combinations using stratified 3-fold cross-validation, and how to visualize the results.
import pandas as pd
import seaborn as sns
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
# generate some data
X, y = make_classification(n_classes=2, weights=[0.1, 0.9], n_features=20, random_state=42)
# define the pipeline
estimator = Pipeline([
    ('smote', SMOTE()),
    ('random_under_sampler', RandomUnderSampler()),
    ('classification', KNeighborsClassifier())
])
# define the parameter grid
param_grid = {
    'smote__sampling_strategy': [0.3, 0.4, 0.5],
    'random_under_sampler__sampling_strategy': [0.5, 0.6, 0.7]
}
# run a grid search to calculate the cross-validation
# accuracy associated to each parameter combination
clf = GridSearchCV(
    estimator=estimator,
    param_grid=param_grid,
    cv=StratifiedKFold(n_splits=3)
)
clf.fit(X, y)
# organize the grid search results in a data frame
res = pd.DataFrame(clf.cv_results_)
res = res.rename(columns={
    'param_smote__sampling_strategy': 'smote_strategy',
    'param_random_under_sampler__sampling_strategy': 'random_under_sampler_strategy',
    'mean_test_score': 'accuracy'
})
res = res[['smote_strategy', 'random_under_sampler_strategy', 'accuracy']]
print(res)
# smote_strategy random_under_sampler_strategy accuracy
# 0 0.3 0.5 0.829471
# 1 0.4 0.5 0.869578
# 2 0.5 0.5 0.899881
# 3 0.3 0.6 0.809269
# 4 0.4 0.6 0.819370
# 5 0.5 0.6 0.778669
# 6 0.3 0.7 0.708259
# 7 0.4 0.7 0.778966
# 8 0.5 0.7 0.768568
# plot the grid search results
res_ = res.pivot(index='smote_strategy', columns='random_under_sampler_strategy', values='accuracy')
sns.heatmap(res_, annot=True, cbar_kws={'label': 'accuracy'})
https://stackoverflow.com/questions/70404605