I'm trying to optimize an xgboost model for multi-class classification with BayesSearchCV. Here is my code.
n_iterations = 50

estimator = xgb.XGBClassifier(
    n_jobs=-1,
    objective="multi:softmax",
    eval_metric="merror",
    verbosity=0,
    num_class=3,
)

search_space = {
    "learning_rate": (0.01, 1.0, "log-uniform"),
    "min_child_weight": (0, 5),  # was listed twice; Python keeps only this later (0, 5) entry
    "max_depth": (1, 50),
    "max_delta_step": (0, 10),
    "subsample": (0.01, 1.0, "uniform"),
    "colsample_bytree": (0.01, 1.0, "log-uniform"),
    "colsample_bylevel": (0.01, 1.0, "log-uniform"),
    "reg_lambda": (1e-9, 1000, "log-uniform"),
    "reg_alpha": (1e-9, 1.0, "log-uniform"),
    "gamma": (1e-9, 0.5, "log-uniform"),
    "n_estimators": (5, 5000),
    "scale_pos_weight": (1e-6, 500, "log-uniform"),
}
cv = GroupKFold(n_splits=10)
# cv = StratifiedKFold(n_splits=3, shuffle=True)

bayes_cv_tuner = BayesSearchCV(
    estimator=estimator,
    search_spaces=search_space,
    scoring="accuracy",
    cv=cv,
    n_jobs=-1,
    n_iter=n_iterations,
    verbose=0,
    refit=True,
)
import pandas as pd
import numpy as np

def print_status(optimal_result):
    """Show the best parameters found and the accuracy attained by the search so far."""
    models_tested = pd.DataFrame(bayes_cv_tuner.cv_results_)
    best_parameters_so_far = pd.Series(bayes_cv_tuner.best_params_)
    print(
        "Model #{}\nBest accuracy so far: {}\nBest parameters so far: {}\n".format(
            len(models_tested),
            np.round(bayes_cv_tuner.best_score_, 3),
            bayes_cv_tuner.best_params_,
        )
    )
    clf_type = bayes_cv_tuner.estimator.__class__.__name__
    models_tested.to_csv(clf_type + "_cv_results_summary.csv")
result = bayes_cv_tuner.fit(X, y, callback=print_status, groups=data.groups)

When I run this, everything works fine until it reaches model #10, at which point it returns the following error:
Traceback (most recent call last):
  File "<ipython-input-189-dc299c53649b>", line 1, in <module>
    result = bayes_cv_tuner.fit(X, y, callback=print_status, groups=data_nobands2.AGREEMENT_NUMBER2)
  File "C:\Users\CatKa\Anaconda3\lib\site-packages\skopt\searchcv.py", line 694, in fit
    groups=groups, n_points=n_points_adjusted
  File "C:\Users\CatKa\Anaconda3\lib\site-packages\skopt\searchcv.py", line 565, in _step
    params = optimizer.ask(n_points=n_points)
  File "C:\Users\CatKa\Anaconda3\lib\site-packages\skopt\optimizer\optimizer.py", line 417, in ask
    opt._tell(x, y_lie)
  File "C:\Users\CatKa\Anaconda3\lib\site-packages\skopt\optimizer\optimizer.py", line 553, in _tell
    n_samples=self.n_points, random_state=self.rng))
  File "C:\Users\CatKa\Anaconda3\lib\site-packages\skopt\space\space.py", line 963, in transform
    columns[j] = self.dimensions[j].transform(columns[j])
  File "C:\Users\CatKa\Anaconda3\lib\site-packages\skopt\space\space.py", line 162, in transform
    return self.transformer.transform(X)
  File "C:\Users\CatKa\Anaconda3\lib\site-packages\skopt\space\transformers.py", line 304, in transform
    X = transformer.transform(X)
  File "C:\Users\CatKa\Anaconda3\lib\site-packages\skopt\space\transformers.py", line 251, in transform
    "be greater than %f" % self.low)
ValueError: All integer values should be greater than 0.000000

I've searched, of course, but haven't found anything useful. Any ideas?
By the way, just in case it matters: there are no negative values anywhere in my dataset.
Posted on 2020-11-15 10:17:53
I ran into exactly the same problem on a binary classification task. After successive tests, I found that the error was related to scale_pos_weight.
scale_pos_weight [default=1] controls the balance of positive and negative weights, which is useful for unbalanced classes. A typical value to consider: sum(negative instances) / sum(positive instances). See Parameters Tuning for more discussion. Also see the Higgs Kaggle competition demo for examples: R, py1, py2, py3.
Knowing my data, I set this parameter manually.
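For the binary case, that manual value can be computed straight from the label vector. A minimal numpy-only sketch (here `y` is a placeholder for your labels, and the commented-out lines show where the value would plug into the original script):

```python
import numpy as np

# hypothetical binary label vector: 0 = negative class, 1 = positive class
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])

# typical value from the XGBoost docs:
# sum(negative instances) / sum(positive instances)
scale_pos_weight = np.sum(y == 0) / np.sum(y == 1)
print(scale_pos_weight)  # 6 negatives / 2 positives -> 3.0

# then fix it on the estimator and drop it from the search space, e.g.:
# estimator = xgb.XGBClassifier(..., scale_pos_weight=scale_pos_weight)
# search_space.pop("scale_pos_weight", None)
```

With the parameter fixed up front, the (1e-6, 500) search dimension that triggers the error disappears from the space entirely.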
For multi-class classification, I suggest you pass per-sample weights via the weight parameter of xgboost.DMatrix when instantiating your data, as shown in this answer:
https://stackoverflow.com/questions/64126397
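As a sketch of that idea (my own illustration, not the linked answer's exact code): build inverse-frequency per-sample weights with numpy and hand them to xgboost. `y` is a placeholder label vector, and the DMatrix/fit calls are left as comments since they need the rest of the pipeline:

```python
import numpy as np

# hypothetical 3-class label vector
y = np.array([0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 2])

# "balanced"-style inverse-frequency weight for each class
classes, counts = np.unique(y, return_counts=True)
class_weight = len(y) / (len(classes) * counts)

# one weight per training sample, looked up by its class
sample_weight = class_weight[np.searchsorted(classes, y)]

# rarer classes get larger weights:
# class 0 (4 samples) -> 1.0, class 1 (2 samples) -> 2.0, class 2 (6 samples) -> ~0.67
# dtrain = xgb.DMatrix(X, label=y, weight=sample_weight)
# or, with the sklearn wrapper, pass it through as a fit parameter:
# bayes_cv_tuner.fit(X, y, sample_weight=sample_weight, ...)
```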