I am trying to tune the hyperparameters of an XGBoost model. I started with AWS SageMaker Hyperparameter Tuning, using the following parameter ranges:

```python
xgb.set_hyperparameters(eval_metric='auc',
                        objective='binary:logistic',
                        early_stopping_rounds=500,
                        rate_drop=0.1,
                        colsample_bytree=0.8,
                        subsample=0.75,
                        min_child_weight=0)

hyperparameter_ranges = {'eta': ContinuousParameter(0.01, 0.3),
                         'lambda': ContinuousParameter(0.1, 2),
                         'alpha': ContinuousParameter(0.5, 2),
                         'max_depth': IntegerParameter(5, 10),
                         'num_round': IntegerParameter(500, 2000)}

objective_metric_name = 'validation:auc'

tuner = HyperparameterTuner(xgb,
                            objective_metric_name,
                            hyperparameter_ranges,
                            max_jobs=10,
                            max_parallel_jobs=3,
                            tags=[{'Key': 'Application', 'Value': 'cxxx'}])
```

and got the best model with the following hyperparameter set:
```json
{
    "alpha": "1.4009334471163981",
    "eta": "0.05726016655019904",
    "lambda": "1.2070623852474922",
    "max_depth": "7",
    "num_round": "1052"
}
```

Out of curiosity, I plugged these hyperparameters into the xgboost package, like this:
```python
xgb_model = xgb.XGBClassifier(max_depth=7,
                              silent=False,
                              random_state=42,
                              n_estimators=1052,
                              learning_rate=0.05726016655019904,
                              objective='binary:logistic',
                              verbosity=1,
                              reg_alpha=1.4009334471163981,
                              reg_lambda=1.2070623852474922,
                              rate_drop=0.1,
                              colsample_bytree=0.8,
                              subsample=0.75,
                              min_child_weight=0)
```

I retrained the model and noticed that the latter gave better results than SageMaker:

xgboost (AUC on the validation set): 0.766
SageMaker best model (AUC on the validation set): 0.751
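For reference, a minimal sketch of how the local retraining and validation-AUC comparison might look. The dataset here is a synthetic stand-in, and passing `eval_set`/`early_stopping_rounds` to `fit()` (as the 0.90/1.x sklearn API allows) is an assumption made to mirror the `early_stopping_rounds=500` used in the SageMaker job:

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the real dataset, just to make the sketch runnable.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=42)

# Same hyperparameters as above; rate_drop is omitted because it only
# applies to the dart booster and XGBClassifier uses gbtree by default.
xgb_model = xgb.XGBClassifier(max_depth=7,
                              n_estimators=1052,
                              learning_rate=0.05726016655019904,
                              objective='binary:logistic',
                              reg_alpha=1.4009334471163981,
                              reg_lambda=1.2070623852474922,
                              colsample_bytree=0.8,
                              subsample=0.75,
                              min_child_weight=0,
                              random_state=42)

# Mirror the SageMaker setup: watch AUC on the validation set and stop
# early if it has not improved for 500 rounds (assumption -- the local
# run described above may or may not have used early stopping).
xgb_model.fit(X_train, y_train,
              eval_set=[(X_val, y_val)],
              eval_metric='auc',
              early_stopping_rounds=500,
              verbose=False)

val_auc = roc_auc_score(y_val, xgb_model.predict_proba(X_val)[:, 1])
print(f'validation AUC: {val_auc:.3f}')
```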
I am wondering why SageMaker performs so much worse here. If SageMaker generally performs worse than the xgboost package, how do people usually tune XGBoost hyperparameters? Thanks for any hints!
Posted on 2020-02-28 05:00:06
My first guess is that you are using different versions of XGBoost. Which image are you using? The open-source XGBoost with script mode enabled uses 0.90.
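A quick way to check which versions the two sides are actually running, assuming the SageMaker Python SDK v2 is installed (the `'0.90-2'` version string below is illustrative; pin whichever release you want both sides to share):

```python
import xgboost
import sagemaker
from sagemaker import image_uris

# Version of the xgboost package used for the local XGBClassifier run.
print('local xgboost version:', xgboost.__version__)

# Resolve the SageMaker built-in XGBoost container for an explicit framework
# version, so the tuning job and the local run use the same XGBoost release.
region = sagemaker.Session().boto_region_name
container = image_uris.retrieve('xgboost', region, version='0.90-2')
print('SageMaker XGBoost image:', container)
```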
https://stackoverflow.com/questions/60423734