I want to select the "optimal" hyperparameters for a gbm. To do so, I run the following code with the h2o package:
library(h2o)         # gbm and grid search
library(data.table)  # for %like% and as.data.table

# create hyperparameter grid
hyper_params = list(ntrees = c(10, 20, 50, 100, 200, 500),
                    max_depth = c(5, 10, 20, 30),
                    min_rows = c(5, 10, 20, 30),
                    learn_rate = c(0.01, 0.05, 0.08, 0.1),
                    balance_classes = c(TRUE, FALSE))

# random subset of the hyperparameter combinations
search_criteria = list(strategy = "RandomDiscrete", max_models = 192, seed = 42,
                       stopping_rounds = 15, stopping_tolerance = 1e-3,
                       stopping_metric = "mean_per_class_error")

# run grid
gbm.grid <- h2o.grid("gbm", grid_id = "gbm.grid",
                     x = names(td.train.h2o)[!names(td.train.h2o) %like% "s_sd_segment"],
                     y = "s_sd_segment",
                     seed = 42, distribution = "multinomial",
                     training_frame = td.train.hyper.h2o, nfolds = 3,
                     hyper_params = hyper_params, search_criteria = search_criteria)

# sort results by cross-validated mean per-class error (ascending = best first)
gbm.sorted.grid <- h2o.getGrid(grid_id = "gbm.grid", sort_by = "mean_per_class_error",
                               decreasing = FALSE)
hyperparameters_gbm <- as.data.table(gbm.sorted.grid@summary_table)

This gives the following best combination of hyperparameters:

learn_rate max_depth min_rows ntrees
      0.08        10        5    200
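For reference, the winning model itself can be pulled back out of the sorted grid to inspect its cross-validated metrics (a minimal sketch using the standard h2o accessors; best_gbm is just my variable name):

# retrieve the top-ranked model from the sorted grid
best_gbm <- h2o.getModel(gbm.sorted.grid@model_ids[[1]])
# 3-fold CV performance: reports logloss and mean_per_class_error, among others
h2o.performance(best_gbm, xval = TRUE)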
Then I tried to do the same thing, but with a different stopping_metric: above I used mean_per_class_error, here I use logloss. So I ran the following code:

# create hyperparameter grid (same as above)
hyper_params = list(ntrees = c(10, 20, 50, 100, 200, 500),
                    max_depth = c(5, 10, 20, 30),
                    min_rows = c(5, 10, 20, 30),
                    learn_rate = c(0.01, 0.05, 0.08, 0.1),
                    balance_classes = c(TRUE, FALSE))

# random subset of the hyperparameter combinations, now stopping on logloss
search_criteria = list(strategy = "RandomDiscrete", max_models = 192, seed = 42,
                       stopping_rounds = 15, stopping_tolerance = 1e-3,
                       stopping_metric = "logloss")

# run grid
gbm.grid <- h2o.grid("gbm", grid_id = "gbm.grid",
                     x = names(td.train.h2o)[!names(td.train.h2o) %like% "s_sd_segment"],
                     y = "s_sd_segment",
                     seed = 42, distribution = "multinomial",
                     training_frame = td.train.hyper.h2o, nfolds = 3,
                     hyper_params = hyper_params, search_criteria = search_criteria)

# sort results by cross-validated logloss (ascending = best first)
gbm.sorted.grid <- h2o.getGrid(grid_id = "gbm.grid", sort_by = "logloss",
                               decreasing = FALSE)
hyperparameters_gbm <- as.data.table(gbm.sorted.grid@summary_table)

This gives the following best combination of hyperparameters:

learn_rate max_depth min_rows ntrees
       0.1        20        5    500

I know I used strategy = "RandomDiscrete", but still: the best combination with stopping_metric = "mean_per_class_error" is only the 50th-best combination with stopping_metric = "logloss", and the second-best combination with stopping_metric = "logloss" is only the 14th-best combination with stopping_metric = "mean_per_class_error".
Why does this happen?
Posted on 2017-05-23 17:58:28
First of all, you are using different metrics to determine how well you are doing, so it is not surprising that different metrics find different hyperparameter settings better. Second, some hyperparameters may simply not matter for the problem you are solving, in which case any signal you appear to get from them is noise. Third, most machine learning algorithms are stochastic: there is randomness in training them, and sometimes in evaluating them, so even launching the exact same grid or random search twice can yield different "best" hyperparameters. That said, large rank shuffles like these are only likely when the actual performances are close to each other.
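To quantify how much the two metrics disagree, one option (a sketch, assuming the grid "gbm.grid" trained above is still on the H2O cluster) is to sort the same set of models by each metric and compare the two rankings:

# sort the same grid of models by each metric
g_mpce <- h2o.getGrid("gbm.grid", sort_by = "mean_per_class_error", decreasing = FALSE)
g_ll   <- h2o.getGrid("gbm.grid", sort_by = "logloss", decreasing = FALSE)

ids_mpce <- unlist(g_mpce@model_ids)  # model ids, best-first by mean_per_class_error
ids_ll   <- unlist(g_ll@model_ids)    # the same ids, best-first by logloss

# for each model, its rank under the logloss ordering, aligned to the
# mean_per_class_error ordering
rank_under_ll <- match(ids_mpce, ids_ll)

# Spearman correlation between the two rankings: values well below 1 mean the
# metrics genuinely order the models differently
cor(seq_along(ids_mpce), rank_under_ll, method = "spearman")

If the correlation is high overall but the top few positions are shuffled, that is exactly the near-tie situation described above: the leading models score within noise of each other, and either metric's number one is essentially arbitrary.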
https://datascience.stackexchange.com/questions/19146