
Extracting tuned hyperparameters from a tuning instance archive

Asked by a Stack Overflow user on 2022-09-08 18:24:04

I built an automated machine learning system based on the following example:

https://mlr-org.com/gallery/2021-03-11-practical-tuning-series-build-an-automated-machine-learning-system/

I used the learners xgboost and random forest together with branching. xgboost gave me the best result in the training phase, so I extracted the optimized hyperparameters and built the final xgboost model:

lrn = as_learner(graph)
# assign the hyperparameters of the overall best configuration (xgboost branch)
lrn$param_set$values = instance$result_learner_param_vals
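For completeness, the final step would be to train this learner on the full task, as in the gallery example (a minimal sketch; `task` stands for the task object from that example and is not defined in this question):

# Train the final xgboost model on the full task
# (`task` is the task object from the gallery example, assumed here).
lrn$train(task)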

I am also interested in the param_vals of the best performing random forest model.

I thought I could get the hyperparameters and save the best random forest model like this:

Arch = as.data.table(instance$archive, exclude_columns = NULL) # so we keep the uhash
best_RF = Arch[branch.selection == "lrn_ranger"]
best_RF = best_RF[which.min(best_RF$regr.rmse), ] # to get the best RF model from the data table
instance$archive$learner_param_vals(uhash = best_RF$uhash)

lrn_2 = as_learner(graph)
lrn_2$param_set$values = instance$archive$learner_param_vals(uhash = best_RF$uhash)
#lrn_2$param_set$values = instance$archive$learner_param_vals(i = best_RF$batch_nr)
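To verify the selected row, its tuned values can also be read straight from the archive's data.table. This is only a sketch under assumptions: recent mlr3tuning versions expose the per-evaluation values in an x_domain list column (some versions unnest it into x_domain_* columns instead), so the exact access may differ:

# Inspect the tuned values of the best RF row directly from the archive
# table (assumes an `x_domain` list column; adjust if your mlr3tuning
# version unnests it into x_domain_* columns).
best_RF$x_domain[[1]]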

However, when I use the uhash or the batch_nr, I cannot retrieve the hyperparameters of the best random forest model. I always receive the param_set of the first row of the archive, even though the uhash and batch_nr are correct:

$slct_1.selector
selector_name(c("T", "RH"))

$missind.which
[1] "missing_train"

$missind.type
[1] "factor"

$missind.affect_columns
selector_invert(selector_type(c("factor", "ordered", "character")))

$encode.method
[1] "treatment"

$encode.affect_columns
selector_type("factor")

$branch.selection
[1] "lrn_ranger"

$slct_2.selector
selector_name(c("T"))

$mutate.mutation
$mutate.mutation$Tra.Trafo
~(T^(2))/(2)


$mutate.delete_originals
[1] FALSE

$xgboost.nthread
[1] 1

$xgboost.verbose
[1] 0

$ranger.min.node.size
[1] 15

$ranger.mtry
[1] 1

$ranger.num.threads
[1] 1

$ranger.num.trees
[1] 26

$ranger.sample.fraction
[1] 0.8735846

Can someone give me a hint on how to extract these other hyperparameters, when I am interested in more than just the output of instance$result_learner_param_vals?

Edit:

I want to clarify something that is also related to branching. After reading @be_marc's comment, I am not sure whether it is intended to work that way. Let's use the gallery example I posted as the reference. I want to compare the results of the different tuned branches using a GraphLearner object. I created the final model like in the gallery example, which in my case is an xgboost model. I also want to create the final models for the other branches for benchmarking purposes. The issue: if I don't create a deep clone of the original graph_learner, the original graph_learner gets its value for the parameter branch.selection changed as well. Why can't I just use a normal clone? Why does it have to be a deep clone? Is it supposed to work like that? Most likely I just don't understand the difference between a clone and a deep clone.

# Reference for cloning https://mlr3.mlr-org.com/reference/Resampling.html
# equivalent to object called graph_learner in mlr3 gallery example 
graph_learner$param_set$values$branch.selection # original graph_learner object (reference MLR_gallery in first post)

# individually uncomment for different cases
# --------------------------------------------------------------------------------
#MLR_graph = graph # object graph_learner doesn't keep its original state
#MLR_graph = graph$clone() # object graph_learner doesn't keep its original state
MLR_graph = graph$clone(deep = TRUE) # object graph_learner keeps its original state
# --------------------------------------------------------------------------------
MLR_graph$param_set$values$branch.selection # value inherited from original graph
MLR_graph$param_set$values$branch.selection = "lrn_MLR" # change set value to other branch
MLR_graph$param_set$values$branch.selection # changed to branch "lrn_MLR"
MLR_lrn = as_learner(MLR_graph) # create a learner from graph with new set branch

# Here we can see the different behaviours based on if we don't clone, clone or deep clone
# at the end, the original graph_learner is supposed to keep its original state
graph_learner$param_set$values$branch.selection
MLR_lrn$param_set$values$branch.selection

When I don't use a deep clone, the overall best lrn (see the beginning of this post) is also affected. In my case that is the xgboost model: the parameter branch.selection of lrn gets set to lrn_MLR:

print(lrn)

<GraphLearner:slct_1.copy.missind.imputer_num.encode.featureunion.branch.nop_1.nop_2.slct_2.nop_3.nop_4.mutate.xgboost.ranger.MLR.unbranch>
* Model: list
* Parameters: slct_1.selector=<Selector>, missind.which=missing_train, missind.type=factor,
  missind.affect_columns=<Selector>, encode.method=treatment, encode.affect_columns=<Selector>,
  branch.selection=lrn_MLR, slct_2.selector=<Selector>, mutate.mutation=<list>, mutate.delete_originals=FALSE,
  xgboost.alpha=1.891, xgboost.eta=0.06144, xgboost.lambda=0.01341, xgboost.max_depth=3, xgboost.nrounds=122,
  xgboost.nthread=1, xgboost.verbose=0, ranger.num.threads=1
* Packages: mlr3, mlr3pipelines, stats, mlr3learners, xgboost, ranger
* Predict Types:  [response], se, distr
* Feature Types: logical, integer, numeric, character, factor, ordered, POSIXct
* Properties: featureless, hotstart_backward, hotstart_forward, importance, loglik, missings, oob_error,
  selected_features, weights
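For intuition on why only the deep clone preserves the original, here is a minimal, standalone R6 sketch (made-up classes, not mlr3 code). R6 objects have reference semantics: a shallow $clone() copies the outer object but still shares any nested R6 objects by reference, while $clone(deep = TRUE) by default also clones fields that are themselves R6 objects.

library(R6)

# Made-up classes purely for illustration; this is not mlr3 code.
Inner = R6Class("Inner", public = list(value = "original"))
Outer = R6Class("Outer", public = list(
  inner = NULL,
  initialize = function() {
    self$inner = Inner$new()
  }
))

orig = Outer$new()
shallow = orig$clone()          # copies Outer, but shares `inner` by reference
deep = orig$clone(deep = TRUE)  # by default also clones fields that are R6 objects

shallow$inner$value = "changed"
orig$inner$value  # "changed": the original was modified through the shallow clone
deep$inner$value  # "original": the deep clone is fully independent

A GraphLearner's $param_set is such a nested R6 object, which presumably explains the behaviour above: a shallow clone of the graph still points at the same ParamSet, so setting branch.selection on the copy also changes the original.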

Edit 2: Okay, I just found out that I should always use a deep clone when working with different learners in one experiment:

This behaviour is intended.


1 Answer

Answered by a Stack Overflow user on 2022-09-10 08:13:31 (accepted answer)

We fixed the bug in the latest development version (10.09.2022). You can install it with

remotes::install_github("mlr-org/mlr3")

The learners were not reassembled correctly. This works again:

library(mlr3pipelines)
library(mlr3tuning)

learner = po("subsample") %>>% lrn("classif.rpart", cp = to_tune(0.1, 1))

# hyperparameter tuning on the pima indians diabetes data set
instance = tune(
  method = "random_search",
  task = tsk("pima"),
  learner = learner,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  term_evals = 10
)

instance$archive$learner_param_vals(i = 1)
instance$archive$learner_param_vals(i = 2)
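Applied to the original question, the fixed learner_param_vals() should then return the row-specific values, which can be assigned to a fresh deep clone of the graph (a sketch reusing `graph`, `instance`, and `best_RF` from the question above; it assumes the patched mlr3 is installed):

# Rebuild the best random forest model with the row-specific
# hyperparameters from the archive (assumes the patched mlr3 and the
# objects `graph`, `instance`, `best_RF` from the question).
lrn_rf = as_learner(graph$clone(deep = TRUE))
lrn_rf$param_set$values = instance$archive$learner_param_vals(uhash = best_RF$uhash)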
Original question: https://stackoverflow.com/questions/73653563