
Understanding xgb.dump

Stack Overflow user
Asked on 2016-09-20 17:49:32
Answers: 1 · Views: 996 · Followers: 0 · Votes: 1

I am trying to understand what is going on in xgb.dump for a binary classification with an interaction depth of 1. In particular, how is the same split (f38 < 2.5) used twice in a row (lines [2] and [6] of the output)?

The resulting output is shown below:

xgb.dump(model_2,with.stats=T)
 [1] "booster[0]"
 [2] "0:[f38<2.5] yes=1,no=2,missing=1,gain=173.793,cover=6317"
 [3] "1:leaf=-0.0366182,cover=3279.75"
 [4] "2:leaf=-0.0466305,cover=3037.25"
 [5] "booster[1]"
 [6] "0:[f38<2.5] yes=1,no=2,missing=1,gain=163.887,cover=6314.25"
 [7] "1:leaf=-0.035532,cover=3278.65"
 [8] "2:leaf=-0.0452568,cover=3035.6"

Is the difference between the first use of f38 and the second use of f38 simply that fitting to the residuals is taking place? This seemed odd to me at first, and I want to understand exactly what is going on!

Thanks!


1 Answer

Stack Overflow user

Accepted answer

Answered on 2016-09-26 15:26:46

Is the difference between the first use of f38 and the second use of f38 simply that fitting to the residuals is taking place?

Most likely, yes: after the first round the gradients are updated, and in your example the same feature with the same split point is found to be the best again.
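
As a rough sketch of why that can happen (this is just the math the objective optimizes, not the library's code, and the sigmoid/grad/hess names below are illustrative): with objective = "binary:logistic", each round scores splits on per-row gradients and hessians computed from the current raw margin. The dumped leaf values are already scaled by eta, so after round 1 the margin only shifts by the leaf value, and if that shift is small the gradients, and hence the best split, barely change.

sigmoid <- function(f) 1 / (1 + exp(-f))
grad <- function(f, y) sigmoid(f) - y              # log-loss gradient w.r.t. the margin
hess <- function(f) sigmoid(f) * (1 - sigmoid(f))  # log-loss hessian

f0 <- 0                      # initial margin, assuming the default base_score of 0.5 (logit 0)
f1 <- f0 + (-0.0366182)      # after round 1: add booster[0]'s leaf value from the dump in the question
c(grad(f0, 1), grad(f1, 1))  # gradients for a y = 1 row before/after round 1: nearly identical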

Below is a reproducible example.

Note how, in the second example, I lowered the learning rate: it then finds the same feature and the same split point in all three rounds. In the first example it uses different features in each of the three rounds.

require(xgboost)
data(agaricus.train, package='xgboost')
train <- agaricus.train
dtrain <- xgb.DMatrix(data = train$data, label=train$label)

# high learning rate: the three trees split on different features (root splits f28, f59, f101 below)
bst <- xgboost(data = train$data, label = train$label, max_depth = 2, eta = 1,
               nrounds = 3, nthread = 2, objective = "binary:logistic")
xgb.dump(model = bst)
# [1] "booster[0]"                                 "0:[f28<-9.53674e-07] yes=1,no=2,missing=1" 
# [3] "1:[f55<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=1.71218"                            
# [5] "4:leaf=-1.70044"                            "2:[f108<-9.53674e-07] yes=5,no=6,missing=5"
# [7] "5:leaf=-1.94071"                            "6:leaf=1.85965"                            
# [9] "booster[1]"                                 "0:[f59<-9.53674e-07] yes=1,no=2,missing=1" 
# [11] "1:[f28<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.784718"                           
# [13] "4:leaf=-0.96853"                            "2:leaf=-6.23624"                           
# [15] "booster[2]"                                 "0:[f101<-9.53674e-07] yes=1,no=2,missing=1"
# [17] "1:[f66<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.658725"                           
# [19] "4:leaf=5.77229"                             "2:[f110<-9.53674e-07] yes=5,no=6,missing=5"
# [21] "5:leaf=-0.791407"                           "6:leaf=-9.42142"      

## lower learning rate (eta = .01): every tree finds the same splits (f28 at the root, then f55 and f108)
bst2 <- xgboost(data = train$data, label = train$label, max_depth = 2, eta = .01,
                nrounds = 3, nthread = 2, objective = "binary:logistic")
xgb.dump(model = bst2)
# [1] "booster[0]"                                 "0:[f28<-9.53674e-07] yes=1,no=2,missing=1" 
# [3] "1:[f55<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.0171218"                          
# [5] "4:leaf=-0.0170044"                          "2:[f108<-9.53674e-07] yes=5,no=6,missing=5"
# [7] "5:leaf=-0.0194071"                          "6:leaf=0.0185965"                          
# [9] "booster[1]"                                 "0:[f28<-9.53674e-07] yes=1,no=2,missing=1" 
# [11] "1:[f55<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.016952"                           
# [13] "4:leaf=-0.0168371"                          "2:[f108<-9.53674e-07] yes=5,no=6,missing=5"
# [15] "5:leaf=-0.0192151"                          "6:leaf=0.0184251"                          
# [17] "booster[2]"                                 "0:[f28<-9.53674e-07] yes=1,no=2,missing=1" 
# [19] "1:[f55<-9.53674e-07] yes=3,no=4,missing=3"  "3:leaf=0.0167863"                          
# [21] "4:leaf=-0.0166737"                          "2:[f108<-9.53674e-07] yes=5,no=6,missing=5"
# [23] "5:leaf=-0.0190286"                          "6:leaf=0.0182581"    
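
To see the additive residual fitting directly, one can compare the raw margins after one round and after two rounds; the per-row difference is booster[1]'s leaf contribution. This is a sketch using the bst2 and dtrain objects created above, and it assumes the outputmargin and ntreelimit arguments of the package's predict method.

m1 <- predict(bst2, dtrain, outputmargin = TRUE, ntreelimit = 1)  # log-odds after round 1
m2 <- predict(bst2, dtrain, outputmargin = TRUE, ntreelimit = 2)  # log-odds after rounds 1 + 2
# Each row's difference should be one of booster[1]'s leaf values from the dump above
# (roughly 0.016952, -0.0168371, -0.0192151 or 0.0184251)
table(round(m2 - m1, 6))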
Votes: 1
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/39600635
