
Replicating lasso simulation Example 1

Stack Overflow user
Asked on 2018-10-09 07:03:18
1 answer · 912 views · 0 followers · 0 votes

I want to replicate the results of Example 1 on page 280 of the original lasso paper.

  • The model is y = X*beta + sigma*epsilon, where epsilon ~ N(0, 1).
  • Simulate 50 datasets of 20/20/200 observations for the training/validation/test sets.
  • True beta = (3, 1.5, 0, 0, 2, 0, 0, 0).
  • sigma = 3.
  • The pairwise correlation between x_i and x_j is set to corr(i, j) = 0.5^|i-j|.

I used the training/validation/test split approach to estimate the test MSE. Before repeating the simulation, I computed a few test MSE estimates by hand to check that I was on the right track, but the estimates I get (between 9 and 15) are much larger than in the original paper (median 2.43). I have attached the code that produces the test MSE.
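For reference, the stated correlation structure can be built directly; this small numpy sketch (the names are mine, not from the question) mirrors the `outer(1:p, 1:p, ...)` call in the R code:

```python
import numpy as np

p = 8  # length of the true beta
idx = np.arange(p)
# pairwise correlation corr(i, j) = 0.5 ** |i - j|
cov = 0.5 ** np.abs(np.subtract.outer(idx, idx))
# the diagonal is 1; neighbours correlate at 0.5, then 0.25, 0.125, ...
```

A square root (e.g. a Cholesky factor) of this matrix turns i.i.d. standard-normal draws into predictors with exactly this correlation, which is what `MASS::mvrnorm` achieves internally.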

Any suggestions?

    library(MASS)
    library(glmnet)
    
    simfun <- function(trainn = 20, validationn = 20, testn = 200, corr = 0.5, sigma = 3, beta) {
      n <- trainn + testn + validationn
      p <- length(beta)
      Covmatrix <- outer(1:p, 1:p, function(x,y){corr^abs(x-y)})
      X <- mvrnorm(n, rep(0,p), Covmatrix) # MASS
      X <- X[sample(n),]
      y <- X%*%beta + rnorm(n,mean = 0,sd=sigma)
      trainx <- X[1:trainn,]
      validationx <- X[(trainn+1):(trainn+validationn),]
      testx <- X[(trainn+validationn+1):n,]
      trainy <- y[1:trainn,]
      validationy <- y[(trainn+1):(trainn+validationn),]
      testy <- y[(trainn+validationn+1):n,]
      list(trainx = trainx, validationx = validationx, testx = testx, 
           trainy = trainy, validationy = validationy, testy = testy)
    }
    
    beta <- c(3,1.5,0,0,2,0,0,0)
    data <- simfun(20,20,200,corr=0.5,sigma=3,beta)
    trainx <- data$trainx
    trainy <- data$trainy
    validationx <- data$validationx
    validationy <- data$validationy
    testx <- data$testx
    testy <- data$testy
    
    
    # training: find betas for all the lambdas
    betas <- coef(glmnet(trainx,trainy,alpha=1))
    
    # validation: compute validation test error for each lambda and find the minimum
    J.val <- colMeans((validationy-cbind(1,validationx)%*%betas)^2)
    beta.opt <- betas[, which.min(J.val)]
    
    # test
    test.mse <- mean((testy-cbind(1,testx)%*%beta.opt)^2)
    test.mse

1 Answer

Stack Overflow user

Accepted answer

Answered on 2018-10-11 14:03:41

This is a simulation study, so I don't think you need the training/validation approach; because of its randomness, it only adds variability. You can compute the expected test error directly from its definition:

  1. Generate multiple training datasets.
  2. Generate an independent test set.
  3. Fit the model on each training set.
  4. Compute the error on the test set.
  5. Average the errors.

    set.seed(1)
    simpfun <- function(n_train = 20, n_test = 10, simul = 50, corr = .5, sigma = 3,
                        beta = c(3, 1.5, 0, 0, 2, 0, 0, 0), lam_grid = 10^seq(-3, 5)) {
      require(foreach)
      require(tidyverse)
      # true model
      p <- length(beta)
      Covmatrix <- outer(1:p, 1:p, function(x, y) {
        corr^abs(x - y)
      })
      X <- foreach(i = 1:simul, .combine = rbind) %do% {
        MASS::mvrnorm(n_train, rep(0, p), Covmatrix)
      }
      eps <- rnorm(simul * n_train, mean = 0, sd = sigma)
      y <- X %*% beta + eps # generate true model
      # generate test set
      test <- MASS::mvrnorm(n_test, rep(0, p), Covmatrix)
      te_y <- test %*% beta + rnorm(n_test, mean = 0, sd = sigma) # test y
      simul_id <- gl(simul, k = n_train, labels = 1:simul)
      # expected test error
      Training <-
        y %>%
        as_tibble() %>%
        mutate(m_id = simul_id) %>%
        group_by(m_id) %>% # for each simulation
        do(yhat = predict(glmnet::cv.glmnet(X, y, alpha = 1, lambda = lam_grid),
                          newx = test, s = "lambda.min")) # choose lambda.min
      MSE <- sapply(Training$yhat, function(x) {
        mean((x - te_y)^2) # (y0 - fhat0)^2
      })
      mean(MSE) # 1/simul * sum(MSE)
    }
    simpfun()
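As a cross-check of steps 1-5 outside R, here is a minimal Python/numpy sketch; the coordinate-descent solver, the fixed penalty, and all names are my own simplification, not the answer's code:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j's contribution added back
            r = y - X @ beta + X[:, j] * beta[j]
            z = X[:, j] @ r / n
            beta[j] = np.sign(z) * max(abs(z) - lam, 0.0) / col_sq[j]  # soft threshold
    return beta

def expected_test_mse(n_train=20, n_test=200, n_sim=50, corr=0.5, sigma=3.0,
                      beta=(3, 1.5, 0, 0, 2, 0, 0, 0), lam=0.5, seed=1):
    rng = np.random.default_rng(seed)
    beta = np.asarray(beta, dtype=float)
    p = beta.size
    # corr(i, j) = corr^|i - j|, as in the question
    cov = corr ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    L = np.linalg.cholesky(cov)
    # step 2: one independent test set shared by all replications
    X_te = rng.standard_normal((n_test, p)) @ L.T
    y_te = X_te @ beta + sigma * rng.standard_normal(n_test)
    mses = []
    for _ in range(n_sim):
        # step 1: a fresh training set for each replication
        X_tr = rng.standard_normal((n_train, p)) @ L.T
        y_tr = X_tr @ beta + sigma * rng.standard_normal(n_train)
        b_hat = lasso_cd(X_tr, y_tr, lam)                 # step 3: fit
        mses.append(np.mean((y_te - X_te @ b_hat) ** 2))  # step 4: test error
    return float(np.mean(mses))                           # step 5: average
```

Because the target here is prediction error, any such estimate includes the irreducible noise variance sigma^2 = 9 as a lower bound, so values in the questioner's 9-15 range are expected for this quantity.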

Edit: for the tuning parameter,

    find_lambda <- function(.data, x, y, lambda, x_val, y_val) {
      .data %>%
        do(
          tuning = predict(glmnet::glmnet(x, y, alpha = 1, lambda = lambda), newx = x_val)
        ) %>%
        do( # tuning parameter: validation set
          mse = apply(.$tuning, 2, function(yhat, y) {
            mean((y - yhat)^2)
          }, y = y_val)
        ) %>%
        mutate(mse_min = min(mse)) %>%
        mutate(lam_choose = lambda[mse == mse_min]) # minimize mse
    }

Using this function, you can add the validation step.

    simpfun <- function(n_train = 20, n_val = 20, n_test = 10, simul = 50, corr = .5, sigma = 3, beta = c(3, 1.5, 0, 0, 2, 0, 0, 0), lam_grid = 10^seq(10, -1, length.out = 100)) {
    require(foreach)
    require(tidyverse)
    # true model
    p <- length(beta)
    Covmatrix <- outer(
      1:p, 1:p,
      function(x, y) {
        corr^abs(x - y)
      }
    )
    X <- foreach(i = 1:simul, .combine = rbind) %do% {
      MASS::mvrnorm(n_train, rep(0, p), Covmatrix)
    }
    eps <- rnorm(simul * n_train, mean = 0, sd = sigma) # one noise term per training row
    y <- X %*% beta + eps # generate true model
    # generate validation set
    val <- MASS::mvrnorm(n_val, rep(0, p), Covmatrix)
    val_y <- val %*% beta + rnorm(n_val, mean = 0, sd = sigma) # validation y
    # generate test set
    test <- MASS::mvrnorm(n_test, rep(0, p), Covmatrix)
    te_y <- test %*% beta + rnorm(n_test, mean = 0, sd = sigma) # test y
    simul_id <- gl(simul, k = n_train, labels = 1:simul) # one id per simulated dataset

    Y <-
      y %>%
      as_tibble() %>%
      mutate(m_id = simul_id) %>%
      group_by(m_id) %>% # for each simulation: repeat
      rename(y = V1)

    # Tuning parameter
    Tuning <-
      Y %>%
      find_lambda(x = X, y = y, lambda = lam_grid, x_val = val, y_val = val_y)

    # expected test error
    test_mse <-
      Tuning %>%
      mutate(
        test_err = mean(
          (predict(glmnet::glmnet(X, y, alpha = 1, lambda = lam_choose), newx = test) - te_y)^2
        )
      ) %>%
      select(test_err) %>%
      pull()
    mean(test_mse)
    }
    simpfun()
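The lambda selection inside `find_lambda` boils down to a grid search on validation MSE: predict on the validation set for every lambda on the grid and keep the minimizer. A small Python sketch of that idea (the `fit` callback and all names are hypothetical, not from the answer):

```python
import numpy as np

def choose_lambda(fit, X_val, y_val, lam_grid):
    """Return the penalty in lam_grid minimizing validation MSE.

    fit(lam) is assumed to return a coefficient vector for penalty lam."""
    mses = [float(np.mean((y_val - X_val @ fit(lam)) ** 2)) for lam in lam_grid]
    best = int(np.argmin(mses))
    return lam_grid[best], mses[best]

# demo with a closed-form ridge fit on noiseless data
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
b_true = np.array([1.0, -2.0])
y = X @ b_true
ridge = lambda lam: np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
# lam = 0 reproduces y exactly here, so best_lam == 0.0
best_lam, best_mse = choose_lambda(ridge, X, y, [0.0, 1.0, 10.0])
```

Separating the tuning rule from the fitting routine like this makes it easy to swap in a different model or a different error metric.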
1 vote
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's translation engine.
Original link:

https://stackoverflow.com/questions/52715098
