
R: object "Regression.Kernel" not found

Stack Overflow user
Asked on 2022-05-04 18:37:36
1 answer · 40 views · 0 following · 1 vote

I am trying to process my data ("final_ts") to forecast a time series using kernel functions and parallel computing. The procedure is: 1) analyze the empirical time series; 2) run standard variable selection, keeping only the variables and time lags that give the best validation error; 3) repeat the analysis while choosing the optimal regularization parameter and kernel function for the regularization algorithm. Finally, I report the RMSE (root mean square error).

The code is not mine. I have tried to set it up, but since I don't have much experience I cannot resolve the error that appears: object 'Regression.Kernel' not found. (I have already spent two days looking for a solution, so I would be very grateful for any help.)

So, the main code is the following (I will try to explain what is going on; please don't judge me):

#load the libraries:

#rm(list=ls(all=TRUE)) 
suppressMessages(library(Matrix))
suppressMessages(library(quantreg))
suppressMessages(library(parallel))
suppressMessages(library(compiler))
suppressMessages(library(lars))
suppressMessages(library(elasticnet))
suppressMessages(library(caret))
options(warn=-1)

#####################################################################################
#Download the sources (I'll show them in the end as a code)

###########Take the file###
ShowPlot = FALSE
lags = TRUE
ModelName = 'final_ts'
FileName = paste(ModelName, '.txt', sep = '')
########Calculate the logspace and the standard error functions #################
logspace <- function(d1, d2, n) exp(log(10)*seq(d1, d2, length.out=n)) 
std_err <- function(x) sd(x)/sqrt(length(x))
############# Choose the kernel function: there are 4 options; here we use the first one (exponential) #####
Kernel.Options = c('Exponential.Kernel', 'Epanechnikov.Kernel', 'TriCubic.Kernel', 'Matern.Kernel')
Regression.Kernel = Kernel.Options[1]

############# Choose the parameters for cross validation ####
lambda = logspace(-3,0,15) 
if(Regression.Kernel == 'Exponential.Kernel'){
  tht = seq(from = 0., to = 10, length = 30)         
}else{
  tht = seq(from = 0.1, to = 3, length = 20)     
}
parameters_on_grid = expand.grid(tht, lambda)     
### Read Time series
d = as.matrix(read.table(FileName, header= T))
######################
original.Embedding = c('Pro', 'Syn','Piceu')
original.TargetList = original.Embedding
d = d[, original.Embedding]
#### Here you take combinations of lags (best lag are 1 - 2) #####
x.lag = 1; y.lag = 2; z.lag = 1
sp.lag.selection = c(x.lag, y.lag, z.lag)
lagged.time.series = make.lagged.ts(d, sp.lag.selection)
d = lagged.time.series$time.series
original.col = lagged.time.series$original.variables
if(lags == TRUE){ var.sel = original.col; }else{ var.sel = colnames(d)}
##### Names and embedding in the lagged dataset
if(lags == TRUE){ colnames(d) = Embedding =  TargetList = LETTERS[1:ncol(d)]}else{
  Embedding =  TargetList = original.Embedding
}
##### length of training and test set (2 points for testing, 7 - for training)
length.testing = 2
length.training = nrow(d) - length.testing
#### Preserve training for the interactions
ts.train.preserved = d[1:length.training, var.sel]
std.ts.train = Standardizza(ts.train.preserved)
#### Preserve testing for the test (you want your algorithm to learn the real structure of the model)
ts.test.preserved = d[(length.training + 1):nrow(d), var.sel]
#### Training set:
d.training = Standardizza(d[1:length.training, ])
#### You now need to standardize the test set using mean and sd of the training set
d.testing = Standardizza.test(ts.test.preserved,ts.train.preserved)
############## Prepare for parallel computing
Lavoratori = detectCores() - 2
cl <- parallel::makeCluster(Lavoratori, setup_strategy = "sequential")
####
RegressionType = ELNET_fit_

alpha = 0.85

### should you compute all the variables or not?
BestModel = BestModelLOOCV(cl, d.training, TargetList, Embedding, parameters_on_grid, RegressionType,alpha)

I also found the function that computes the kernel (it can be kept in a separate R source file — Source 1):

Exponential.Kernel <- function(dst, theta){
  dbar <- mean(dst)
  krnl <- exp(-theta*dst/dbar)
  return(krnl)
}
Exponential.Kernel <- cmpfun(Exponential.Kernel)
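For intuition, the kernel turns distances into weights that decay with distance, scaled by the mean distance. A quick standalone check (illustrative only, not part of the original pipeline):

```r
# Weights decrease as distance grows; theta = 0 gives uniform weights of 1.
Exponential.Kernel <- function(dst, theta) {
  dbar <- mean(dst)
  exp(-theta * dst / dbar)
}

w <- Exponential.Kernel(c(0.5, 1, 2), theta = 1)
stopifnot(all(diff(w) < 0))                              # strictly decreasing weights
stopifnot(all(Exponential.Kernel(c(0.5, 1, 2), 0) == 1)) # theta = 0: all weights are 1
```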

The function that finds the best leave-one-out cross-validation parameters is the following (Source 2):

########################### Cross Validation (Leave-one-out) ##################################

BestModelLOOCV <- function(cl, X, TargetList, Embedding, grid, RegressionType,alpha){
  mine_output = Jacobian_(cl, X, TargetList, Embedding, grid, RegressionType,alpha)
  theta_opt = mine_output$th
  lambda_opt = mine_output$lm
 mine_c0  = mine_output$c0
  mine_output = mine_output$J
  
  J_ = list()
  C0_ = do.call(cbind, lapply(1:ncol(X), function(x, M) unlist(M[[x]]), mine_c0))
  colnames(C0_) = sapply(TargetList,function(x) paste("c0_", x, sep = ""))
  for(k in 1:(nrow(X) - 1)){
    J_[[k]] = do.call(rbind, lapply(1:ncol(X), function(x, M, i) unlist(M[[x]][i,]), mine_output, k))
    rownames(J_[[k]]) = Embedding
    colnames(J_[[k]]) = Embedding
    
  }
  BestCoefficients = list()
  BestCoefficients$J = J_
  BestCoefficients$c0 = C0_
  BestParameters = list()
  BestParameters$BestTH = theta_opt
  BestParameters$BestLM = lambda_opt
  return(list(BestCoefficients = BestCoefficients, BestParameters = BestParameters))
}
  
#####Compute the jacobian
Jacobian_ <- function(cl, X, TargetList, Embedding, grid, RegressionType,alpha){
  J = c0 = list()
  th = lm = c()
  n_ = 1
  FUN = match.fun(RegressionType)
  for(trg in TargetList){
    RegularizedParameters <- LOOCrossValidation(cl, X, trg, Embedding, grid, RegressionType,alpha)
    ########## Now compute the optimum regularized coefficients
    J[[n_]]  = FUN(X, trg, Embedding, RegularizedParameters$BestTH, RegularizedParameters$BestLM,alpha)
    th = c(th, RegularizedParameters$BestTH)
    lm = c(lm, RegularizedParameters$BestLM)
    c0[[n_]] = J[[n_]]$c0
    J[[n_]] = J[[n_]][-1]
    n_ = n_ + 1
  }
  return(list(J = J, c0 = c0, th = th, lm = lm))
}

To compute the elastic-net regularized fit, this function is used (Source 3):

ELNET_fit_ <- function(time.series, targ_col, Embedding, theta, lambda,alp){
  Edim <- length(Embedding)
  coeff_names <- sapply(colnames(time.series),function(x) paste("d", targ_col, "d", x, sep = ""))
  block <- cbind(time.series[2:dim(time.series)[1],targ_col],time.series[1:(dim(time.series)[1]-1),])
  block <- as.data.frame(apply(block, 2, function(x) (x-mean(x))/sd(x)))
  
  lib <- 1:dim(block)[1]
  pred <- 1:dim(block)[1]
  
  coeff <- array(0,dim=c(length(pred),Edim + 1))
  colnames(coeff) <- c('c0', coeff_names)
  coeff <- as.data.frame(coeff)
  
  for (ipred in 1:length(pred)){
    libs = lib[-pred[ipred]]
    q <- matrix(as.numeric(block[pred[ipred],2:dim(block)[2]]),
                ncol=Edim, nrow=length(libs), byrow = T)
    distances <- sqrt(rowSums((block[libs,2:dim(block)[2]] - q)^2))
    ### Kernel
    Krnl = match.fun(Regression.Kernel)
    Ws = Krnl(distances, theta)
    ############ Fit function
    x = as.matrix(block[libs,2:dim(block)[2]])
    y = as.matrix(block[libs,1])
    x = x[seq_along(y), ]
    y = y[seq_along(y)]
    Ws = Ws[seq_along(y)]
    x = Ws * cbind(1, x)
    y = Ws * y
    fit <- enet(x, y, lambda = lambda, normalize = TRUE, intercept = FALSE)
    coeff[ipred,] <- predict(fit, s = alp, type="coefficients", mode="fraction")$coefficients 
  }
  return(coeff)
}

ELNET_fit_ <- cmpfun(ELNET_fit_)

The auxiliary helper functions are the following (Source 4):

TakeLag <- function(X, species.to.lag, num.lag){
  tmp = matrix(0, nrow(X), num.lag)
  tmp[,1] = X[,species.to.lag]
  tmp[1, 1] = NA
  tmp[2:nrow(X), 1] = X[1:(nrow(X) - 1), species.to.lag]
  if(num.lag > 1){
    for(lag. in 2:num.lag){
      tmp[,lag.] = X[,species.to.lag]
      tmp[1, lag.] = NA
      tmp[2:nrow(X), lag.] = tmp[1:(nrow(tmp) - 1), lag.-1]
    }
  }
  tmp
}
make.lagged.ts <- function(X,sp.lag.selection ){
  ### X = time series
  ### sp.lag is a vector whose entry are the lags of each variable
  ### e.g., sp.lag = c(x.lag, y.lag, ..., u.lag)
  s = list()
  for(i in 1:length(sp.lag.selection)){
    Lag.sp = TakeLag(X, original.Embedding[i], sp.lag.selection[i])
    s[[i]] = cbind(X[,original.Embedding[i]], Lag.sp)
  }
  X = do.call(cbind,s)
  ### Remove the NA
  X = X[-c(1:max(sp.lag.selection)),]
  ### Save the position of the unlagged variables
  original.col = c()
  for(k in 1:length(sp.lag.selection)){
    if(k == 1){ original.col = c(original.col, 1)}else{
      num.lags = sum(unlist(lapply(1:(k-1), function(x,X) X[x], sp.lag.selection)))
      original.col = c(original.col, k + num.lags )
    }
  }
  return(list(time.series = X, original.variables = original.col))
}

take.coeff <- function(X, col.to.extract, original.emb){
  ### To use when prediction are made using lagged variables
  ### Take as input the sequence X of Jacobian along the attractor
  ### and the species to look at
  ### return a new sequence of Jacobian of the interaction among those species
  m = lapply(1:length(X$J), function(t, M, specie) M$J[[t]][specie,specie], 
                X, col.to.extract)
  for(i in 1:length(m)){
    colnames(m[[i]]) = rownames(m[[i]]) =original.emb
  }
  return(m)
}

Standardizza <- function(X){
  ### This return y = (x-meanx)/stdx
  for(i in 1:ncol(X)){
    X[,i] = (X[,i]- mean(X[,i]))/sd(X[,i])
  }
  return(X)
}
Standardizza.test <- function(X, Y){
  ### X = test set
  ### Y = training set
  ### This return y = (x-meanY)/stdY
  for(i in 1:ncol(X)){
    X[,i] = (X[,i]- mean(Y[,i]))/sd(Y[,i])
  }
  return(X)
}
###########################
#### Here you compute the quality of the forecast as mean correlation coefficient

The error from the main code reads: object 'Regression.Kernel' not found. But I can see it in the code — it is defined there. Maybe the problem has to do with its type? However, if I remove the quotes so that it becomes a closure, I can no longer restrict which kernel function is used.
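A note on the hunch about quotes (this is a standalone illustration, not part of the original script): base R's `match.fun()` accepts either a character string naming a function or the function object itself, so the quotes alone are unlikely to be the issue. What matters is whether the name is visible in the environment where `match.fun()` runs:

```r
# match.fun() resolves either a name (string) or a function object.
Exponential.Kernel <- function(dst, theta) exp(-theta * dst / mean(dst))

k1 <- match.fun("Exponential.Kernel")   # look up by name
k2 <- match.fun(Exponential.Kernel)     # pass the closure directly
d  <- c(1, 2, 3)
stopifnot(identical(k1(d, 0.5), k2(d, 0.5)))  # both resolve to the same function
```

If the lookup happens inside a cluster worker, globals defined in the master session are not automatically visible there and would first need to be shipped over, e.g. with `parallel::clusterExport(cl, "Regression.Kernel")` — an assumption about the failure mode here, since the error surfaces during the parallel cross-validation.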

Please help me if you can, because I don't know how to fix this.

The original dataset ("final_ts.txt"):

decy Temp CTD_S OxFix Pro Syn Piceu Naneu
  2011.74221     27.60333     36.20700     27.26667  58638.33333  13107.00000    799.66667    117.66667
  2011.74401     26.97950     36.13400     27.05000  71392.50000  13228.50000   1149.00000    116.50000
  2011.74617     24.99750     35.34450     24.80000 264292.00000  27514.00000   2434.50000    132.50000
  2011.74692     24.78400     35.25800     25.82500 208996.50000  39284.00000   3761.75000    220.75000
  2011.74774     27.34225     35.86800     27.82500 114617.25000  23115.00000   2337.00000    139.75000
  2011.74950     26.47875     36.18175     27.20000  97008.00000   9775.75000    855.50000     77.50000
  2011.75583     26.86500     36.14575     27.47500  76255.00000  10226.75000    783.00000     99.50000
  2011.75654     27.04550     36.04950     27.60000  95017.75000  10546.25000    915.25000     77.75000
  2011.75962     27.06567     36.46367     26.56667  75750.00000  10194.33333    687.00000     44.00000

Please help with any guesses you might have, as I cannot tell where it goes wrong.

I am also attaching the complete code I am trying to run (please try running it — I have inlined all the sources into the single block below):

rm(list=ls(all=TRUE)) 
suppressMessages(library(Matrix))
suppressMessages(library(quantreg))
suppressMessages(library(parallel))
suppressMessages(library(compiler))
suppressMessages(library(lars))
suppressMessages(library(elasticnet))
suppressMessages(library(caret))
options(warn=-1)

###########
ShowPlot = FALSE
lags = TRUE

######
### To make next step prediction
Testing <- function(C, C0, X){
  c0 = C0[nrow(C0), ]
  J = C[[length(C)]]
  return(c0 + J%*%X)
}
Add_to_TS <- function(TS, x){
  return(rbind(TS, x))
}
TakeLag <- function(X, species.to.lag, num.lag){
  tmp = matrix(0, nrow(X), num.lag)
  tmp[,1] = X[,species.to.lag]
  tmp[1, 1] = NA
  tmp[2:nrow(X), 1] = X[1:(nrow(X) - 1), species.to.lag]
  if(num.lag > 1){
    for(lag. in 2:num.lag){
      tmp[,lag.] = X[,species.to.lag]
      tmp[1, lag.] = NA
      tmp[2:nrow(X), lag.] = tmp[1:(nrow(tmp) - 1), lag.-1]
    }
  }
  tmp
}
make.lagged.ts <- function(X,sp.lag.selection ){
  ### X = time series
  ### sp.lag is a vector whose entry are the lags of each variable
  ### e.g., sp.lag = c(x.lag, y.lag, ..., u.lag)
  s = list()
  for(i in 1:length(sp.lag.selection)){
    Lag.sp = TakeLag(X, original.Embedding[i], sp.lag.selection[i])
    s[[i]] = cbind(X[,original.Embedding[i]], Lag.sp)
  }
  X = do.call(cbind,s)
  ### Remove the NA
  X = X[-c(1:max(sp.lag.selection)),]
  ### Save the position of the unlagged variables
  original.col = c()
  for(k in 1:length(sp.lag.selection)){
    if(k == 1){ original.col = c(original.col, 1)}else{
      num.lags = sum(unlist(lapply(1:(k-1), function(x,X) X[x], sp.lag.selection)))
      original.col = c(original.col, k + num.lags )
    }
  }
  return(list(time.series = X, original.variables = original.col))
}

take.coeff <- function(X, col.to.extract, original.emb){
  ### To use when prediction are made using lagged variables
  ### Take as input the sequence X of Jacobian along the attractor
  ### and the species to look at
  ### return a new sequence of Jacobian of the interaction among those species
  m = lapply(1:length(X$J), function(t, M, specie) M$J[[t]][specie,specie], 
             X, col.to.extract)
  for(i in 1:length(m)){
    colnames(m[[i]]) = rownames(m[[i]]) =original.emb
  }
  return(m)
}
naive.forecast <- function(last.point.training, test.set){
  #### Return the naive forecast, i.e., the test set is the last point of the training set
  naive.pred = matrix(0,nrow(test.set), ncol(test.set))
  for(j in 1:ncol(naive.pred)){
    naive.pred[,j] = rep(last.point.training[j], nrow(naive.pred))  
  }
  return(compute.rmse.test(naive.pred, test.set))
}
### Compute the rmse between two multivariate time series
compute.rmse.train <- function(X, Y){
  X = X[-1,]
  rmse = c()
  for(i in 1:ncol(X)){
    combine_xy = cbind(X[,i], Y[,i])
    rmse = c(rmse, sqrt(mean(unlist(lapply(1:nrow(combine_xy), 
                                           function(x, A) (A[x,1] - A[x,2])^2, combine_xy)))))
  }
  return(mean(rmse))
}
compute.rmse.test <- function(X, Y){
  rmse = c()
  for(i in 1:ncol(X)){
    combine_xy = cbind(X[,i], Y[,i])
    rmse = c(rmse, sqrt(mean(unlist(lapply(1:nrow(combine_xy), 
                                           function(x, A) (A[x,1] - A[x,2])^2, combine_xy)))))
  }
  return(mean(rmse))
}

ReadTimeSeries <- function(Nome){
  X = as.matrix(read.table(Nome))
  colnames(X) =  LETTERS[1:ncol(X)]
  return(X)
}
Standardizza <- function(X){
  ### This return y = (x-meanx)/stdx
  for(i in 1:ncol(X)){
    X[,i] = (X[,i]- mean(X[,i]))/sd(X[,i])
  }
  return(X)
}
Standardizza.test <- function(X, Y){
  ### X = test set
  ### Y = training set
  ### This return y = (x-meanY)/stdY
  for(i in 1:ncol(X)){
    X[,i] = (X[,i]- mean(Y[,i]))/sd(Y[,i])
  }
  return(X)
}
###########################
#### Here you compute the quality of the forecast as mean correlation coefficient
#### And we set to zero all those forecast that predict an extinction
MeanCorrelation <- function(TS, X){
  rho  = c()
  for(i in 1:ncol(X)){
    rho = c(rho, cor(TS[,i], X[,i]))
  }
  return(mean(rho))
}

##########
LOOCrossValidation <- function(cl, data.df, targ.sp, Embedding, grid, RegressionType,alpha){
  #### It is a bit long but at least you are sure that parallelization does not mess things up
  S_target <- parLapply(cl, 1:nrow(grid), CV, data.df, targ.sp, Embedding, grid, RegressionType,alpha)
  error.mat = cbind(unlist(S_target)[attr(unlist(S_target),"names") == "bndwth"],
                    unlist(S_target)[attr(unlist(S_target),"names") == "lmb"],
                    unlist(S_target)[attr(unlist(S_target),"names") == "mse"])
  rownames(error.mat) = c()
  error.mat[,3] = round(error.mat[,3],4)
  error.mat = error.mat[order(error.mat[,3]), ]
  idx = which(error.mat[,3] == min(error.mat[,3]))
  idx.th = which(grid[idx,1] == min(grid[idx,1]))
  idx = idx[min(idx.th)]
  return(list(BestTH = error.mat[idx,1],
              BestLM = error.mat[idx,2],
              min.var.err = error.mat[idx,3],
              val.err = error.mat))
}
#################################################
CV <- function(i, time_series_training, target_species, Embedding, TL, RegressionType,alpha){
  FUN = match.fun(RegressionType)
  #### Fit the model
  coefficients = FUN(time_series_training, target_species, Embedding, TL[i,1], TL[i,2],alpha)
  #### Take the variables in the time series that belong to the embedding
  time_series_training = time_series_training[,Embedding]
  #### Standardize the time series
  time_series_training = Standardizza(time_series_training)
  #### Take the forecast
  Data = Forecast_SMap(coefficients, time_series_training)
  time_series_training = time_series_training[-1,]
  Data = Data[-length(Data)]
  D = cbind(time_series_training[,target_species], Data)
  #### Compute the mean square error
  MSE = mean(unlist(lapply(1:nrow(D), function(x, A) (A[x,1] - A[x,2])^2, D)))
  #return(MSE)
  return(list(mse = MSE, bndwth = TL[i,1], lmb = TL[i,2]))
}

######################################################
Forecast_SMap <- function(C, X){
  n_coeff = ncol(X)
  predizione = c()
  for(k in 1:nrow(X)){
    s = unlist(lapply(1:n_coeff, function(x, J, i) return(J[[x + 1]][i]), C, k))
    predizione = c(predizione, C[[1]][k] + s%*%X[k,])
  }
  return(predizione)
}

Jacobian_ <- function(cl, X, TargetList, Embedding, grid, RegressionType,alpha){
  J = c0 = list()
  th = lm = c()
  n_ = 1
  FUN = match.fun(RegressionType)
  for(trg in TargetList){
    RegularizedParameters <- LOOCrossValidation(cl, X, trg, Embedding, grid, RegressionType,alpha)
    ########## Now compute the optimum regularized coefficients
    J[[n_]]  = FUN(X, trg, Embedding, RegularizedParameters$BestTH, RegularizedParameters$BestLM,alpha)
    th = c(th, RegularizedParameters$BestTH)
    lm = c(lm, RegularizedParameters$BestLM)
    c0[[n_]] = J[[n_]]$c0
    J[[n_]] = J[[n_]][-1]
    n_ = n_ + 1
  }
  return(list(J = J, c0 = c0, th = th, lm = lm))
}

Reconstructing <- function(J, C0, X){
  return(C0 + matrix(unlist(J),length(X), length(X)) %*% X)
}
###
RootMeanSquareError <- function(X, Y){
  rmse <- unlist(lapply(1:length(X), function(x, X, Y) (X[x] - Y[x])^2, X, Y))
  return(sqrt(mean(rmse)))
}
###
ReconstructionOfTrainingSet <- function(TimeSeries, Jac){
  prd = c()
  TimeSeries = Standardizza(TimeSeries)
  for(i in 1:(nrow(TimeSeries) - 1)){
    prd <- rbind(prd, t(Reconstructing(Jac$J[i], Jac$c0[i,], TimeSeries[i,])))
  }
  return(prd)
}
#####################################################################
#####################################################################
ComputeTrainingError <- function(TimeSeries, Jac, real.var){
  ############ First I check how the reconstructed time series looks like
  rmse = rho = c()
  TimeSeries = Standardizza(TimeSeries)
  prd = ReconstructionOfTrainingSet(TimeSeries, Jac)
  ########################################################################
  for(i in real.var){
    #### Important: you need to remove the first point because you 
    #### are making in sample predictions
    observed = TimeSeries[,i]
    observed = observed[-1]
    rmse = c(rmse, RootMeanSquareError(prd[,i], observed))
    rho = c(rho, cor(prd[,i], observed))
  }
  return(list(rmse = median(rmse), rho = median(rho)))
}

ELNET_fit_ <- function(time.series, targ_col, Embedding, theta, lambda,alp){
  Edim <- length(Embedding)
  coeff_names <- sapply(colnames(time.series),function(x) paste("d", targ_col, "d", x, sep = ""))
  block <- cbind(time.series[2:dim(time.series)[1],targ_col],time.series[1:(dim(time.series)[1]-1),])
  block <- as.data.frame(apply(block, 2, function(x) (x-mean(x))/sd(x)))
  
  lib <- 1:dim(block)[1]
  pred <- 1:dim(block)[1]
  
  coeff <- array(0,dim=c(length(pred),Edim + 1))
  colnames(coeff) <- c('c0', coeff_names)
  coeff <- as.data.frame(coeff)
  
  for (ipred in 1:length(pred)){
    libs = lib[-pred[ipred]]
    q <- matrix(as.numeric(block[pred[ipred],2:dim(block)[2]]),
                ncol=Edim, nrow=length(libs), byrow = T)
    distances <- sqrt(rowSums((block[libs,2:dim(block)[2]] - q)^2))
    ### Kernel
    Exponential.Kernel <- function(dst, theta){
      dbar <- mean(dst)
      krnl <- exp(-theta*dst/dbar)
      return(krnl)
    }
    
    Krnl = match.fun(Exponential.Kernel)
    Ws = Krnl(distances, theta)
    ############ Fit function
    x = as.matrix(block[libs,2:dim(block)[2]])
    y = as.matrix(block[libs,1])
    x = x[seq_along(y), ]
    y = y[seq_along(y)]
    Ws = Ws[seq_along(y)]
    x = Ws * cbind(1, x)
    y = Ws * y
    fit <- enet(x, y, lambda = lambda, normalize = TRUE, intercept = FALSE)
    coeff[ipred,] <- predict(fit, s = alp, type="coefficients", mode="fraction")$coefficients 
  }
  return(coeff)
}

ELNET_fit_ <- cmpfun(ELNET_fit_)

ModelName = 'final_ts'
FileName = paste(ModelName, '.txt', sep = '')
###################################
logspace <- function(d1, d2, n) exp(log(10)*seq(d1, d2, length.out=n)) 
std_err <- function(x) sd(x)/sqrt(length(x))
############# Choose the kernel

############# Parameters for cross validation
lambda = logspace(-3,0,15) 
#if(Regression.Kernel == 'Exponential.Kernel'){
tht = seq(from = 0., to = 10, length = 30)         
#}else{
#  tht = seq(from = 0.1, to = 3, length = 20)     
#}
parameters_on_grid = expand.grid(tht, lambda)

### Read Time series
d = as.matrix(read.table(FileName, header= T))
######################
original.Embedding = c('Pro', 'Syn','Piceu')
original.TargetList = original.Embedding
d = d[, original.Embedding]
#### Here you take combinations of lags (best lag are 1 - 2)
x.lag = 1; y.lag = 2; z.lag = 1
sp.lag.selection = c(x.lag, y.lag, z.lag)
lagged.time.series = make.lagged.ts(d, sp.lag.selection)
d = lagged.time.series$time.series
original.col = lagged.time.series$original.variables
if(lags == TRUE){ var.sel = original.col; }else{ var.sel = colnames(d)}
##### Names and embedding in the lagged dataset
if(lags == TRUE){ colnames(d) = Embedding =  TargetList = LETTERS[1:ncol(d)]}else{
  Embedding =  TargetList = original.Embedding
}
##### length of training and test set
length.testing = 2
length.training = nrow(d) - length.testing
#### Preserve training for the interactions
ts.train.preserved = d[1:length.training, var.sel]
std.ts.train = Standardizza(ts.train.preserved)
#### Preserve testing for the test (you want your algorithm to learn the real structure of the model)
ts.test.preserved = d[(length.training + 1):nrow(d), var.sel]
#### Training set:
d.training = Standardizza(d[1:length.training, ])
#### You now need to standardize the test set using mean and sd of the training set
d.testing = Standardizza.test(ts.test.preserved,ts.train.preserved)
############## Prepare for parallel computing
Lavoratori = detectCores() - 2
cl <- parallel::makeCluster(Lavoratori, setup_strategy = "sequential")
####
RegressionType = ELNET_fit_
#RegressionType = ridge_fit_
alpha = 0.85
parameters_on_grid = expand.grid(tht, lambda)
lambda = logspace(-3,0,15)
tht = seq(from = 0., to = 10, length = 30)
### should you compute all the variables or not?

BestModelLOOCV <- function(cl, X, TargetList, Embedding, grid, RegressionType,alpha){
  mine_output = Jacobian_(cl, X, TargetList, Embedding, grid, RegressionType,alpha)
  theta_opt = mine_output$th
  lambda_opt = mine_output$lm
  mine_c0  = mine_output$c0
  mine_output = mine_output$J
  
  J_ = list()
  C0_ = do.call(cbind, lapply(1:ncol(X), function(x, M) unlist(M[[x]]), mine_c0))
  colnames(C0_) = sapply(TargetList,function(x) paste("c0_", x, sep = ""))
  for(k in 1:(nrow(X) - 1)){
    J_[[k]] = do.call(rbind, lapply(1:ncol(X), function(x, M, i) unlist(M[[x]][i,]), mine_output, k))
    rownames(J_[[k]]) = Embedding
    colnames(J_[[k]]) = Embedding
    
  }
  BestCoefficients = list()
  BestCoefficients$J = J_
  BestCoefficients$c0 = C0_
  BestParameters = list()
  BestParameters$BestTH = theta_opt
  BestParameters$BestLM = lambda_opt
  return(list(BestCoefficients = BestCoefficients, BestParameters = BestParameters))
}

BestModel = BestModelLOOCV(cl, d.training, TargetList = TargetList, Embedding = Embedding, grid = parameters_on_grid, RegressionType = RegressionType, alpha = alpha)

1 Answer

Stack Overflow user

Answered on 2022-05-04 19:01:47

When you call

BestModel = BestModelLOOCV(cl, d.training, TargetList, Embedding, parameters_on_grid)

it silently ignores the fact that the BestModelLOOCV function has two more parameters:

BestModelLOOCV <- function(cl, X, TargetList, Embedding, grid, RegressionType,alpha){
 ...
}

In this case, a few lines into the function body, the call to match.fun(RegressionType) forces that missing argument, cannot find it, and fails.
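The promise-forcing behavior can be reproduced in isolation; this is a hypothetical minimal function, not the code from the question:

```r
# R evaluates arguments lazily: a missing argument only errors when it is forced.
f <- function(x, RegressionType) {
  x2 <- x * 2                        # fine: RegressionType is not touched yet
  FUN <- match.fun(RegressionType)   # forcing the promise fails if it is missing
  FUN(x2)
}

f(3, sqrt)                           # works: returns sqrt(6)
msg <- tryCatch(f(3), error = conditionMessage)
print(msg)  # 'argument "RegressionType" is missing, with no default'
```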

When adding it to your call, you should also add alpha:

BestModel = BestModelLOOCV(cl, d.training, TargetList, Embedding, parameters_on_grid,
  RegressionType = RegressionType, alpha = alpha)

The argument names here are optional; I added them just for demonstration.

2 votes
The original content of this page is from Stack Overflow.
Original link:

https://stackoverflow.com/questions/72117911
