
Out of memory during R vegan simper analysis

Stack Overflow user
Asked 2015-01-06 06:19:44
1 answer · 405 views · 0 followers · Score: 1

I'm trying to run a simper analysis (vegan package) in R on a large data set. I've had some success on my local machine (10 cores, 16 GB RAM) with smaller data sets. However, when I scale the analysis up to a larger data set, the code terminates with an error such as:

error: cannot allocate vector of size XX gb

So I ran the same analysis on an Amazon AWS instance (specifically, an r3.8xlarge instance: 32 cores, 244 GB RAM) and got the same error, this time more specific:

error: cannot allocate vector of size 105.4 gb
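As a rough sanity check on that number (my arithmetic, not from the original thread): R stores numeric vectors as 8-byte doubles, so a failed 105.4 GB allocation corresponds to a single vector of roughly 14 billion elements:

```r
# Back-of-envelope check, not from the original post: how many doubles
# are in the 105.4 GiB vector that R tried (and failed) to allocate?
bytes_requested <- 105.4 * 1024^3   # GiB -> bytes
n_elements <- bytes_requested / 8   # 8 bytes per double
n_elements                          # on the order of 1.4e10 elements
```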

Both systems I've tried (local and AWS) are Ubuntu machines, and both report the following sessionInfo():

R version 3.0.2 (2013-09-25)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C
 [9] LC_ADDRESS=C               LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

Here are the relevant lines of code I'm running:

# load vegan for simper(); the excerpt below omits the library call
library(vegan)

# read in data as DFs for mapping file
print("Loading mapping file...")
map_df = read.table(map, sep="\t", header=TRUE, strip.white=T)
rownames(map_df) = map_df[,1] # make first column the index so that we can join on it
map_df[,1] <- NULL # remove first column (we just turned it into the index)

# read in data as DF for biom file
print("Loading biom file...")
biom_df = data.frame(read.table(biom_file, sep="\t", header=TRUE), stringsAsFactors=FALSE)
biom_cols = dim(biom_df)[2] # number of columns in biom file, represents all the samples
otu_names <- as.vector(biom_df[,biom_cols]) # get otu taxonomy (last column) and save for later
biom_df[,biom_cols] <- NULL # remove taxonomy column
biom_df <- t(biom_df) # transpose to get OTUs as columns
biom_cols = dim(biom_df)[2] # number of columns in biom file, represents all the OTUs (now that we've transposed)

# merge our biom_df with map_df so that we reduce the samples down to those given in map_df
merged = merge(biom_df, map_df, by="row.names")
merged_cols = dim(merged)[2]

# clear some memory
rm(biom_df)
print("Total memory used:")
print(object.size(x=lapply(ls(), get)), units="Mb")


# simper analysis
print("Running simper analysis...")
sim <- simper(merged[,2:(biom_cols+1)], merged[,merged_cols], parallel=10)
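One way to see why simper blows up at this scale (a sketch based on my reading of vegan's implementation, not something stated in the thread): for each pair of groups, simper builds a dense contribution matrix with one row per between-group sample pair and one column per species, so memory grows with n_i * n_j * n_species. A hypothetical helper to estimate that cost before running:

```r
# Hypothetical helper (not part of vegan): estimate, in GiB, the dense
# contribution matrix simper builds for one pair of groups --
# (n_i * n_j) rows by n_species columns of 8-byte doubles.
simper_pair_gib <- function(n_i, n_j, n_species) {
  n_i * n_j * n_species * 8 / 1024^3
}

# e.g. two groups of 500 samples each with 50,000 OTU columns
simper_pair_gib(500, 500, 50000)  # ~93 GiB, the same order as the error above
```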

Any ideas?


1 Answer

Stack Overflow user
Answered 2015-01-06 13:31:00

From the information you've provided, it isn't clear at what point your machine runs out of memory. You appear to be using base R functions throughout your analysis. You may want to try the data.table package (in particular its fread function, which is much faster than read.table).
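A minimal sketch of that suggestion, using a tiny stand-in for the question's tab-separated biom_file (fread returns a data.table, which can be converted to a plain data.frame as in the original script):

```r
library(data.table)  # provides fread (assumed installed)

# Tiny stand-in for the question's tab-separated biom_file
biom_file <- tempfile(fileext = ".txt")
writeLines(c("OTU1\tOTU2\ttaxonomy",
             "3\t0\tBacteria",
             "1\t7\tArchaea"), biom_file)

# fread is a much faster near drop-in replacement for read.table here;
# sep and header are spelled out to mirror the original call
biom_dt <- fread(biom_file, sep = "\t", header = TRUE)
biom_df <- as.data.frame(biom_dt)  # plain data.frame, as in the original script
```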

Score: 0
Original page content provided by Stack Overflow; translation supported by Tencent Cloud's engine.
Original link:

https://stackoverflow.com/questions/27788895
