There are 900+ folders on my local drive, and each folder contains a file with a .dat extension. I want to loop through each folder to access the file inside, pull out only certain data, and write that data to a new file. Each .dat file looks like this -
Authors:
# Pallavi Subhraveti
# Quang Ong
# Tim Holland
# Anamika Kothari
# Ingrid Keseler
# Ron Caspi
# Peter D Karp
# Please see the license agreement regarding the use of and distribution of this file.
# The format of this file is defined at http://bioinformatics.ai.sri.com
# Version: 21.5
# File Name: compounds.dat
# Date and time generated: October 24, 2017, 14:52:45
# Attributes:
# UNIQUE-ID
# TYPES
# COMMON-NAME
# ABBREV-NAME
# ACCESSION-1
# ANTICODON
# ATOM-CHARGES
# ATOM-ISOTOPES
# CATALYZES
# CFG-ICON-COLOR
# CHEMICAL-FORMULA
# CITATIONS
# CODONS
# COFACTORS-OF
# MOLECULAR-WEIGHT
# MONOISOTOPIC-MW
[Data Chunk 1]
UNIQUE-ID - CPD0-1108
TYPES - D-Ribofuranose
COMMON-NAME - β-D-ribofuranose
ATOM-CHARGES - (9 -1)
ATOM-CHARGES - (6 1)
CHEMICAL-FORMULA - (C 5)
CHEMICAL-FORMULA - (H 14)
CHEMICAL-FORMULA - (N 1)
CHEMICAL-FORMULA - (O 6)
CHEMICAL-FORMULA - (P 1)
CREDITS - SRI
CREDITS - kaipa
DBLINKS - (CHEBI "10647" NIL |kothari| 3594051403 NIL NIL)
DBLINKS - (BIGG "37147" NIL |kothari| 3584718837 NIL NIL)
DBLINKS - (PUBCHEM "25200464" NIL |taltman| 3466375284 NIL NIL)
DBLINKS - (LIGAND-CPD "C01233" NIL |keseler| 3342798255 NIL NIL)
INCHI - InChI=1S/C5H14NO6P/c6-1-2-11-13(9,10)12-4-5(8)3-7/h5,7-8H,1-4,6H2,(H,9,10)
MOLECULAR-WEIGHT - 215.142
MONOISOTOPIC-MW - 216.0636987293
NON-STANDARD-INCHI - InChI=1S/C5H14NO6P/c6-1-2-11-13(9,10)12-4-5(8)3-7/h5,7-8H,1-4,6H2,(H,9,10)
SMILES - C(OP([O-])(OCC(CO)O)=O)C[N+]
SYNONYMS - sn-Glycero-3-phosphoethanolamine
SYNONYMS - 1-glycerophosphorylethanolamine
[Data Chunk 2]
//
UNIQUE-ID - URIDINE
TYPES - Pyrimidine
....
....there are about 18,000 lines in each file (viewing the data in Notepad++). Now I want to create a new file and copy only specific columns from the data. I only want these columns copied into the newly created file, which should look like this -
UNIQUE-ID TYPES COMMON-NAME CHEMICAL-FORMULA BIGG ID CHEMSPIDER ID CAS ID CHEBI ID PUBCHEM ID MOLECULAR-WEIGHT MONOISOTOPIC-MW
CPD0-1108 D-Ribofuranose β-D-ribofuranose C5H14N1O6P1 37147 NA NA 10647 25200464 215.142 216.0636987293
URIDINE Pyrimidine ...
Not every data chunk in every file necessarily has information for all the columns I need, which is why I put NA in those columns in the output table. That said, blank values in those columns are perfectly fine too, since the blanks can be handled separately later.
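(For illustration: the CHEMICAL-FORMULA column in the desired output is the five separate CHEMICAL-FORMULA lines of a chunk collapsed into one string. A minimal Python sketch of just that collapsing step, assuming the `KEY - (Elem count)` layout shown in the sample; the helper name `collapse_formula` is made up:)

```python
import re

def collapse_formula(lines):
    """Join repeated "CHEMICAL-FORMULA - (Elem n)" lines into one string like C5H14N1O6P1."""
    parts = []
    for line in lines:
        m = re.match(r"CHEMICAL-FORMULA - \((\w+) (\d+)\)", line)
        if m:
            parts.append(m.group(1) + m.group(2))  # e.g. "C" + "5" -> "C5"
    return "".join(parts)

# the five formula lines from Data Chunk 1 above
sample = [
    "CHEMICAL-FORMULA - (C 5)",
    "CHEMICAL-FORMULA - (H 14)",
    "CHEMICAL-FORMULA - (N 1)",
    "CHEMICAL-FORMULA - (O 6)",
    "CHEMICAL-FORMULA - (P 1)",
]
print(collapse_formula(sample))  # C5H14N1O6P1
```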
Here are the directories containing the data -
File 1] -> C:\Users\robbie\Desktop\Organism_Data\aact1035194-hmpcyc\compounds.dat
File 2] -> C:\Users\robbie\Desktop\Organism_Data\aaph679198-hmpcyc\compounds.dat
File 3] -> C:\Users\robbie\Desktop\Organism_Data\yreg1002368-hmpcyc\compounds.dat
File 4] -> C:\Users\robbie\Desktop\Organism_Data\tden699187-hmpcyc\compounds.dat
...
...I am indeed leaning toward using the dir function in R, referencing this post, but I can't figure out what to put in the function's pattern argument when writing the code, because the organism names (the folder names) are very odd and inconsistent.
Any help in getting the desired output is greatly appreciated. I was thinking of ways to do this in R, but if I get good suggestions and an approach for handling this in Python, I'm willing to try it in Python as well. Thanks in advance for your help!
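(Since Python is on the table: the pattern argument problem goes away entirely if you match the fixed file name rather than the irregular folder names. A minimal sketch using `pathlib`, with the base path taken from the question:)

```python
from pathlib import Path

def find_dat_files(base):
    """Recursively collect every compounds.dat under base, one per organism folder.

    The irregular folder names never need a pattern: the file name is constant.
    """
    return sorted(Path(base).rglob("compounds.dat"))

# base path from the question
for f in find_dat_files(r"C:\Users\robbie\Desktop\Organism_Data"):
    print(f)
```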
Edit: link to the data - data
Posted on 2018-07-11 17:14:52
Another approach; in this case I only read the file you provided, but it can read multiple files.
I added some intermediate results to show what the code is actually doing.
library(tidyverse)
library(data.table)
library(zoo)
# create a character vector with the desired files; note the pattern is a regex, not a glob
filenames <- list.files( path = getwd(), pattern = "\\.dat$", recursive = TRUE, full.names = TRUE )
# > filenames
#[1] "C:/Users/********/Documents/Git/udls2/test.dat"
#read in the files, using data.table's fread.. here I grep lines starting with UNIQUE-ID or TYPES. create your desired regex-pattern
pattern <- "^UNIQUE-ID|^TYPES"
content.list <- lapply( filenames, function(x) fread( x, sep = "\n", header = FALSE )[grepl( pattern, V1 )] )
# > content.list
# [[1]]
# V1
# 1: UNIQUE-ID - CPD0-1108
# 2: TYPES - D-Ribofuranose
# 3: UNIQUE-ID - URIDINE
# 4: TYPES - Pyrimidine
#add all content to a single data.table
dt <- rbindlist( content.list )
# > dt
# V1
# 1: UNIQUE-ID - CPD0-1108
# 2: TYPES - D-Ribofuranose
# 3: UNIQUE-ID - URIDINE
# 4: TYPES - Pyrimidine
#split the text into a variable name and its content
dt <- dt %>% separate( V1, into = c("var", "content"), sep = " - ")
# > dt
# var content
# 1: UNIQUE-ID CPD0-1108
# 2: TYPES D-Ribofuranose
# 3: UNIQUE-ID URIDINE
# 4: TYPES Pyrimidine
#add an increasing id for every UNIQUE-ID
dt[var == "UNIQUE-ID", id := seq_len( .N )]
# > dt
# var content id
# 1: UNIQUE-ID CPD0-1108 1
# 2: TYPES D-Ribofuranose NA
# 3: UNIQUE-ID URIDINE 2
# 4: TYPES Pyrimidine NA
#fill down id for all variables found
dt[, id := na.locf( dt$id )]
# > dt
# var content id
# 1: UNIQUE-ID CPD0-1108 1
# 2: TYPES D-Ribofuranose 1
# 3: UNIQUE-ID URIDINE 2
# 4: TYPES Pyrimidine 2
#cast
dcast(dt, id ~ var, value.var = "content")
# id TYPES UNIQUE-ID
# 1: 1 D-Ribofuranose CPD0-1108
# 2: 2 Pyrimidine URIDINE
Posted on 2018-07-11 16:51:24
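(The same long-to-wide idea translates to pandas almost line for line, since the asker is open to Python. A hedged sketch on the same four lines; a cumulative sum over the UNIQUE-ID flag plays the role of the id column plus the na.locf fill:)

```python
import pandas as pd

lines = [
    "UNIQUE-ID - CPD0-1108",
    "TYPES - D-Ribofuranose",
    "UNIQUE-ID - URIDINE",
    "TYPES - Pyrimidine",
]
# split each line into a variable name and its content
df = pd.DataFrame([l.split(" - ", 1) for l in lines], columns=["var", "content"])
# every UNIQUE-ID line starts a new record, so cumsum numbers the records
df["id"] = (df["var"] == "UNIQUE-ID").cumsum()
# pivot from long to wide, one row per record
wide = df.pivot(index="id", columns="var", values="content")
print(wide)
```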
A single file
Break it into a few logical operations:
text2chunks <- function(txt) {
chunks <- split(txt, cumsum(grepl("^\\[Data Chunk.*\\]$", txt)))
Filter(function(a) grepl("^\\[Data Chunk.*\\]$", a[1]), chunks)
}
chunk2dataframe <- function(vec, hdrs = NULL, sep = " - ") {
s <- stringi::stri_split(vec, fixed=sep, n=2L)
s <- Filter(function(a) length(a) == 2L, s)
df <- as.data.frame(setNames(lapply(s, `[[`, 2), sapply(s, `[[`, 1)),
stringsAsFactors=FALSE)
if (! is.null(hdrs)) df <- df[ names(df) %in% make.names(hdrs) ]
df
}
hdrs is an optional vector of column names to keep; if not provided (or NULL), all key/value pairs are returned as columns.
hdrs <- c("UNIQUE-ID", "TYPES", "COMMON-NAME")
Using the data (shown below), I have lines, a character vector from a single file:
head(lines)
# [1] "Authors:"
# [2] "# Pallavi Subhraveti"
# [3] "# Quang Ong"
# [4] "# Please see the license agreement regarding the use of and distribution of this file."
# [5] "# The format of this file is defined at http://bioinformatics.ai.sri.com"
# [6] "# Version: 21.5"
str(text2chunks(lines))
# List of 2
# $ 1: chr [1:5] "[Data Chunk 1]" "UNIQUE-ID - CPD0-1108" "TYPES - D-Ribofuranose" "COMMON-NAME - β-D-ribofuranose" ...
# $ 2: chr [1:6] "[Data Chunk 2]" "// something out of place here?" "UNIQUE-ID - URIDINE" "TYPES - Pyrimidine" ...
str(lapply(text2chunks(lines), chunk2dataframe, hdrs=hdrs))
# List of 2
# $ 1:'data.frame': 1 obs. of 3 variables:
# ..$ UNIQUE.ID : chr "CPD0-1108"
# ..$ TYPES : chr "D-Ribofuranose"
# ..$ COMMON.NAME: chr "β-D-ribofuranose"
# $ 2:'data.frame': 1 obs. of 3 variables:
# ..$ UNIQUE.ID : chr "URIDINE"
# ..$ TYPES : chr "Pyrimidine"
# ..$ COMMON.NAME: chr "β-D-ribofuranose or something"
The final product:
dplyr::bind_rows(lapply(text2chunks(lines), chunk2dataframe, hdrs=hdrs))
# UNIQUE.ID TYPES COMMON.NAME
# 1 CPD0-1108 D-Ribofuranose β-D-ribofuranose
# 2 URIDINE Pyrimidine β-D-ribofuranose or something
Since you want to iterate over many files, it makes sense to create a convenience function for this:
text2dataframe <- function(txt) {
dplyr::bind_rows(lapply(text2chunks(txt), chunk2dataframe, hdrs=hdrs))
}
Many files
Untested, but should work:
files <- list.files(path="C:/Users/robbie/Desktop/Organism_Data/",
pattern="compounds.dat", recursive=TRUE, full.names=TRUE)
alldata <- lapply(files, readLines)
allframes <- lapply(alldata, text2dataframe)
oneframe <- dplyr::bind_rows(allframes)
Notes:
stringi::stri_split rather than strsplit is just for the convenience of the n= argument; doing this in base R is not hard, it just takes a couple of extra lines of code.
dplyr::bind_rows because it handles missing columns and different column orders nicely; base rbind.data.frame could be used with some extra effort/care.
data.frame-izing things tends to nudge the column names a little, just something to be aware of.
Data:
# lines <- readLines("some_filename.dat")
fulltext <- 'Authors:
# Pallavi Subhraveti
# Quang Ong
# Please see the license agreement regarding the use of and distribution of this file.
# The format of this file is defined at http://bioinformatics.ai.sri.com
# Version: 21.5
# File Name: compounds.dat
# Date and time generated: October 24, 2017, 14:52:45
# Attributes:
# UNIQUE-ID
# TYPES
[Data Chunk 1]
UNIQUE-ID - CPD0-1108
TYPES - D-Ribofuranose
COMMON-NAME - β-D-ribofuranose
DO-NOT-CARE - 42
[Data Chunk 2]
// something out of place here?
UNIQUE-ID - URIDINE
TYPES - Pyrimidine
COMMON-NAME - β-D-ribofuranose or something
DO-NOT-CARE - 43
'
lines <- strsplit(fulltext, '[\r\n]+')[[1]]
https://stackoverflow.com/questions/51289972