I have two large unstructured text files that cannot fit in memory. I want to find the words the two have in common.
What is the most efficient way (in time and space) to do this?
Thanks
Posted on 2016-02-11 18:38:06
Given these two files:
pi_poem
Now I will a rhyme construct
By chosen words the young instruct
I do not like green eggs and ham
I do not like them Sam I am

pi_prose
The thing I like best about pi is the magic it does with circles.
Even young kids can have fun with the simple integer approximations.

The code is simple. The first loop reads the first file line by line and inserts each word into a lexicon set. The second loop reads the second file; every word it finds in the first file's lexicon goes into a set of common words.
Does this do what you need? You will have to adapt it to handle punctuation, and you will probably want to remove the extra prints once it works.
lexicon = set()
with open("pi_poem", 'r') as text:
    for line in text:            # iterate lazily instead of readlines(), so the file is never fully in memory
        for word in line.split():
            lexicon.add(word)    # set.add already ignores duplicates
print lexicon

common = set()
with open("pi_prose", 'r') as text:
    for line in text:
        for word in line.split():
            if word in lexicon:
                common.add(word)
print common

Output:
set(['and', 'am', 'instruct', 'ham', 'chosen', 'young', 'construct', 'Now', 'By', 'do', 'them', 'I', 'eggs', 'rhyme', 'words', 'not', 'a', 'like', 'Sam', 'will', 'green', 'the'])
set(['I', 'the', 'like', 'young'])

https://stackoverflow.com/questions/35313051
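The same pair of loops can be written more compactly with a generator and set.intersection(), which accepts any iterable, so the second file is streamed and never stored in its own set. A minimal Python 3 sketch; the words helper and the inline sample lists stand in for the open files and are illustrative, not part of the original answer:

```python
def words(lines):
    # Yield words from an iterable of lines (for example an open file object),
    # so nothing larger than one line is held in memory at a time.
    for line in lines:
        for word in line.split():
            yield word

# Stand-ins for the two files; with real files you would pass open(path).
poem = ["I do not like green eggs and ham",
        "I do not like them Sam I am"]
prose = ["The thing I like best about pi is the magic it does with circles."]

# Build a set from the first stream only, then intersect with the second
# stream; intersection() consumes the generator without materializing it.
lexicon = set(words(poem))
common = lexicon.intersection(words(prose))
print(sorted(common))  # -> ['I', 'like']
```

As in the answer above, only the first file's vocabulary must fit in memory; the second file contributes one word at a time.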