I am generating n-grams from sentences with NLTK, first removing the given stop words. The problem is that nltk.pos_tag() is very slow, taking up to 0.6 s per sentence on my CPU (Intel i7).

Output:
['The first time I went, and was completely taken by the live jazz band and atmosphere, I ordered the Lobster Cobb Salad.']
0.620481014252
["It's simply the best meal in NYC."]
0.640982151031
['You cannot go wrong at the Red Eye Grill.']
0.644664049149

The code:
for sentence in source:
    nltk_ngrams = None
    if stop_words is not None:
        start = time.time()
        sentence_pos = nltk.pos_tag(word_tokenize(sentence))
        print time.time() - start
        filtered_words = [word for (word, pos) in sentence_pos if pos not in stop_words]
    else:
        filtered_words = ngrams(sentence.split(), n)

Is pos_tag really that slow, or am I doing something wrong here?
Posted on 2015-11-12 16:58:33
Use pos_tag_sents to tag multiple sentences:
>>> import time
>>> from nltk.corpus import brown
>>> from nltk import pos_tag
>>> from nltk import pos_tag_sents
>>> sents = brown.sents()[:10]
>>> start = time.time(); pos_tag(sents[0]); print time.time() - start
0.934092998505
>>> start = time.time(); [pos_tag(s) for s in sents]; print time.time() - start
9.5061340332
>>> start = time.time(); pos_tag_sents(sents); print time.time() - start
0.939551115036

Posted on 2016-10-04 07:58:05
nltk.pos_tag is defined as:
from nltk.tag.perceptron import PerceptronTagger

def pos_tag(tokens, tagset=None):
    tagger = PerceptronTagger()
    return _pos_tag(tokens, tagset, tagger)

So every call to pos_tag instantiates the PerceptronTagger, which takes a considerable amount of computation time. You can save this time by calling tagger.tag directly:
from nltk.tag.perceptron import PerceptronTagger

tagger = PerceptronTagger()
sentence_pos = tagger.tag(word_tokenize(sentence))

Posted on 2015-11-20 07:51:48
If you are looking for another POS tagger with fast performance in Python, you might want to try RDRPOSTagger. For example, on English POS tagging, its single-threaded Python implementation achieves a tagging speed of 8K words/second on a Core2Duo 2.4 GHz machine, and you can get an even faster speed in multi-threaded mode. RDRPOSTagger obtains very competitive accuracy compared with state-of-the-art taggers and now provides pre-trained models for 40 languages. See the experimental results in its paper.
https://stackoverflow.com/questions/33676526