
What to download in order to make nltk.tokenize.word_tokenize work?

Stack Overflow user
Asked on 2016-05-08 14:49:14
4 answers · 61.3K views · 22 votes

I am going to use nltk.tokenize.word_tokenize on a cluster where my account is limited by a space quota. At home, I downloaded all nltk resources with nltk.download(), but, as I found out, it takes ~2.5GB.

That looks like overkill to me. Could you suggest what the minimal (or almost minimal) dependencies for nltk.tokenize.word_tokenize are? So far I've seen nltk.download('punkt'), but I'm not sure whether it is sufficient and what size it is. What exactly should I run to make it work?


4 Answers

Stack Overflow user

Accepted answer

Posted on 2016-05-08 15:46:31

You are right. You need the Punkt tokenizer models. They take 13 MB, and nltk.download('punkt') should do the trick.
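
For a quota-limited account, a minimal sketch might look like the following (the custom download_dir path is only an example; without it, NLTK downloads to a default location such as ~/nltk_data):

import nltk

# Fetch only the Punkt models (~13 MB) instead of the full ~2.5GB set.
# download_dir is optional; the path below is just an example.
nltk.download('punkt', download_dir='/home/me/nltk_data')

# If you used a custom download_dir, tell NLTK where to look for it.
nltk.data.path.append('/home/me/nltk_data')

from nltk import word_tokenize
print(word_tokenize('This is a sentence.'))
# ['This', 'is', 'a', 'sentence', '.']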

40 votes

Stack Overflow user

Posted on 2016-05-09 08:44:13

In short:

import nltk
nltk.download('punkt')

would suffice.

In long:

You don't need to download all the models and corpora available in NLTK if you're only going to use it for tokenization.

Actually, if you're just using word_tokenize(), you won't really need any of the resources from nltk.download(). If we look at the code, the default word_tokenize(), which is basically the TreebankWordTokenizer, shouldn't use any additional resources:

alvas@ubi:~$ ls nltk_data/
chunkers  corpora  grammars  help  models  stemmers  taggers  tokenizers
alvas@ubi:~$ mv nltk_data/ tmp_move_nltk_data/
alvas@ubi:~$ python
Python 2.7.11+ (default, Apr 17 2016, 14:00:29) 
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk import word_tokenize
>>> from nltk.tokenize import TreebankWordTokenizer
>>> tokenizer = TreebankWordTokenizer()
>>> tokenizer.tokenize('This is a sentence.')
['This', 'is', 'a', 'sentence', '.']

But:

alvas@ubi:~$ ls nltk_data/
chunkers  corpora  grammars  help  models  stemmers  taggers  tokenizers
alvas@ubi:~$ mv nltk_data/ tmp_move_nltk_data
alvas@ubi:~$ python
Python 2.7.11+ (default, Apr 17 2016, 14:00:29) 
[GCC 5.3.1 20160413] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from nltk import sent_tokenize
>>> sent_tokenize('This is a sentence. This is another.')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 90, in sent_tokenize
    tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 801, in load
    opened_resource = _open(resource_url)
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 919, in _open
    return find(path_, path + ['']).open()
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 641, in find
    raise LookupError(resource_not_found)
LookupError: 
**********************************************************************
  Resource u'tokenizers/punkt/english.pickle' not found.  Please
  use the NLTK Downloader to obtain the resource:  >>>
  nltk.download()
  Searched in:
    - '/home/alvas/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - u''
**********************************************************************

>>> from nltk import word_tokenize
>>> word_tokenize('This is a sentence.')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 106, in word_tokenize
    return [token for sent in sent_tokenize(text, language)
  File "/usr/local/lib/python2.7/dist-packages/nltk/tokenize/__init__.py", line 90, in sent_tokenize
    tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 801, in load
    opened_resource = _open(resource_url)
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 919, in _open
    return find(path_, path + ['']).open()
  File "/usr/local/lib/python2.7/dist-packages/nltk/data.py", line 641, in find
    raise LookupError(resource_not_found)
LookupError: 
**********************************************************************
  Resource u'tokenizers/punkt/english.pickle' not found.  Please
  use the NLTK Downloader to obtain the resource:  >>>
  nltk.download()
  Searched in:
    - '/home/alvas/nltk_data'
    - '/usr/share/nltk_data'
    - '/usr/local/share/nltk_data'
    - '/usr/lib/nltk_data'
    - '/usr/local/lib/nltk_data'
    - u''
**********************************************************************

But if we look at the code (.py#L93), that's not the case: word_tokenize() seems to implicitly call sent_tokenize(), which requires the punkt model.
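
Conceptually, that means word_tokenize() behaves roughly like the sketch below (this is an illustration of the behavior described above, not NLTK's actual implementation):

from nltk.data import load
from nltk.tokenize import TreebankWordTokenizer

def word_tokenize_sketch(text, language='english'):
    # Split into sentences with the pickled punkt model, then run the
    # Treebank tokenizer over each sentence and flatten the result.
    sent_tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
    word_tokenizer = TreebankWordTokenizer()
    return [token
            for sent in sent_tokenizer.tokenize(text)
            for token in word_tokenizer.tokenize(sent)]

print(word_tokenize_sketch('This is a sentence. This is another.'))
# ['This', 'is', 'a', 'sentence', '.', 'This', 'is', 'another', '.']

This is also why importing word_tokenize succeeds without nltk_data while calling it fails: the punkt pickle is only loaded at call time.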

I'm not sure whether this is a bug or a feature, but it seems that, given the current code, the old idiom:

>>> from nltk import sent_tokenize, word_tokenize
>>> sentences = 'This is a foo bar sentence. This is another sentence.'
>>> tokenized_sents = [word_tokenize(sent) for sent in sent_tokenize(sentences)]
>>> tokenized_sents
[['This', 'is', 'a', 'foo', 'bar', 'sentence', '.'], ['This', 'is', 'another', 'sentence', '.']]

can simply be:

>>> word_tokenize(sentences)
['This', 'is', 'a', 'foo', 'bar', 'sentence', '.', 'This', 'is', 'another', 'sentence', '.']

But note that word_tokenize() flattens the list of lists of strings into a single flat list of strings.

Alternatively, you can try the new tok-tok tokenizer, which was added to NLTK as toktok.py based on https://github.com/jonsafari/tok-tok and requires no pre-trained models.
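
A quick sketch, assuming your NLTK version already ships this tokenizer as nltk.tokenize.ToktokTokenizer:

from nltk.tokenize import ToktokTokenizer

# Tok-tok is purely rule-based, so it needs no pickled model and works
# without any nltk.download() call.
toktok = ToktokTokenizer()
print(toktok.tokenize('This is a foo bar sentence.'))
# ['This', 'is', 'a', 'foo', 'bar', 'sentence', '.']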

13 votes

Stack Overflow user

Posted on 2021-08-08 15:04:20

If your Lambda contains large NLTK pickle files, the inline code editor won't be able to edit the function. Use a Lambda layer instead: you can upload the NLTK data as a layer and reference it in your code like this:

nltk.data.path.append("/opt/tmp_nltk")
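
A slightly fuller sketch of the handler, assuming the layer zip unpacks the NLTK data into a tmp_nltk folder (Lambda mounts layer contents under /opt; the folder name and handler signature here are this example's assumptions):

import nltk

# Point NLTK at the data shipped in the Lambda layer.
nltk.data.path.append("/opt/tmp_nltk")

from nltk import word_tokenize

def handler(event, context):
    # punkt is resolved via the layer path appended above.
    return word_tokenize(event.get("text", ""))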
1 vote
Original content from Stack Overflow: https://stackoverflow.com/questions/37101114