I have tried several methods of loading the google news word2vec vectors (https://code.google.com/archive/p/word2vec/):
en_nlp = spacy.load('en',vector=False)
en_nlp.vocab.load_vectors_from_bin_loc('GoogleNews-vectors-negative300.bin')

The above gives:

MemoryError: Error assigning 18446744072820359357 bytes

I have also tried with the .gz packed vectors; or by loading and saving them with gensim to a new format:
from gensim.models.word2vec import Word2Vec
model = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model.save_word2vec_format('googlenews2.txt')

This file then contains the words and their word vectors, one per line. I tried to load them with:

en_nlp.vocab.load_vectors('googlenews2.txt')

but it returns "0".
What is the correct way to do this?
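For reference, the word2vec text format that gensim writes is a header line with the vocabulary size and dimensionality, followed by one "word v1 v2 ... vN" line per entry. A minimal pure-Python sketch of writing and reading that layout (toy words and values, not the real GoogleNews data):

```python
# Toy stand-in for a word2vec text file -- the real GoogleNews file
# has 3,000,000 rows of 300 floats.
vectors = {
    "apple":  [0.1, 0.2, 0.3],
    "banana": [0.4, 0.5, 0.6],
}

def write_word2vec_txt(path, vecs):
    """Write vectors in the word2vec text format: header, then one word per line."""
    with open(path, "w", encoding="utf-8") as f:
        dims = len(next(iter(vecs.values())))
        f.write(f"{len(vecs)} {dims}\n")  # header: "rows dims"
        for word, vec in vecs.items():
            f.write(word + " " + " ".join(str(x) for x in vec) + "\n")

def read_word2vec_txt(path):
    """Parse the same format back into a dict of word -> list of floats."""
    out = {}
    with open(path, encoding="utf-8") as f:
        n_rows, dims = map(int, f.readline().split())  # consume the header line
        for line in f:
            parts = line.rstrip("\n").split(" ")
            out[parts[0]] = [float(x) for x in parts[1:]]
    return out

write_word2vec_txt("toy_vectors.txt", vectors)
loaded = read_word2vec_txt("toy_vectors.txt")
```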
Update:
I can load a file that I created myself into spacy. I use a test.txt file with "string 0.0 0.0 ...." on each line. Then I zip this txt with .bzip2 to test.txt.bz2. Then I create a spacy-compatible binary file:

spacy.vocab.write_binary_vectors('test.txt.bz2', 'test.bin')

which I can load into spacy:

nlp.vocab.load_vectors_from_bin_loc('test.bin')

This works! However, when I do the same process for my googlenews2.txt, I get the following error:

lib/python3.6/site-packages/spacy/cfile.pyx in spacy.cfile.CFile.read_into (spacy/cfile.cpp:1279)()
OSError:

Posted on 2017-02-08 14:09:07
For spacy 1.x, load the Google news vectors into gensim and convert to a new format (each line in the .txt contains a single vector: string, vec):

from gensim.models.word2vec import Word2Vec
from gensim.models import KeyedVectors
model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
model.wv.save_word2vec_format('googlenews.txt')

Remove the first line of the .txt:

tail -n +2 googlenews.txt > googlenews.new && mv -f googlenews.new googlenews.txt

Compress the txt as .bz2:

bzip2 googlenews.txt

Create a spacy-compatible binary file:

spacy.vocab.write_binary_vectors('googlenews.txt.bz2','googlenews.bin')

Move googlenews.bin to /lib/python/site-packages/spacy/data/en_google-1.0.0/vocab/googlenews.bin in your environment.
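The tail -n +2 and bzip2 steps above can also be done from Python with just the standard library, which helps on platforms without those tools. A sketch using a small hypothetical file in place of googlenews.txt:

```python
import bz2

# Toy stand-in for googlenews.txt: a header line, then word-vector lines.
with open("googlenews_toy.txt", "w", encoding="utf-8") as f:
    f.write("2 3\n")                 # the header line that must be removed
    f.write("apple 0.1 0.2 0.3\n")
    f.write("banana 0.4 0.5 0.6\n")

# Equivalent of: tail -n +2 googlenews.txt  (drop the first line)
with open("googlenews_toy.txt", encoding="utf-8") as f:
    lines = f.readlines()[1:]

# Equivalent of: bzip2 googlenews.txt  (write the stripped text as .bz2)
with bz2.open("googlenews_toy.txt.bz2", "wt", encoding="utf-8") as f:
    f.writelines(lines)

# Verify the round trip: the compressed file has no header line.
with bz2.open("googlenews_toy.txt.bz2", "rt", encoding="utf-8") as f:
    stripped = f.read()
```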
Then load the word vectors:

import spacy
nlp = spacy.load('en',vectors='en_google')

Or load them later:

nlp.vocab.load_vectors_from_bin_loc('googlenews.bin')

Posted on 2018-04-29 20:48:11
I know that this question has already been answered, but I am going to offer a simpler solution. This solution will load the google news vectors into a blank spacy nlp object.
import gensim
import spacy

# Path to google news vectors
google_news_path = "path\to\google\news\\GoogleNews-vectors-negative300.bin.gz"

# Load google news vecs in gensim
model = gensim.models.KeyedVectors.load_word2vec_format(google_news_path, binary=True)

# Init blank english spacy nlp object
nlp = spacy.blank('en')

# Loop through range of all indexes, get words associated with each index.
# The words in the keys list will correspond to the order of the google embed matrix
keys = []
for idx in range(3000000):
    keys.append(model.index2word[idx])

# Set the vectors for our nlp object to the google news vectors
nlp.vocab.vectors = spacy.vocab.Vectors(data=model.syn0, keys=keys)

>>> nlp.vocab.vectors.shape
(3000000, 300)

Posted on 2018-10-03 14:10:52
It is much easier to use the gensim api for downloading the word2vec compressed model; it will be stored in /home/"your_username"/gensim-data/word2vec-google-news-300/. Load the vectors and play ball. I have 16 GB of RAM, which is sufficient to handle the model.
import gensim.downloader as api

model = api.load("word2vec-google-news-300")  # download the model and return as object ready for use
word_vectors = model.wv  # load the vectors from the model

https://stackoverflow.com/questions/42094180
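What a most_similar-style query does with those vectors can be illustrated with a small pure-Python cosine-similarity sketch (hypothetical toy vectors, not the downloaded model):

```python
import math

# Toy stand-in for the loaded word vectors: word -> vector.
word_vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.1],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def most_similar(word, vecs, topn=2):
    """Rank the other words by cosine similarity, most similar first."""
    target = vecs[word]
    scores = [(other, cosine(target, v)) for other, v in vecs.items() if other != word]
    return sorted(scores, key=lambda t: t[1], reverse=True)[:topn]

ranking = most_similar("king", word_vectors)
# "queen" points in nearly the same direction as "king", "apple" does not.
```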