I want to compute the percentage of my dataset's words that are present in a tensorflow-hub model's vocabulary (e.g. ELMo or the Universal Sentence Encoder). For a local model such as GloVe, I used a naive approach: read the local model, convert it to a set, and compute the percentage like this:
f = open('../glove.6B.100d.txt', encoding="utf8")
# Read all the words into a list
...
intersect_words = set(dataset_words).intersection(glove_words)
percentile = len(intersect_words) / len(dataset_words) * 100

Is there any way to do the same for tensorflow-hub models?
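The coverage computation above can be packaged as a small helper that works regardless of where the vocabulary comes from. A minimal sketch, using a made-up toy vocabulary and dataset (note it deduplicates the dataset words before dividing):

```python
def vocab_coverage(dataset_words, vocab_words):
    """Percentage of unique dataset words that appear in the vocabulary."""
    dataset_set = set(dataset_words)
    hits = dataset_set.intersection(vocab_words)
    return len(hits) / len(dataset_set) * 100

# Toy example (made-up data): "the" and "sat" are in the vocab, "dog" is not.
vocab = {"the", "cat", "sat", "on", "mat"}
coverage = vocab_coverage(["the", "dog", "sat"], vocab)
print(f"{coverage:.1f}%")  # → 66.7%
```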
Posted on 2021-08-04 09:04:13
For some models, the vocabulary is serialized inside the SavedModel protocol buffer (this is the case for USE and ELMo), so you have to locate the vocabulary in the SavedModel manually and extract it (I extracted it using the logic from here):
import tensorflow_hub as hub
from tensorflow.python.saved_model.loader_impl import parse_saved_model
# This caches the model at `model_path`.
hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
model_path = '/tmp/tfhub_modules/063d866c06683311b44b4992fd46003be952409c/'
saved_model = parse_saved_model(model_path)
# The location of the tensor holding the vocab is model-specific.
graph = saved_model.meta_graphs[0].graph_def
function_ = graph.library.function
embedding_node = function_[5].node_def[1] # Node name is "Embedding_words".
words_tensor = embedding_node.attr.get("value").tensor
word_list = [s.decode('utf-8') for s in words_tensor.string_val]
word_list[100:105] # ['best', ',▁but', 'no', 'any', 'more']

For other models such as google/Wiki-words-500/2 we are luckier: the vocabulary has already been exported to the assets/ directory:
hub.load("https://tfhub.dev/google/Wiki-words-500/2")
!head /tmp/tfhub_modules/bf115a5fe517f019bebae05b433eaeee6415f5bf/assets/tokens.txt -n 40000 | tail
# Antisense
# Antiseptic
# Antiseptics

https://stackoverflow.com/questions/68535744
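Once a model ships its vocabulary as a one-token-per-line assets/tokens.txt file, the coverage check from the question applies directly. A minimal sketch; the directory, file contents, and dataset below are made-up stand-ins (for the real model, point `tokens_path` at the cached module's assets/ directory):

```python
import os
import tempfile

# Simulate the assets/tokens.txt layout with a toy file (made-up tokens).
tmp_dir = tempfile.mkdtemp()
tokens_path = os.path.join(tmp_dir, "tokens.txt")
with open(tokens_path, "w", encoding="utf8") as f:
    f.write("\n".join(["Antisense", "Antiseptic", "Antiseptics"]))

# Read one token per line into a set.
with open(tokens_path, encoding="utf8") as f:
    vocab = set(line.rstrip("\n") for line in f)

dataset_words = ["Antiseptic", "Banana"]  # made-up dataset
hits = set(dataset_words).intersection(vocab)
percentile = len(hits) / len(set(dataset_words)) * 100
print(percentile)  # → 50.0
```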