
How do I get embeddings (instead of features) from Hugging Face on SageMaker?

Stack Overflow user
Asked on 2022-02-18 19:32:28
Answers: 2 · Views: 548 · Followers: 0 · Votes: 0

I have a text classifier model that depends on the embeddings of a particular Hugging Face model.

Code language: Python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')
encodings = model.encode("guckst du bundesliga")

The result has shape (768,).

tl;dr: Is there a simple, straightforward way to do this on SageMaker (ideally using the images it provides)?

Context: looking at the documentation for this Hugging Face model, the only SageMaker option I see is feature extraction.

Code language: Python
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()
# Hub Model configuration. https://huggingface.co/models
hub = {
    'HF_MODEL_ID':'T-Systems-onsite/cross-en-de-roberta-sentence-transformer',
    'HF_TASK':'feature-extraction'
}

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    transformers_version='4.6.1',
    pytorch_version='1.7.1',
    py_version='py36',
    env=hub,
    role=role, 
)

# deploy model to SageMaker Inference
predictor = huggingface_model.deploy(
    initial_instance_count=1, # number of instances
    instance_type='ml.m5.xlarge' # ec2 instance type
)

predictor.predict({
    'inputs': "Today is a sunny day and I'll get some ice cream."
})

This gives me the features, with shape (9, 768): one 768-dimensional vector per token.
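
In the single-sentence case the two outputs are easy to reconcile: with one unpadded input the attention mask is all ones, so a plain mean over the token axis equals the masked mean pooling that sentence-transformers applies. A minimal client-side sketch (assuming the (9, 768) output above):

Code language: Python
import numpy as np

# Hedged sketch: valid only for a single, unpadded sentence, where a
# plain mean over tokens matches attention-mask-weighted mean pooling.
features = np.squeeze(np.array(predictor.predict({
    'inputs': "Today is a sunny day and I'll get some ice cream."
})))                                        # (9, 768): one vector per token
sentence_embedding = features.mean(axis=0)  # (768,)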

More generally, the connection between the two can be seen in another code example.

Code language: Python
from transformers import AutoTokenizer, AutoModel
import torch


#Mean Pooling - Take attention mask into account for correct averaging
def embeddings(feature_envelope, attention_mask):
    features = feature_envelope[0] #First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(features.size()).float()
    sum_embeddings = torch.sum(features * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask

#Sentences we want sentence embeddings for
sentences = ['guckst du bundesliga']

#Load AutoModel from huggingface model repository
tokenizer = AutoTokenizer.from_pretrained('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')
model = AutoModel.from_pretrained('T-Systems-onsite/cross-en-de-roberta-sentence-transformer')

#Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=128, return_tensors='pt')

#Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
#     print(model_output)

#Perform pooling. In this case, mean pooling
sentence_embeddings = embeddings(model_output, encoded_input['attention_mask'])
sentence_embeddings.shape, sentence_embeddings

But, as you can see, the sentence embedding cannot be derived from the returned features alone; the pooling also needs the attention mask.
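
One workaround, then, is to recreate the attention mask client-side with the same tokenizer and push the endpoint's raw features through the embeddings() function above. A hedged sketch (it assumes the endpoint tokenizes exactly as the local tokenizer does, and the shape handling is a guess covering both (tokens, hidden) and (1, tokens, hidden) responses):

Code language: Python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    'T-Systems-onsite/cross-en-de-roberta-sentence-transformer')

text = "Today is a sunny day and I'll get some ice cream."
encoded = tokenizer([text], padding=True, truncation=True,
                    max_length=128, return_tensors='pt')

# Raw token features returned by the feature-extraction endpoint
features = torch.tensor(predictor.predict({'inputs': text}))
if features.dim() == 2:      # (tokens, hidden) -> add a batch dimension
    features = features.unsqueeze(0)

# Reuse the mean-pooling helper (embeddings) from the snippet above
sentence_embedding = embeddings((features,), encoded['attention_mask'])  # (1, 768)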


2 Answers

Stack Overflow user

Answered on 2022-03-02 23:35:44

You could look into supplying your own "user-defined code" via an inference.py file:

https://huggingface.co/docs/sagemaker/inference#user-defined-code-and-modules
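
For reference, a minimal sketch of what such an inference.py might look like (untested; model_fn and predict_fn are the override hooks documented by the SageMaker Hugging Face Inference Toolkit, and it assumes sentence-transformers is available in the container, e.g. via a code/requirements.txt):

Code language: Python
# code/inference.py -- a hedged sketch, not verified on a live endpoint
from sentence_transformers import SentenceTransformer

def model_fn(model_dir):
    # model_dir holds the unpacked model.tar.gz; loading by hub id
    # instead also works if the container has network access
    return SentenceTransformer(model_dir)

def predict_fn(data, model):
    sentences = data.get("inputs")
    # encode() applies the model's own pooling and returns 768-dim vectors
    embeddings = model.encode(sentences)
    return {"embeddings": embeddings.tolist()}

You would package this under code/ inside the model.tar.gz (alongside a requirements.txt pinning sentence-transformers), or pass entry_point and source_dir when constructing HuggingFaceModel.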

Votes: 1

Stack Overflow user

Answered on 2022-10-14 23:31:58

I'm not a Python person or an ML person, so take this with a grain of salt. I ran into the same problem when deploying an inference endpoint. The excerpt below pulls out the data I believe you are looking for.

Code language: Python
# mean_pooling, model_output, encoded_input and sentences are assumed to be
# defined as in the question's last snippet (mean_pooling corresponding to
# the embeddings() helper there). sentence_embeddings is the (n, 768) tensor
# the question asks for; the loop below just demonstrates using it.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

cos = torch.nn.CosineSimilarity(dim=1, eps=1e-6)

# Score every pair of sentences by cosine similarity of their embeddings
all_sentence_combinations = []
for i in range(len(sentence_embeddings) - 1):
    for j in range(i + 1, len(sentence_embeddings)):
        opt = cos(sentence_embeddings[i].unsqueeze(0), sentence_embeddings[j].unsqueeze(0))
        all_sentence_combinations.append([opt.item(), i, j])

arr = []
for score, i, j in all_sentence_combinations:
    arr.append([sentences[i], sentences[j], score])
    print("{} \t {} \t {:.4f}".format(sentences[i], sentences[j], score))
Votes: 0
Original page content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/71178934
