I have a trained YOLO model in model.pt format, and I am able to upload the model to create an artifact in MLflow. However, when I look at the generated YAML file, it lists some dependencies, and I am sure I am not loading the model the right way:

channels:
dependencies:

Could anyone tell me how to take a pre-trained model, push it to MLflow to create an artifact, and then containerize it with its dependencies (Docker) so it can be pushed to AWS ECR?
Posted on 2022-05-07 17:46:13
Chassis (https://www.chassis.ml) is an open-source project that does what you are asking for.

You give Chassis your PyTorch model file. It wraps it in an MLflow model, adds a gRPC server, builds a container, and pushes everything to Docker Hub for you. You can then pull the image from Docker Hub and push it to ECR yourself.

There are PyTorch examples at https://github.com/modzy/chassis/tree/main/chassisml_sdk/examples/pytorch. YOLO is not among them, but the examples include a quickstart notebook that you can adapt to your needs.

You will need access to a Chassis server. You can set one up locally by following the instructions on the Chassis website (https://chassis.ml/getting-started/deploy-manual/), or use the publicly hosted one by signing up at https://chassis.modzy.com.
The basic code looks like this:
#import modules
import chassisml
import pickle
import cv2
import torch
import getpass
import numpy as np
import torchvision.models as models
from torchvision import transforms
#provide docker credentials
dockerhub_user = getpass.getpass('docker hub username')
dockerhub_pass = getpass.getpass('docker hub password')
#pull model and define pre / post processing of data
model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)
model.eval()
COCO_INSTANCE_CATEGORY_NAMES = [
    '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
    'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
    'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
    'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
    'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
    'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
    'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
    'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
    'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
    'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
    'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
    'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]
transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.ToTensor(),
])
device = 'cpu'
def preprocess(input_bytes):
    # decode the raw request bytes into an image and batch it as a tensor
    decoded = cv2.imdecode(np.frombuffer(input_bytes, np.uint8), -1)
    img_t = transform(decoded)
    batch_t = torch.unsqueeze(img_t, 0).to(device)
    return batch_t
def postprocess(num_detections, predictions):
    # map each detection to its bounding box, class name, and score
    inference_result = {
        "detections": [
            {
                "xMin": predictions["boxes"][i].detach().cpu().numpy().tolist()[0],
                "xMax": predictions["boxes"][i].detach().cpu().numpy().tolist()[2],
                "yMin": predictions["boxes"][i].detach().cpu().numpy().tolist()[1],
                "yMax": predictions["boxes"][i].detach().cpu().numpy().tolist()[3],
                "class": COCO_INSTANCE_CATEGORY_NAMES[predictions["labels"][i].detach().cpu().item()],
                "classProbability": predictions["scores"][i].detach().cpu().item(),
            } for i in range(num_detections)
        ]
    }
    structured_output = {
        "data": {
            "result": inference_result,
            "explanation": None,
            "drift": None,
        }
    }
    return structured_output
def process(input_bytes):
    # preprocess
    batch_t = preprocess(input_bytes)
    # run inference
    predictions = model(batch_t)[0]
    num_detections = len(predictions["boxes"])
    # postprocess
    structured_output = postprocess(num_detections, predictions)
    return structured_output
#create chassis client
chassis_client = chassisml.ChassisClient("<chassis_server_url>:<chassis service port>")
#convert pytorch model to mlflow model
# create Chassis model
chassis_model = chassis_client.create_model(process_fn=process)
# test Chassis model (can pass filepath, bufferedreader, bytes, or text here):
sample_filepath = './data/airplane.jpg'
results = chassis_model.test(sample_filepath)
print(results)
# have chassis containerize model
response = chassis_model.publish(
model_name="PyTorch Faster R-CNN Object Detection",
model_version="0.0.2",
registry_user=dockerhub_user,
registry_pass=dockerhub_pass
)
# wait for packaging to complete.
job_id = response.get('job_id')
final_status = chassis_client.block_until_complete(job_id)

Posted on 2022-05-16 07:30:55
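For the last step, which the answer leaves to you (pulling the image from Docker Hub and pushing it to ECR), a sketch of the usual commands follows. The repository name, image tag, region, and account ID are all placeholders you would replace with your own values; the Docker Hub image name depends on what Chassis published for you.

```shell
# Authenticate Docker against your ECR registry (region and account ID are placeholders).
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Create the target repository if it does not exist yet (placeholder name).
aws ecr create-repository --repository-name yolo-model

# Pull the image Chassis pushed to Docker Hub, retag it for ECR, and push.
docker pull <dockerhub_user>/<chassis_image_name>:0.0.2
docker tag <dockerhub_user>/<chassis_image_name>:0.0.2 \
  123456789012.dkr.ecr.us-east-1.amazonaws.com/yolo-model:0.0.2
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/yolo-model:0.0.2
```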
When creating the model, since you already know its dependencies, why not pass the list of libraries your model needs manually?
conda_env = {
    "channels": ["conda-forge"],
    "dependencies": [
        "python=<your-python-version>",
        "pip",
        {
            "pip": [
                "<your-pip-dependency>==<version>"
            ],
        },
    ],
    "name": "mlflow-env"
}

conda_env should then be passed as an argument when you log the model:
mlflow.pytorch.log_model(
    pytorch_model=model,
    artifact_path="",
    conda_env=conda_env
)

https://stackoverflow.com/questions/69203653
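To make the snippet above concrete, here is a minimal sketch that fills in the placeholders programmatically: the Python version is read from the running interpreter, and the pinned `torch` entry stands in for whatever pip dependencies your model actually needs (both are assumptions, not requirements of MLflow). The resulting dict is what you would pass as `conda_env` to `mlflow.pytorch.log_model`.

```python
import sys

# Derive the interpreter version so the logged environment matches
# the one the model was saved with.
python_version = "{}.{}.{}".format(*sys.version_info[:3])

conda_env = {
    "channels": ["conda-forge"],
    "dependencies": [
        f"python={python_version}",
        "pip",
        {
            "pip": [
                # Placeholder: list the exact pip packages your model needs,
                # e.g. the torch version you trained with.
                "torch",
            ],
        },
    ],
    "name": "mlflow-env",
}

print(conda_env["name"])
# Then log the model with this environment, e.g.:
# mlflow.pytorch.log_model(pytorch_model=model, artifact_path="model",
#                          conda_env=conda_env)
```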