I need to do remote online prediction with the TensorFlow Object Detection API, using Google AI Platform. When I run an online prediction against an object detection model on AI Platform, I get an error similar to:

HttpError 400 Tensor name: num_proposals has inconsistent batch size: 1 expecting: 49152

When I run the prediction locally (e.g. result = model(image)), I get the expected results.

This error occurs with a variety of object detection models, including Mask R-CNN and MobileNet. It happens both with object detection models I trained myself and with models loaded directly from the Object Detection Model Zoo (v2). Using the same code, I get successful results from a model deployed on AI Platform that is not an object detection model.
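For reference, the working local call expects a batched uint8 image tensor matching the serving signature. A minimal sketch of preparing such an input (the image size is a placeholder, and the TensorFlow lines are shown as comments since they need the exported SavedModel):

```python
import numpy as np

# The serving signature expects a uint8 image tensor of shape (1, height, width, 3).
height, width = 100, 100
image = np.zeros((height, width, 3), dtype=np.uint8)
batched = image[np.newaxis, ...]  # add the leading batch dimension

# Local prediction (requires TensorFlow and the exported model directory):
# import tensorflow as tf
# model = tf.saved_model.load(MODEL_DIR)  # MODEL_DIR is a placeholder
# result = model(batched)                 # this works locally, as described above

print(batched.shape, batched.dtype)
```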
Signature information

The model's input signature-def appears correct:
!saved_model_cli show --dir {MODEL_DIR_GS}
!saved_model_cli show --dir {MODEL_DIR_GS} --tag_set serve
!saved_model_cli show --dir {MODEL_DIR_GS} --tag_set serve --signature_def serving_default

gives:
The given SavedModel contains the following tag-sets:
serve
The given SavedModel MetaGraphDef contains SignatureDefs with the following keys:
SignatureDef key: "__saved_model_init_op"
SignatureDef key: "serving_default"
The given SavedModel SignatureDef contains the following input(s):
inputs['input_tensor'] tensor_info:
dtype: DT_UINT8
shape: (1, -1, -1, 3)
name: serving_default_input_tensor:0
The given SavedModel SignatureDef contains the following output(s):
outputs['anchors'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 4)
name: StatefulPartitionedCall:0
outputs['box_classifier_features'] tensor_info:
dtype: DT_FLOAT
shape: (300, 9, 9, 1536)
name: StatefulPartitionedCall:1
outputs['class_predictions_with_background'] tensor_info:
dtype: DT_FLOAT
shape: (300, 2)
name: StatefulPartitionedCall:2
outputs['detection_anchor_indices'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100)
name: StatefulPartitionedCall:3
outputs['detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100, 4)
name: StatefulPartitionedCall:4
outputs['detection_classes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100)
name: StatefulPartitionedCall:5
outputs['detection_masks'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100, 33, 33)
name: StatefulPartitionedCall:6
outputs['detection_multiclass_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100, 2)
name: StatefulPartitionedCall:7
outputs['detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1, 100)
name: StatefulPartitionedCall:8
outputs['final_anchors'] tensor_info:
dtype: DT_FLOAT
shape: (1, 300, 4)
name: StatefulPartitionedCall:9
outputs['image_shape'] tensor_info:
dtype: DT_FLOAT
shape: (4)
name: StatefulPartitionedCall:10
outputs['mask_predictions'] tensor_info:
dtype: DT_FLOAT
shape: (100, 1, 33, 33)
name: StatefulPartitionedCall:11
outputs['num_detections'] tensor_info:
dtype: DT_FLOAT
shape: (1)
name: StatefulPartitionedCall:12
outputs['num_proposals'] tensor_info:
dtype: DT_FLOAT
shape: (1)
name: StatefulPartitionedCall:13
outputs['proposal_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 300, 4)
name: StatefulPartitionedCall:14
outputs['proposal_boxes_normalized'] tensor_info:
dtype: DT_FLOAT
shape: (1, 300, 4)
name: StatefulPartitionedCall:15
outputs['raw_detection_boxes'] tensor_info:
dtype: DT_FLOAT
shape: (1, 300, 4)
name: StatefulPartitionedCall:16
outputs['raw_detection_scores'] tensor_info:
dtype: DT_FLOAT
shape: (1, 300, 2)
name: StatefulPartitionedCall:17
outputs['refined_box_encodings'] tensor_info:
dtype: DT_FLOAT
shape: (300, 1, 4)
name: StatefulPartitionedCall:18
outputs['rpn_box_encodings'] tensor_info:
dtype: DT_FLOAT
shape: (1, 12288, 4)
name: StatefulPartitionedCall:19
outputs['rpn_objectness_predictions_with_background'] tensor_info:
dtype: DT_FLOAT
shape: (1, 12288, 2)
name: StatefulPartitionedCall:20
Method name is: tensorflow/serving/predict

Steps to reproduce
!gcloud config set project $PROJECT
!gcloud beta ai-platform models create $MODEL --regions=us-central1

%%bash -s $PROJECT $MODEL $VERSION $MODEL_DIR_GS
gcloud ai-platform versions create $3 \
--project $1 \
--model $2 \
--origin $4 \
--runtime-version=2.1 \
--framework=tensorflow \
--python-version=3.7 \
--machine-type=n1-standard-2 \
--accelerator type=nvidia-tesla-t4

import googleapiclient.discovery
import numpy as np
import socket
img_np = np.zeros((100, 100,3), dtype=np.uint8)
img_list = img_np.tolist()
instances = [img_list]
socket.setdefaulttimeout(600) # set timeout to 10 minutes
service = googleapiclient.discovery.build('ml', 'v1', cache_discovery=False, )
model_version_string = 'projects/{}/models/{}/versions/{}'.format(PROJECT, MODEL, VERSION)
print(model_version_string)
response = service.projects().predict(
name=model_version_string,
body={'instances': instances}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(f'Success. # keys={response.keys()}')

I get an error similar to the following:
HttpError: <HttpError 400 when requesting
https://ml.googleapis.com/v1/projects/gcp_project/models/error_demo/versions/mobilenet:predict?alt=json
returned "{ "error": "Tensor name: refined_box_encodings has inconsistent batch size: 300
expecting: 1"}}>

More information
If I change the instances variable from instances = [img_list] to instances = [{'input_tensor': img_list}], the code fails in the same way. If I deliberately pass an input with the wrong shape (e.g. (1, 100, 100, 2) or (100, 100, 2)), I get a response saying the input shape is incorrect: invalidArgument -- The value for one of fields in the request body was invalid. If I run the process with gcloud:

import json
x = {"instances":[
[
[
[0, 0, 0],
[0, 0, 0]
],
[
[0, 0, 0],
[0, 0, 0]
]
]
]
}
with open('test.json', 'w') as f:
json.dump(x, f)
!gcloud ai-platform predict --model $MODEL --json-request=./test.json

I get an INVALID_ARGUMENT error:
ERROR: (gcloud.ai-platform.predict) HTTP request failed. Response: {
"error": {
"code": 400,
"message": "{ \"error\": \"Tensor name: anchors has inconsistent batch size: 49152 expecting: 1\" }",
"status": "INVALID_ARGUMENT"
}
}

If I submit the same JSON data through the Version Details screen (or as described in the AI Platform prediction JSON documentation for Method: projects.predict), I get the same error. I enabled logging (both regular and console logging), but it provides no additional information.
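The error makes more sense in light of the signature-def above: several outputs do not share the model's batch dimension of 1 (e.g. box_classifier_features is (300, 9, 9, 1536) and mask_predictions is (100, 1, 33, 33)). A quick way to see which outputs a per-output batch-size check would flag is to scan the leading dimensions; a minimal sketch using a few shapes copied from the signature-def above:

```python
# Leading dimensions of a few outputs, copied from the signature-def above.
# -1 denotes an unknown dimension in the SavedModel signature.
output_shapes = {
    "anchors": (-1, 4),
    "box_classifier_features": (300, 9, 9, 1536),
    "detection_boxes": (1, 100, 4),
    "mask_predictions": (100, 1, 33, 33),
    "num_proposals": (1,),
    "refined_box_encodings": (300, 1, 4),
}

batch_size = 1  # one image per request

# Outputs whose leading dimension is neither the batch size nor unknown (-1)
# cannot be split per-instance, hence "inconsistent batch size".
inconsistent = sorted(
    name for name, shape in output_shapes.items()
    if shape[0] not in (batch_size, -1)
)
print(inconsistent)
```

Which tensor name appears in a given 400 response seems to vary with the model, but the underlying cause is the same: the outputs mix several different leading dimensions.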
I have put the details needed to reproduce this in a Colab.

Thanks in advance. I have spent a day working on this and I am really stuck!
Posted on 2021-05-04 12:29:23
According to https://github.com/tensorflow/serving/issues/1047, when a request uses the instances key, TensorFlow Serving checks that all components of the output have the same batch size. The workaround is to use the inputs key instead.

For example:
inputs = [img_list]
...
response = service.projects().predict(
name=model_version_string,
body=&#123;'inputs': inputs&#125;
).execute()

https://stackoverflow.com/questions/64708006
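To make the fix concrete, here is a hedged sketch of the two request bodies side by side (the predict call is commented out since it needs googleapiclient and GCP credentials, and model_version_string is as defined in the question). Note that in TensorFlow Serving's REST API the inputs (columnar) format also changes the response key from predictions to outputs, so result parsing may need a corresponding tweak:

```python
# A tiny 2x2 black RGB image as nested lists, as in the question's test.json.
img_list = [[[0, 0, 0], [0, 0, 0]],
            [[0, 0, 0], [0, 0, 0]]]

# Row format: triggers the per-output batch-size check and fails for
# object detection models whose outputs mix leading dimensions.
body_instances = {'instances': [img_list]}

# Columnar format: avoids that check, per tensorflow/serving issue #1047.
body_inputs = {'inputs': [img_list]}

# The actual call (requires googleapiclient and credentials):
# response = service.projects().predict(
#     name=model_version_string,  # 'projects/{P}/models/{M}/versions/{V}'
#     body=body_inputs,
# ).execute()
# detections = response['outputs']  # columnar responses use 'outputs';
#                                   # row-format responses use 'predictions'

print(sorted(body_inputs))
```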