I'm trying to serve my TF model with TensorFlow Serving. Here is the input to my model:
raw_feature_spec = {
    'x': tf.io.VarLenFeature(tf.string),
    'y': tf.io.VarLenFeature(tf.string),
    'z': tf.io.FixedLenFeature([], tf.string)
}

TF Transform then converts these inputs into tensors of shape x: (None, 20, 100), y: (None, 20, 5), z: (None, 3), which the initial model (the one without the transform_func transform graph) accepts. Then I export my model:
import tensorflow as tf
import tensorflow_transform as tft

def make_serving_input_fn(tf_transform_output):
    raw_feature_spec = {
        'x': tf.io.VarLenFeature(tf.string),
        'y': tf.io.VarLenFeature(tf.string),
        'z': tf.io.FixedLenFeature([], tf.string)
    }
    def serving_input_fn():
        raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(raw_feature_spec)
        raw_features = raw_input_fn().features
        features = {
            'x': tf.sparse.to_dense(raw_features["x"]),
            'y': tf.sparse.to_dense(raw_features["y"]),
            'z': raw_features["z"]
        }
        # Apply the transform function that was used to generate the materialized data
        transformed_features = tf_transform_output.transform_raw_features(raw_features)
        return tf.estimator.export.ServingInputReceiver(transformed_features, features)
    return serving_input_fn

tf_transform_output = tft.TFTransformOutput('saved_transform_graph_folder')
estimator = tf.keras.estimator.model_to_estimator(keras_model_path='model_folder')
estimator.export_saved_model('OUTPUT_MODEL_NAME', make_serving_input_fn(tf_transform_output))

Here transform_func is the set of functions, contained in the tf_transform_output object, that maps the raw input tensors to the tensors the model needs. When I serve the exported model with the TFS image from Docker Hub and send an HTTP request to /model/metadata, I get:
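For intuition, tf.sparse.to_dense pads each ragged row out to the longest row in the batch with a fill value (the empty string for string tensors), which is why the serving signature ends up reporting shapes of [-1, -1]. Below is a minimal pure-Python sketch of that padding behavior; the helper name to_dense_rows is made up for illustration and is not a TensorFlow API:

```python
def to_dense_rows(rows, fill=""):
    """Pad a ragged batch of rows into a rectangular list-of-lists,
    mimicking what tf.sparse.to_dense does to a batched VarLenFeature."""
    width = max(len(r) for r in rows)
    return [r + [fill] * (width - len(r)) for r in rows]

batch = [["a", "b", "c"], ["d"]]
print(to_dense_rows(batch))  # [['a', 'b', 'c'], ['d', '', '']]
```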
{
"model_spec": {
"name": "newModel",
"signature_name": "",
"version": "1579786077"
},
"metadata": {
"signature_def": {
"signature_def": {
"serving_default": {
"inputs": {
"x": {
"dtype": "DT_STRING",
"tensor_shape": {
"dim": [
{
"size": "-1",
"name": ""
},
{
"size": "-1",
"name": ""
}
],
"unknown_rank": false
},
"name": "SparseToDense_1:0"
},
"y": {
"dtype": "DT_STRING",
"tensor_shape": {
"dim": [
{
"size": "-1",
"name": ""
},
{
"size": "-1",
"name": ""
}
],
"unknown_rank": false
},
"name": "SparseToDense:0"
},
"z": {
"dtype": "DT_STRING",
"tensor_shape": {
"dim": [
{
"size": "-1",
"name": ""
}
],
"unknown_rank": false
},
"name": "ParseExample/ParseExample:6"
}
},
"outputs": {
"main_output": {
"dtype": "DT_FLOAT",
"tensor_shape": {
"dim": [
{
"size": "-1",
"name": ""
},
{
"size": "20",
"name": ""
}
],
"unknown_rank": false
},
"name": "main_output/Softmax:0"
}
},
"method_name": "tensorflow/serving/predict"
}
}
}
}

So the inputs look correct (note that on export I used tf.sparse.to_dense for the VarLenFeature case). However, when I send an HTTP request to /model:predict:
{
"instances":
[
{
"x": ["text","text","text","text","text","text"],
"y": ["test","test","test","test","test","test"],
"z": "str"
}
]
}

I get the error:
{
"error": "You must feed a value for placeholder tensor \'input_example_tensor\' with dtype string and shape [?]\n\t [[{{node input_example_tensor}}]]"
}

Does anyone know what I'm doing wrong, or how to set up variable-length inputs correctly? I want tensor-shaped inputs, exactly as shown in the metadata above, so I can call the API with raw tensors instead of serialized Example protos. TF version: 2.0; TF Serving and TF Transform: latest releases.
P.S. I also tried exporting a model via a build_raw_serving_input_receiver_fn call using tf.keras.backend.placeholder, so that serving_input_fn contains no sparse-to-dense conversion, but the result was the same.
Posted on 2020-08-19 00:52:09
Your transformed_features still contains a placeholder named input_example_tensor. TensorFlow's sparse placeholders don't work here either.
I solved this by representing each sparse tensor in the receiver as three dense tensors (indices, values, and dense shape), and then reassembling them into sparse tensors in the features. For your case, you would define serving_input_fn as:
def serving_input_fn():
    # In TF 2.x, placeholders live under tf.compat.v1
    inputs = {
        "x_indices": tf.compat.v1.placeholder(tf.int64, [None, 2]),
        "x_vals": tf.compat.v1.placeholder(tf.string, [None]),  # SparseTensor values must be rank-1
        "x_shape": tf.compat.v1.placeholder(tf.int64, [2]),
        "y_indices": tf.compat.v1.placeholder(tf.int64, [None, 2]),
        "y_vals": tf.compat.v1.placeholder(tf.string, [None]),
        "y_shape": tf.compat.v1.placeholder(tf.int64, [2]),
        "z": tf.compat.v1.placeholder(tf.string, [None, 1])
    }
    fvs = {
        "x": tf.SparseTensor(
            inputs["x_indices"],
            inputs["x_vals"],
            inputs["x_shape"]
        ),
        "y": tf.SparseTensor(
            inputs["y_indices"],
            inputs["y_vals"],
            inputs["y_shape"]
        ),
        "z": inputs["z"]
    }
    return tf.estimator.export.ServingInputReceiver(fvs, inputs)

I'd also like to ask you a question: how did you manage to train a model with VarLenFeature inputs using TensorFlow Keras? Could you share that part? Currently I train this kind of model with TensorFlow Estimators.
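With that receiver, a REST client has to send the indices, values, and dense shape of each sparse feature explicitly. Here is a minimal pure-Python sketch of deriving the three COO components from a ragged batch of strings; the helper name sparse_components is hypothetical, not part of any library:

```python
def sparse_components(rows):
    """Convert a ragged batch into COO form (indices, values, dense_shape),
    matching what a rank-2 tf.SparseTensor expects."""
    indices, values = [], []
    for i, row in enumerate(rows):
        for j, v in enumerate(row):
            indices.append([i, j])
            values.append(v)
    dense_shape = [len(rows), max((len(r) for r in rows), default=0)]
    return indices, values, dense_shape

idx, vals, shape = sparse_components([["text", "text"], ["text"]])
print(shape)  # [2, 2]
```

These three lists would then be sent as x_indices, x_vals, and x_shape (and likewise for y) in the predict request body.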
https://stackoverflow.com/questions/59880686