
TensorFlow Serving client issue

Stack Overflow user
Asked on 2018-03-08 13:45:31

I have trained and saved a TF model using Keras, and was able to convert the Keras model (HDF5) for TensorFlow Serving.

Setup details for the environment where the model was trained and saved: Python 2.7.5, Keras 2.1.4 (TensorFlow backend), TensorFlow (CPU) 1.5.0.

When I try to run the client with sample_client.py against the Docker image I have for TensorFlow Serving, it fails with the error below. Can anyone suggest how to proceed?

/root/.local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
(60000, 28, 28)
6
Traceback (most recent call last):
  File "sample_client.py", line 62, in <module>
    main()
  File "sample_client.py", line 54, in main
    result = stub.Predict(request, request_timeout)
  File "/usr/lib64/python2.7/site-packages/grpc/beta/_client_adaptations.py", line 309, in __call__
    self._request_serializer, self._response_deserializer)
  File "/usr/lib64/python2.7/site-packages/grpc/beta/_client_adaptations.py", line 195, in _blocking_unary_unary
    raise _abortion_error(rpc_error_call)
grpc.framework.interfaces.face.face.AbortionError: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="NodeDef mentions attr 'dilations' not in Op<name=Conv2D; signature=input:T, filter:T -> output:T; attr=T:type,allowed=[DT_HALF, DT_FLOAT]; attr=strides:list(int); attr=use_cudnn_on_gpu:bool,default=true; attr=padding:string,allowed=["SAME", "VALID"]; attr=data_format:string,default="NHWC",allowed=["NHWC", "NCHW"]>; NodeDef: conv2d_1/convolution = Conv2D[T=DT_FLOAT, _output_shapes=[[?,26,26,32]], data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_conv2d_1_input_0_0, conv2d_1/kernel/read). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
  [[Node: conv2d_1/convolution = Conv2D[T=DT_FLOAT, _output_shapes=[[?,26,26,32]], data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/cpu:0"](_arg_conv2d_1_input_0_0, conv2d_1/kernel/read)]]")

sample_client.py:
import numpy
from keras.datasets import mnist
from grpc.beta import implementations
import tensorflow as tf
from predict_client import predict_pb2
from predict_client import prediction_service_pb2
tf.app.flags.DEFINE_string("host", "0.0.0.0", "gRPC server host")
tf.app.flags.DEFINE_integer("port", 9000, "gRPC server port")
tf.app.flags.DEFINE_string("model_name", "mnist", "TensorFlow model name")
tf.app.flags.DEFINE_integer("model_version", -1, "TensorFlow model version")
tf.app.flags.DEFINE_float("request_timeout", 10.0, "Timeout of gRPC request")
FLAGS = tf.app.flags.FLAGS

(x_train, y_train), (x_test, y_test) = mnist.load_data()

print(numpy.shape(x_train))
idx = 2324
img = x_train[idx,:,:]
label = y_train[idx]
img = numpy.resize(img, (1, 28, 28, 1))
print(label)


def main():
  host = FLAGS.host
  port = FLAGS.port
  model_name = FLAGS.model_name
  model_version = FLAGS.model_version
  request_timeout = FLAGS.request_timeout

  # Generate inference data
  keys = numpy.asarray([1, 2, 3])
  keys_tensor_proto = tf.contrib.util.make_tensor_proto(keys, dtype=tf.int32)
  features_tensor_proto = tf.contrib.util.make_tensor_proto(img, dtype=tf.float32)

  # Create gRPC client and request
  channel = implementations.insecure_channel(host, port)
  stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
  request = predict_pb2.PredictRequest()
  request.model_spec.name = model_name
  if model_version > 0:
    request.model_spec.version.value = model_version
  request.inputs['inputs'].CopyFrom(features_tensor_proto)
  request.model_spec.signature_name = 'predict'
  #request.inputs['features'].CopyFrom(features_tensor_proto)

  # Send request
  result = stub.Predict(request, request_timeout)
  response = numpy.array(result.outputs['outputs'].float_val)
  prediction = numpy.argmax(response)

  print(prediction)


if __name__ == '__main__':
  main()

Thanks in advance.

Regards, Latha

1 Answer

Stack Overflow user

Answered on 2018-03-28 18:57:23

The error occurs because the TensorFlow version inside tensorflow_model_server is older than 1.5, so it does not know how to instantiate a Conv2D op that carries the newer `dilations` attribute.
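This is the server-side op-registry check the message itself points at ("Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary"): the serving binary only recognizes the attrs compiled into its Conv2D op, and a graph exported by TF 1.5 adds `dilations` to every Conv2D node. A plain-Python sketch of that check (the attr set is copied from the `Op<...>` signature in the error; the function is illustrative, not TensorFlow's actual code):

```python
# Illustrative sketch only -- not TensorFlow source. Models the registry
# validation that rejects a NodeDef carrying attrs the binary was not
# compiled with.

# Conv2D attrs known to a pre-1.5 server, per the Op<...> signature in the error.
REGISTERED_CONV2D_ATTRS = {"T", "strides", "use_cudnn_on_gpu",
                           "padding", "data_format"}

def validate_node_def(attrs):
    """Raise if the NodeDef mentions attrs unknown to the serving binary."""
    unknown = sorted(set(attrs) - REGISTERED_CONV2D_ATTRS)
    if unknown:
        raise ValueError(
            "NodeDef mentions attr '%s' not in Op<name=Conv2D>" % unknown[0])

# A graph exported by TF 1.5 attaches 'dilations' to every Conv2D node:
try:
    validate_node_def(["T", "strides", "padding", "use_cudnn_on_gpu",
                       "data_format", "dilations"])
except ValueError as err:
    print(err)  # NodeDef mentions attr 'dilations' not in Op<name=Conv2D>
```

The same shape of failure appears whenever an exporter is newer than the server, regardless of which op or attr is involved.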

You can either rebuild and re-export the model with a TensorFlow version old enough to match the server, or build tensorflow_model_server against a newer TensorFlow.
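For the first option, one low-friction approach is to pin the training environment to the TensorFlow version the server was built against before re-exporting. The exact version is an assumption here (the server predates 1.5, so something in the 1.4.x line is a plausible target), e.g. in a requirements file:

```text
# Hypothetical pins -- match these to the server's actual TF build.
tensorflow==1.4.1
keras==2.1.4
```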

For the second option, just follow the steps given here to build a Docker image from the latest TensorFlow Serving source: https://www.tensorflow.org/serving/docker

Original question:

https://stackoverflow.com/questions/49166240
