I have trained a model using the WildML implementation here and deployed it on the Google Cloud platform. I am now trying to send a JSON prediction request to the model, but I get the following error:
Traceback (most recent call last):
File "C:/Users/XXX/PycharmProjects/CNN-Prediction/prediction.py", line 73, in <module>
print(predict_json(project, model, [json_request], version="TestV2"))
File "C:/Users/XXX/PycharmProjects/CNN-Prediction/prediction.py", line 63, in predict_json
raise RuntimeError(response['error'])
RuntimeError: Prediction failed: Error during model execution: AbortionError(code=StatusCode.INVALID_ARGUMENT, details="Shape [-1,11] has negative dimensions
[[Node: input_y = Placeholder[_output_shapes=[[-1,11]], dtype=DT_FLOAT, shape=[?,11], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]")

I am finding this error challenging to interpret, but my impression is that I am sending JSON data to my model while the model expects an array of integers, as can be seen in the TextCNN class below.
Question: how can I modify the code so that the JSON input request is converted into a format the model can operate on?
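For context, the predict_json helper that raises this error is presumably along the lines of Google's standard Cloud ML Engine online-prediction sample. Here is a minimal sketch of that flow, assuming the stock google-api-python-client usage; this is an assumption, not the asker's actual prediction.py:

# Minimal sketch of the usual Cloud ML Engine online-prediction helper,
# assuming the standard google-api-python-client sample code.
from googleapiclient import discovery

def predict_json(project, model, instances, version=None):
    """Send a list of JSON-serializable instances to a deployed model."""
    service = discovery.build('ml', 'v1')
    name = 'projects/{}/models/{}'.format(project, model)
    if version is not None:
        name += '/versions/{}'.format(version)
    response = service.projects().predict(
        name=name,
        body={'instances': instances}
    ).execute()
    if 'error' in response:
        raise RuntimeError(response['error'])
    return response['predictions']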
class TextCNN(object):
    """
    A CNN for text classification.
    Uses an embedding layer, followed by a convolutional, max-pooling and softmax layer.
    """
    # Constructor - sequence_length = no. of tokens per complaint, num_classes = no. of categories,
    # vocab_size = vocabulary size, embedding_size = dimensionality of the embeddings
    def __init__(
            self, sequence_length, num_classes, vocab_size,
            embedding_size, filter_sizes, num_filters, l2_reg_lambda=0.0):
        # Placeholders for input, output and dropout
        self.input_x = tf.placeholder(tf.int32, [None, sequence_length],
                                      name="input_x")  # NN interface to take in complaints
        self.input_y = tf.placeholder(tf.float32, [None, num_classes],
                                      name="input_y")  # NN interface to take in complaint labels
        self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")

        # Keeping track of l2 regularization loss (optional)
        l2_loss = tf.constant(0.0)

        # Embedding layer - maps vocab word indices into low-dimensional vector representations (basically a LU table)
        # name_scope - adds all operations into a top-level node called 'embedding' - nice hierarchy when visualising in TB
        with tf.device('/cpu:0'), tf.name_scope("embedding"):
            self.W = tf.Variable(
                tf.random_uniform([vocab_size, embedding_size], -1.0, 1.0), trainable=False,
                name="W")
            self.embedded_chars = tf.nn.embedding_lookup(
                self.W, self.input_x)  # uses the weight matrix to map word indices in complaints
            self.embedded_chars_expanded = tf.expand_dims(
                self.embedded_chars, -1)  # expand dimensions of the tensor so that we can use conv2d

        # Create a convolution + maxpool layer for each filter size.
        # Since filters of different sizes produce tensors of different shapes,
        # we iterate over the filter sizes and build one layer for each.
        pooled_outputs = []
        for i, filter_size in enumerate(filter_sizes):
            with tf.name_scope("conv-maxpool-%s" % filter_size):
                # Convolution layer
                filter_shape = [filter_size, embedding_size, 1, num_filters]
                W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
                b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
                conv = tf.nn.conv2d(
                    self.embedded_chars_expanded,
                    W,
                    strides=[1, 1, 1, 1],
                    padding="VALID",
                    name="conv")
                # Apply nonlinearity
                h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
                # Maxpooling over the outputs
                pooled = tf.nn.max_pool(
                    h,
                    ksize=[1, sequence_length - filter_size + 1, 1, 1],
                    strides=[1, 1, 1, 1],
                    padding='VALID',
                    name="pool")
                pooled_outputs.append(pooled)

        # Combine all the pooled features
        num_filters_total = num_filters * len(filter_sizes)
        print(pooled_outputs)
        self.h_pool = tf.concat(pooled_outputs, axis=3)
        self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])

        # Add dropout
        with tf.name_scope("dropout"):
            self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)

        # Final (unnormalized) scores and predictions
        with tf.name_scope("output"):
            W = tf.get_variable(
                "W",
                shape=[num_filters_total, num_classes],
                initializer=tf.contrib.layers.xavier_initializer())
            b = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b")
            l2_loss += tf.nn.l2_loss(W)
            l2_loss += tf.nn.l2_loss(b)
            self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
            self.predictions = tf.argmax(self.scores, 1, name="predictions")

        # Calculate mean cross-entropy loss
        with tf.name_scope("loss"):
            print(self.scores)
            print(self.input_y)
            losses = tf.nn.softmax_cross_entropy_with_logits(logits=self.scores, labels=self.input_y)
            self.loss = tf.reduce_mean(losses) + l2_reg_lambda * l2_loss

        # Accuracy
        with tf.name_scope("accuracy"):
            correct_predictions = tf.equal(self.predictions, tf.argmax(self.input_y, 1))
            self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float"), name="accuracy")

Posted on 2018-02-24 05:06:46
This problem has nothing to do with train.py or text_cnn.py; those files build your model. After the model is built, make the following modifications in the eval.py code.

First, you can use the argparse library to take in the path to the JSON file:
import argparse

parser = argparse.ArgumentParser()
# Input Arguments
parser.add_argument(
    '--eval-file',
    help='local paths to evaluation data',
    nargs='+',
    required=True
)
args = parser.parse_args()

Then you can run the script like this:
python eval.py --eval-file YourJSONFile
Then use

import json
json.loads(Data)

to get the data from args, or use a dict to convert the data into the following array format:

x_raw = ["key", "Value"]
x_test = np.array(list(vocab_processor.transform(x_raw)))

Once your data has been converted into x_raw, the code above transforms it into the format TensorFlow expects.
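Putting those steps together, a minimal end-to-end sketch of the suggested eval.py changes might look like the following. The assumption that the JSON file holds a plain list of complaint strings, and the vocabulary path, are illustrative placeholders rather than part of the original answer:

# Minimal sketch, assuming the JSON file contains a plain list of raw
# complaint strings and that the VocabularyProcessor saved by train.py
# is available at vocab_path (both are placeholder assumptions).
import argparse
import json

import numpy as np
from tensorflow.contrib import learn

parser = argparse.ArgumentParser()
parser.add_argument('--eval-file', help='local paths to evaluation data',
                    nargs='+', required=True)
args = parser.parse_args()

# Read and parse the JSON file passed on the command line.
with open(args.eval_file[0]) as f:
    x_raw = json.load(f)  # e.g. ["complaint text one", "complaint text two"]

# Restore the vocabulary learned at training time and map each string to
# the fixed-length vector of word indices that the input_x placeholder expects.
vocab_path = 'path/to/vocab'  # vocabulary file written by train.py (placeholder)
vocab_processor = learn.preprocessing.VocabularyProcessor.restore(vocab_path)
x_test = np.array(list(vocab_processor.transform(x_raw)))

print(x_test.shape)  # (num_examples, sequence_length), integer word ids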
https://stackoverflow.com/questions/48812250