
Tensorflow tf.reshape() seems to behave differently from numpy.reshape()
Stack Overflow user
Asked 2016-09-24 08:18:20
2 answers · viewed 2.5K times · score 2

I am trying to train an LSTM network. It trains successfully one way but throws an error the other way. In the first example I reshape the input array X with numpy's reshape before calling fit; in the second I reshape it with tensorflow inside the model function.
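For what it's worth, the reshape operation itself is not where the two versions differ: tf.reshape follows the same row-major rule as np.reshape. A numpy-only sanity check of the shapes involved (using the same 1770×4 input as below):

```python
import numpy as np

X = np.ones([1770, 4])
# -1 is inferred: 1770 * 4 elements / (10 * 4) = 177 sequences
X3 = np.reshape(X, (-1, 10, 4))
print(X3.shape)  # (177, 10, 4)
```

So after the reshape, fit() sees 177 examples along axis 0, matching the 177 labels in y.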

Works:

import numpy as np
import tensorflow as tf
import tensorflow.contrib.learn as learn


# Parameters
learning_rate = 0.1
training_steps = 3000
batch_size = 128

# Network Parameters
n_input = 4
n_steps = 10
n_hidden = 128
n_classes = 6

X = np.ones([1770,4])
y = np.ones([177])

# NUMPY RESHAPE OUTSIDE RNN_MODEL
X = np.reshape(X, (-1, n_steps, n_input))

def rnn_model(X, y):

  # TENSORFLOW RESHAPE INSIDE RNN_MODEL
  #X = tf.reshape(X, [-1, n_steps, n_input])  # (batch_size, n_steps, n_input)

  # # permute n_steps and batch_size
  X = tf.transpose(X, [1, 0, 2])

  # # Reshape to prepare input to hidden activation
  X = tf.reshape(X, [-1, n_input])  # (n_steps*batch_size, n_input)
  # # Split data because rnn cell needs a list of inputs for the RNN inner loop
  X = tf.split(0, n_steps, X)  # n_steps * (batch_size, n_input)

  # Define an LSTM cell with tensorflow
  lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
  # Get lstm cell output
  _, encoding = tf.nn.rnn(lstm_cell, X, dtype=tf.float32)

  return learn.models.logistic_regression(encoding, y)


classifier = learn.TensorFlowEstimator(model_fn=rnn_model, n_classes=n_classes,
                                       batch_size=batch_size,
                                       steps=training_steps,
                                       learning_rate=learning_rate)

classifier.fit(X,y)

Does not work:

import numpy as np
import tensorflow as tf
import tensorflow.contrib.learn as learn


# Parameters
learning_rate = 0.1
training_steps = 3000
batch_size = 128

# Network Parameters
n_input = 4
n_steps = 10
n_hidden = 128
n_classes = 6

X = np.ones([1770,4])
y = np.ones([177])

# NUMPY RESHAPE OUTSIDE RNN_MODEL
#X = np.reshape(X, (-1, n_steps, n_input))

def rnn_model(X, y):

  # TENSORFLOW RESHAPE INSIDE RNN_MODEL
  X = tf.reshape(X, [-1, n_steps, n_input])  # (batch_size, n_steps, n_input)

  # # permute n_steps and batch_size
  X = tf.transpose(X, [1, 0, 2])

  # # Reshape to prepare input to hidden activation
  X = tf.reshape(X, [-1, n_input])  # (n_steps*batch_size, n_input)
  # # Split data because rnn cell needs a list of inputs for the RNN inner loop
  X = tf.split(0, n_steps, X)  # n_steps * (batch_size, n_input)

  # Define an LSTM cell with tensorflow
  lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
  # Get lstm cell output
  _, encoding = tf.nn.rnn(lstm_cell, X, dtype=tf.float32)

  return learn.models.logistic_regression(encoding, y)


classifier = learn.TensorFlowEstimator(model_fn=rnn_model, n_classes=n_classes,
                                       batch_size=batch_size,
                                       steps=training_steps,
                                       learning_rate=learning_rate)

classifier.fit(X,y)

The latter throws the following error:

WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.BasicLSTMCell object at 0x7f1c67c6f750>: Using a concatenated state is slower and will soon be deprecated.  Use state_is_tuple=True.
Traceback (most recent call last):
  File "/home/blabla/test.py", line 47, in <module>
    classifier.fit(X,y)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/base.py", line 160, in fit
    monitors=monitors)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 484, in _train_model
    monitors=monitors)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/graph_actions.py", line 328, in train
    reraise(*excinfo)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/graph_actions.py", line 254, in train
    feed_dict = feed_fn() if feed_fn is not None else None
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/io/data_feeder.py", line 366, in _feed_dict_fn
    out.itemset((i, self.y[sample]), 1.0)
IndexError: index 974 is out of bounds for axis 0 with size 177

2 Answers

Stack Overflow user

Answered 2016-09-27 04:26:08

A couple of suggestions: use an input_fn with fit instead of passing X and y directly, and use learn.Estimator instead of learn.TensorFlowEstimator.

Since your data is small, the following should work. Otherwise, you would need to batch the data.

  def _my_inputs():
      return tf.constant(np.ones([1770, 4])), tf.constant(np.ones([177]))

Score 0

Stack Overflow user

Answered 2016-09-27 06:30:20

I was able to get it to work with a few small changes:

# Parameters
learning_rate = 0.1
training_steps = 10
batch_size = 8

# Network Parameters
n_input = 4
n_steps = 10
n_hidden = 128
n_classes = 6

X = np.ones([177, 10, 4])  # <---- Use shape [batch_size, n_steps, n_input] here.
y = np.ones([177])

def rnn_model(X, y):
  X = tf.transpose(X, [1, 0, 2])  #|
  X = tf.unpack(X)                #| These two lines do the same thing as your code, just a bit simpler ;)

  # Define an LSTM cell with tensorflow
  lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
  # Get lstm cell output
  outputs, _ = tf.nn.rnn(lstm_cell, X, dtype=tf.float64)  # <---- I think you want to use the first return value here.

  return tf.contrib.learn.models.logistic_regression(outputs[-1], y)  # <----uses just the last output for classification, as is typical with RNNs.


classifier = tf.contrib.learn.TensorFlowEstimator(model_fn=rnn_model,
                                                  n_classes=n_classes,
                                                  batch_size=batch_size,
                                                  steps=training_steps,
                                                  learning_rate=learning_rate)

classifier.fit(X,y)

I think your core problem is that X must have shape [batch, ...] when it is passed to fit(...). When you reshape it with numpy outside the rnn_model() function, X already has this shape, so training works.
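To see the mismatch concretely, here is a numpy-only sketch (assuming, as the traceback suggests, that the data feeder draws a sample index along axis 0 of X and uses the same index into y):

```python
import numpy as np

X = np.ones([1770, 4])   # un-reshaped: the feeder sees 1770 "examples"
y = np.ones([177])       # ...but only 177 labels exist

sample = 974             # a valid row index into X
row = X[sample]          # shape (4,), no problem
try:
    y[sample]            # the same index into y is out of range
except IndexError as err:
    print(err)           # index 974 is out of bounds for axis 0 with size 177
```

Reshaping X to [177, 10, 4] before fit() makes both arrays agree on the number of examples, which is exactly what the working version does.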

I can't speak to the quality of the model this solution produces, but at least it runs!

Score 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/39671253