
TensorFlow 1.1 MultiRNNCell shape error (related to init_state)

Stack Overflow user
Asked on 2017-06-05 03:55:35
1 answer · 756 views · 0 followers · 0 votes

UPDATE: I strongly suspect the error is related to the init_state that is created and passed to tf.nn.dynamic_rnn(...) as an argument. The question then becomes: what is the correct shape, or the correct way to construct, the initial state of a stacked RNN?

I am trying to get a MultiRNNCell definition working in TensorFlow 1.1.

The code below defines a helper function that creates GRU cells. The basic idea is that the placeholder x holds a long string of numeric data. A reshape breaks this data into equal-length frames, with one frame presented at each time step. I then want to process this through a stack of two cells.

def gru_cell(state_size):
     cell = tf.contrib.rnn.GRUCell(state_size)
     return cell

graph = tf.Graph()
with graph.as_default():

     x = tf.placeholder(tf.float32, [batch_size, num_samples], name="Input_Placeholder")
     y = tf.placeholder(tf.int32, [batch_size, num_frames], name="Labels_Placeholder")

     init_state = tf.zeros([batch_size, state_size], name="Initial_State_Placeholder")

     rnn_inputs = tf.reshape(x, (batch_size, num_frames, frame_length))
     cell = tf.contrib.rnn.MultiRNNCell([gru_cell(state_size) for _ in range(2)], state_is_tuple=False)
     rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state) 

The graph definition continues from there with a loss function, an optimizer, and so on. But this is where it breaks down, producing the lengthy error below.

In the last part of the error it becomes relevant that batch_size is 10 while frame_length and state_size are both 80.

ValueError                                Traceback (most recent call last)
<ipython-input-30-4c48b596e055> in <module>()
     14     print(rnn_inputs)
     15     cell = tf.contrib.rnn.MultiRNNCell([gru_cell(state_size) for _ in range(2)], state_is_tuple=False)
---> 16     rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state)
     17 
     18     with tf.variable_scope('softmax'):

/home/novak/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/rnn.pyc in dynamic_rnn(cell, inputs, sequence_length, initial_state, dtype, parallel_iterations, swap_memory, time_major, scope)
    551         swap_memory=swap_memory,
    552         sequence_length=sequence_length,
--> 553         dtype=dtype)
    554 
    555     # Outputs of _dynamic_rnn_loop are always shaped [time, batch, depth].

/home/novak/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/rnn.pyc in _dynamic_rnn_loop(cell, inputs, initial_state, parallel_iterations, swap_memory, sequence_length, dtype)
    718       loop_vars=(time, output_ta, state),
    719       parallel_iterations=parallel_iterations,
--> 720       swap_memory=swap_memory)
    721 
    722   # Unpack final output if not using output tuples.

/home/novak/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in while_loop(cond, body, loop_vars, shape_invariants, parallel_iterations, back_prop, swap_memory, name)
   2621     context = WhileContext(parallel_iterations, back_prop, swap_memory, name)
   2622     ops.add_to_collection(ops.GraphKeys.WHILE_CONTEXT, context)
-> 2623     result = context.BuildLoop(cond, body, loop_vars, shape_invariants)
   2624     return result
   2625 

/home/novak/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in BuildLoop(self, pred, body, loop_vars, shape_invariants)
   2454       self.Enter()
   2455       original_body_result, exit_vars = self._BuildLoop(
-> 2456           pred, body, original_loop_vars, loop_vars, shape_invariants)
   2457     finally:
   2458       self.Exit()

/home/novak/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in _BuildLoop(self, pred, body, original_loop_vars, loop_vars, shape_invariants)
   2435     for m_var, n_var in zip(merge_vars, next_vars):
   2436       if isinstance(m_var, ops.Tensor):
-> 2437         _EnforceShapeInvariant(m_var, n_var)
   2438 
   2439     # Exit the loop.

/home/novak/anaconda/lib/python2.7/site-packages/tensorflow/python/ops/control_flow_ops.pyc in _EnforceShapeInvariant(merge_var, next_var)
    565           "Provide shape invariants using either the `shape_invariants` "
    566           "argument of tf.while_loop or set_shape() on the loop variables."
--> 567           % (merge_var.name, m_shape, n_shape))
    568   else:
    569     if not isinstance(var, (ops.IndexedSlices, sparse_tensor.SparseTensor)):

ValueError: The shape for rnn/while/Merge_2:0 is not an invariant for the loop. It enters the loop with shape (10, 80), but has shape (10, 160) after one iteration. Provide shape invariants using either the `shape_invariants` argument of tf.while_loop or set_shape() on the loop variables.

It is almost as if the network starts out as a stack of 2 layers of width 80 and is somehow converted into a single layer of width 160. Can anyone help? Am I misunderstanding the use of MultiRNNCell?
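Inspecting the stacked cell's state_size seems to confirm that something is concatenating the per-layer states; this is a minimal check, assuming the same TF 1.x contrib API and the values above:

import tensorflow as tf

state_size = 80  # same value as above

# With state_is_tuple=False, MultiRNNCell reports a single concatenated state,
# so state_size comes back as the sum over layers rather than a tuple.
stacked = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.GRUCell(state_size) for _ in range(2)],
    state_is_tuple=False)
print(stacked.state_size)  # prints 160, i.e. 2 * 80, matching the (10, 160) in the error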


1 Answer

Stack Overflow user

Answered on 2017-06-05 21:18:56

Following Allen Lavoie's comment above, the corrected code is:

def gru_cell(state_size):
     cell = tf.contrib.rnn.GRUCell(state_size)
     return cell

num_layers = 2  # <---------
graph = tf.Graph()
with graph.as_default():

     x = tf.placeholder(tf.float32, [batch_size, num_samples], name="Input_Placeholder")
     y = tf.placeholder(tf.int32, [batch_size, num_frames], name="Labels_Placeholder")

     init_state = tf.zeros([batch_size, num_layers * state_size], name="Initial_State_Placeholder") # <---------

     rnn_inputs = tf.reshape(x, (batch_size, num_frames, frame_length))
     cell = tf.contrib.rnn.MultiRNNCell([gru_cell(state_size) for _ in range(num_layers)], state_is_tuple=False) # <---------
     rnn_outputs, final_state = tf.nn.dynamic_rnn(cell, rnn_inputs, initial_state=init_state) 

Note the three changes marked above. Also note that these changes must ripple through to everywhere init_state flows, especially where you feed it in a feed_dict.
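As a concrete illustration of that last point, here is a minimal sketch of feeding the widened init_state through a feed_dict. The dummy batch values and the num_frames/num_samples below are assumed purely for illustration, while graph, x, y, init_state, rnn_outputs and final_state refer to the corrected code above:

import numpy as np

batch_size, state_size, num_layers = 10, 80, 2   # values from the question
num_frames, frame_length = 5, 80                 # num_frames is an assumed example value
num_samples = num_frames * frame_length

with graph.as_default():
    init_op = tf.global_variables_initializer()

with tf.Session(graph=graph) as sess:
    sess.run(init_op)
    feed = {
        x: np.zeros((batch_size, num_samples), dtype=np.float32),  # dummy input batch
        y: np.zeros((batch_size, num_frames), dtype=np.int32),     # dummy labels
        # The fed state must now be num_layers * state_size wide, i.e. (10, 160).
        init_state: np.zeros((batch_size, num_layers * state_size), dtype=np.float32),
    }
    outputs, state = sess.run([rnn_outputs, final_state], feed_dict=feed)
    print(state.shape)  # (10, 160)

Equivalently, cell.zero_state(batch_size, tf.float32) produces a zero state with the correct concatenated width for whatever num_layers is, which avoids hand-computing the width.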

Votes: 1
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/44361448
