How to use tensorflow's Dataset API Iterator as the input of a (recurrent) neural network?
Stack Overflow user
Asked on 2017-11-20 13:36:56
1 answer · 4K views · 2 votes

When using tensorflow's Dataset API Iterator, my goal is to define an RNN that takes the iterator's get_next() tensor as its input (see (1) in the code).

However, simply defining dynamic_rnn with get_next() as its input raises an error: ValueError: Initializer for variable rnn/basic_lstm_cell/kernel/ is from inside a control-flow construct, such as a loop or conditional. When creating a variable inside a loop or conditional, use a lambda as the initializer.

Now, I know one workaround is to create a placeholder X for next_batch and feed the evaluated tensor via feed_dict (since you cannot feed the tensor itself); see (2) in the code. However, if I understand it correctly, this is not an efficient solution, because we first evaluate the tensor and then feed it back in through the placeholder.

Is there a way to either:

  1. define dynamic_rnn directly on top of the Iterator's output,

or:

  2. somehow pass the existing get_next() tensor to the placeholder that serves as dynamic_rnn's input?

Full example below; version (1) is what I would like to work, but it doesn't, while (2) is the workaround that does.

import tensorflow as tf

from tensorflow.contrib.rnn import BasicLSTMCell
from tensorflow.python.data import Iterator

data = [ [[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ]
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.batch(2)
iterator = Iterator.from_structure(dataset.output_types,
                                   dataset.output_shapes)
next_batch = iterator.get_next()
iterator_init = iterator.make_initializer(dataset)

# (2):
X = tf.placeholder(tf.float32, shape=(None, 3, 1))

cell = BasicLSTMCell(num_units=8)

# (1):
# outputs, states = lstm_outputs, lstm_states = tf.nn.dynamic_rnn(cell, next_batch, dtype=tf.float32)

# (2):
outputs, states = lstm_outputs, lstm_states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(iterator_init)

    # (1):
    # o, s = sess.run([outputs, states])
    # o, s = sess.run([outputs, states])

    # (2):
    o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
    o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})

(Using tensorflow 1.4.0, Python 3.6)

(Many thanks in advance :)


1 Answer

Stack Overflow user

Accepted answer

Posted on 2017-11-21 09:25:03

It turns out that this mysterious error is most likely a bug in tensorflow; see https://github.com/tensorflow/tensorflow/issues/14729. More specifically, the error actually comes from feeding in data of the wrong type (in my example above, the data array contains int32 values, but it should contain floats).

Instead of raising the ValueError: Initializer for variable rnn/basic_lstm_cell/kernel/ is from inside a control-flow construct error,

tensorflow should return:

TypeError: Tensors in list passed to 'values' of 'ConcatV2' Op have types [int32, float32] that don't all match. (see the issue linked above).

To fix this, simply change

data = [ [[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ]

to

data = np.array([[ [1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ], dtype=np.float32)
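The difference between the two lines can be verified before any graph is built; a minimal NumPy sketch (independent of tensorflow) showing how the dtype is inferred from the literals versus set explicitly:

```python
import numpy as np

# Without an explicit dtype, NumPy infers an integer dtype from the
# integer literals, and tf.data then propagates that dtype into the graph.
data_int = np.array([[[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]]])

# Passing dtype=np.float32 produces the float array that dynamic_rnn
# (called with dtype=tf.float32) expects.
data_float = np.array([[[1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]]],
                      dtype=np.float32)

print(data_int.dtype)    # an integer dtype (exact width is platform-dependent)
print(data_float.dtype)  # float32
```

Alternatively, the cast could be done inside the pipeline with dataset.map(lambda x: tf.cast(x, tf.float32)); fixing the source array's dtype, as in the answer, avoids the extra op.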

After that, the following code works as expected:

import tensorflow as tf
import numpy as np

from tensorflow.contrib.rnn import BasicLSTMCell
from tensorflow.python.data import Iterator

data = np.array([[ [1], [2], [3]], [[4], [5], [6]], [[1], [2], [3]] ], dtype=np.float32)
dataset = tf.data.Dataset.from_tensor_slices(data)
dataset = dataset.batch(2)
iterator = Iterator.from_structure(dataset.output_types,
                                   dataset.output_shapes)
next_batch = iterator.get_next()
iterator_init = iterator.make_initializer(dataset)

# (2):
# X = tf.placeholder(tf.float32, shape=(None, 3, 1))

cell = BasicLSTMCell(num_units=8)

# (1):
outputs, states = lstm_outputs, lstm_states = tf.nn.dynamic_rnn(cell, next_batch, dtype=tf.float32)

# (2):
# outputs, states = lstm_outputs, lstm_states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    sess.run(iterator_init)

    # (1):
    o, s = sess.run([outputs, states])
    o, s = sess.run([outputs, states])

    # (2):
    # o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
    # o, s = sess.run([outputs, states], feed_dict={X: next_batch.eval()})
5 votes
Original content provided by Stack Overflow.
Original link:

https://stackoverflow.com/questions/47393356