
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapperState'

Stack Overflow user
Asked 2019-01-01 03:25:35
Answers: 2 · Views: 1.9K · Followers: 0 · Votes: 1

I get this error message when using TensorFlow 1.11.0:

[['model', '300000']]
Jan 01 03:24 test.py[line:53] INFO Test model/model.ckpt-300000. 
Jan 01 03:24 test.py[line:57] INFO Test data/test.1.txt with beam_size = 1
Jan 01 03:24 data_util.py[line:17] INFO Try load dict from data/doc_dict.txt.
Jan 01 03:24 data_util.py[line:33] INFO Load dict data/doc_dict.txt with 30000 words.
Jan 01 03:24 data_util.py[line:17] INFO Try load dict from data/sum_dict.txt.
Jan 01 03:24 data_util.py[line:33] INFO Load dict data/sum_dict.txt with 30000 words.
Jan 01 03:24 data_util.py[line:172] INFO Load test document from data/test.1.txt.
Jan 01 03:24 data_util.py[line:178] INFO Load 1 testing documents.
Jan 01 03:24 data_util.py[line:183] INFO Doc dict covers 75.61% words.
2019-01-01 03:24:51.426388: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Jan 01 03:24 summarization.py[line:195] INFO Creating 1 layers of 400 units.
Traceback (most recent call last):
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 196, in decode
    model = create_model(sess, True)
  File "src/summarization.py", line 75, in create_model
    dtype=dtype)
  File "/TensorFlow-Summarization/src/bigru_model.py", line 89, in __init__
    wrapper_state = tf.contrib.seq2seq.DynamicAttentionWrapperState(
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapperState'
Jan 01 03:24 test.py[line:57] INFO Test data/test.1.txt with beam_size = 10
Jan 01 03:25 data_util.py[line:17] INFO Try load dict from data/doc_dict.txt.
Jan 01 03:25 data_util.py[line:33] INFO Load dict data/doc_dict.txt with 30000 words.
Jan 01 03:25 data_util.py[line:17] INFO Try load dict from data/sum_dict.txt.
Jan 01 03:25 data_util.py[line:33] INFO Load dict data/sum_dict.txt with 30000 words.
Jan 01 03:25 data_util.py[line:172] INFO Load test document from data/test.1.txt.
Jan 01 03:25 data_util.py[line:178] INFO Load 1 testing documents.
Jan 01 03:25 data_util.py[line:183] INFO Doc dict covers 75.61% words.
2019-01-01 03:25:02.643185: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Jan 01 03:25 summarization.py[line:195] INFO Creating 1 layers of 400 units.
Traceback (most recent call last):
  File "src/summarization.py", line 241, in <module>
    tf.app.run()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "src/summarization.py", line 229, in main
    decode()
  File "src/summarization.py", line 196, in decode
    model = create_model(sess, True)
  File "src/summarization.py", line 75, in create_model
    dtype=dtype)
  File "/TensorFlow-Summarization/src/bigru_model.py", line 89, in __init__
    wrapper_state = tf.contrib.seq2seq.DynamicAttentionWrapperState(
AttributeError: module 'tensorflow.contrib.seq2seq' has no attribute 'DynamicAttentionWrapperState'

Code:
wrapper_state = tf.contrib.seq2seq.DynamicAttentionWrapperState(
                    self.init_state, self.prev_att)

2 Answers

Stack Overflow user

Answered 2019-01-01 04:05:24

This is probably because, according to the documentation, there is no DynamicAttentionWrapper in tf.contrib.seq2seq in API 1.11.0.

They added a monotonic attention wrapper in release 1.3.0.

Votes: 0

Stack Overflow user

Answered 2019-01-02 20:26:53

The API the question uses is deprecated; use tf.contrib.seq2seq.AttentionWrapper instead. I guess you borrowed some code from https://github.com/thunlp/TensorFlow-Summarization/blob/master/src/bigru_model.py.

attention = tf.contrib.seq2seq.BahdanauAttention(
    num_units=size_layer, memory=encoder_out,
    memory_sequence_length=seq_len)

decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    cell=tf.nn.rnn_cell.MultiRNNCell(
        [lstm_cell(reuse) for _ in range(num_layers)]),
    attention_mechanism=attention,
    attention_layer_size=size_layer)
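For the failing line in bigru_model.py itself, the usual TF >= 1.2 migration is to build the initial state via AttentionWrapper's zero_state rather than constructing a DynamicAttentionWrapperState by hand. A sketch only, not tested against this repo; attn_cell, batch_size, and dtype are assumed names from the surrounding bigru_model.py code:

```
# Sketch: DynamicAttentionWrapper was renamed AttentionWrapper in TF 1.2,
# and its initial state comes from zero_state(...).clone(...) instead of
# DynamicAttentionWrapperState(self.init_state, self.prev_att).
wrapper_state = attn_cell.zero_state(batch_size, dtype).clone(
    cell_state=self.init_state)
```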
Votes: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/53990810