
Error while running the seq2seq model

Stack Overflow user
Asked on 2015-11-18 05:59:46
2 answers, 2.3K views, 0 followers, 0 votes

While running the RNN tutorial example, the following error appears after the "reading data line" statements:

reading data line 22500000

W tensorflow/core/common_runtime/executor.cc:1052] 0x3ef81ae60 Compute status: Not found: ./checkpoints_directory/translate.ckpt-200.tempstate15092134273276121938
         [[Node: save/save = SaveSlices[T=[DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT
_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOA
T, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/save/tensor_names, save/save/shapes_and_slices, Variable, Variable_1, embedding_attention_seq2seq/RNN/EmbeddingWrappe
r/embedding, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell0/GRUCell/Candidate/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell0/GRUCell/Candidate/Linear/Matrix, embedding_attention_seq2se
q/RNN/MultiRNNCell/Cell0/GRUCell/Gates/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell0/GRUCell/Gates/Linear/Matrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell1/GRUCell/Candidate/Line
ar/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell1/GRUCell/Candidate/Linear/Matrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell1/GRUCell/Gates/Linear/Bias, embedding_attention_seq2seq/RNN/Mu
ltiRNNCell/Cell1/GRUCell/Gates/Linear/Matrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell2/GRUCell/Candidate/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell2/GRUCell/Candidate/Linear/M
atrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell2/GRUCell/Gates/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell2/GRUCell/Gates/Linear/Matrix, embedding_attention_seq2seq/embedding_at
tention_decoder/attention_decoder/Attention_0/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/Attention_0/Linear/Matrix, embedding_attention_seq2seq/embedding_attenti
on_decoder/attention_decoder/AttnOutputProjection/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnOutputProjection/Linear/Matrix, embedding_attention_seq2seq/embe
dding_attention_decoder/attention_decoder/AttnV_0, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnW_0, embedding_attention_seq2seq/embedding_attention_decoder/attention_decod
er/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell0/GRUCell
/Candidate/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell0/GRUCell/Candidate/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder
/attention_decoder/MultiRNNCell/Cell0/GRUCell/Gates/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell0/GRUCell/Gates/Linear/Matrix, embedding_attentio
n_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell1/GRUCell/Candidate/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell1/GRUCel
l/Candidate/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell1/GRUCell/Gates/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/at
tention_decoder/MultiRNNCell/Cell1/GRUCell/Gates/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell2/GRUCell/Candidate/Linear/Bias, embedding_attenti
on_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell2/GRUCell/Candidate/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell2/GRU
Cell/Gates/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/MultiRNNCell/Cell2/GRUCell/Gates/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder/emb
edding, proj_b, proj_w)]]
global step 200 learning rate 0.5000 step-time 14.56 perplexity 2781.37
Traceback (most recent call last):
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/models/rnn/tran
slate/translate.py", line 264, in <module>
    tf.app.run()
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/platform
/default/_app.py", line 15, in run
    sys.exit(main(sys.argv))
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/models/rnn/tran
slate/translate.py", line 261, in main
    train()
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/models/rnn/tran
slate/translate.py", line 180, in train
    model.saver.save(sess, checkpoint_path, global_step=model.global_step)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/training
/saver.py", line 847, in save
    self._save_tensor_name, {self._filename_tensor_name: checkpoint_file})
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/client/s
ession.py", line 401, in run
    results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/client/s
ession.py", line 477, in _do_run 
    e.code)
tensorflow.python.framework.errors.NotFoundError: ./checkpoints_directory/translate.ckpt-200.tempstate15092134273276121938
         [[Node: save/save = SaveSlices[T=[DT_FLOAT, DT_INT32, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/save/tensor_names, save/save/shapes_and_slices, Variable, Variable_1, embedding_attention_seq2seq/RNN/EmbeddingWrapper/embedding, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell0/GRUCell/Candidate/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell0/GRUCell/Candidate/Linear/Matrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell0/GRUCell/Gates/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell0/GRUCell/Gates/Linear/Matrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell1/GRUCell/Candidate/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell1/GRUCell/Candidate/Linear/Matrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell1/GRUCell/Gates/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell1/GRUCell/Gates/Linear/Matrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell2/GRUCell/Candidate/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell2/GRUCell/Candidate/Linear/Matrix, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell2/GRUCell/Gates/Linear/Bias, embedding_attention_seq2seq/RNN/MultiRNNCell/Cell2/GRUCell/Gates/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/Attention_0/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/Attention_0/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnOutputProjection/Linear/Bias, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnOutputProjection/Linear/Matrix, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnV_0, embedding_attention_seq2seq/embedding_attention_decoder/attention_decoder/AttnW_0, embedding_attention_seq2seq/embedding_attention_decoder/attention_decod

The error output continues with the stack from where the save op was originally constructed:

  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/platform/default/_app.py", line 15, in run
    sys.exit(main(sys.argv))
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/models/rnn/translate/translate.py", line 261, in main
    train()
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/models/rnn/translate/translate.py", line 130, in train
    model = create_model(sess,
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/models/rnn/translate/translate.py", line 109, in create_model
    forward_only=forward_only)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/models/rnn/translate/seq2seq_model.py", line 153, in __init__
    self.saver = tf.train.Saver(tf.all_variables())
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/training/saver.py", in __init__
    restore_sequentially=restore_sequentially)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/training/saver.py", line 411, in build
    save_tensor = self._AddSaveOps(filename_tensor, vars_to_save)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/training/saver.py", line 114, in _AddSaveOps
    save = self.save_op(filename_tensor, vars_to_save)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/training/saver.py", line 68, in save_op
    tensor_slices=[vs.slice_spec for vs in vars_to_save])
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/ops/io_ops.py", line 149, in _save
    tensors, name=name)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/ops/gen_io_ops.py", line 343, in _save_slices
    name=name)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/ops/op_def_library.py", line 646, in apply_op
    op_def=op_def)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/framework/ops.py", line 1767, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/temp_user/.cache/bazel/_bazel_temp_user/7cf40d683d56020fae2d5abbde7f9f05/tensorflow/bazel-out/local_linux-opt/bin/tensorflow/models/rnn/translate/translate.runfiles/tensorflow/python/framework/ops.py", line 1008, in __init__
    self._traceback = _extract_stack()

ERROR: Non-zero return code '1' from command: Process exited with status 1.

So what is causing this problem, given that the other language model example works and the libraries have been built? Following the comments, I created the checkpoint directory, but it still throws the same error: tensorflow/core/common_runtime/executor.cc:1052] 0x400d2bbe0 Compute status: Not found: ./checkpoints_directory/translate.ckpt-200.tempstate9246663217899500702


2 answers

Stack Overflow user

Answered on 2015-11-19 02:22:49

I think this is one of the problems that occurs when a previous checkpoint was not saved correctly. You can fix it with the following steps.

1. You can delete all the checkpoint files and restart training:

rm checkpoint
rm translate.ckpt-*

Now, restart your training.

Alternatively, you can delete the latest checkpoint and resume from the previous one.

1. Go to the checkpoint directory and delete the latest checkpoint, which in this case is:

rm translate.ckpt-200

2. Now edit the checkpoint file. You will probably see something like:

model_checkpoint_path: "data/translate.ckpt-200"
all_model_checkpoint_paths: "data/translate.ckpt-170"
all_model_checkpoint_paths: "data/translate.ckpt-180"
all_model_checkpoint_paths: "data/translate.ckpt-190"
all_model_checkpoint_paths: "data/translate.ckpt-200"

3. Delete the last line and point the checkpoint at the previous step:

model_checkpoint_path: "data/translate.ckpt-190"
all_model_checkpoint_paths: "data/translate.ckpt-170"
all_model_checkpoint_paths: "data/translate.ckpt-180"
all_model_checkpoint_paths: "data/translate.ckpt-190"

4. Restart training. (A scripted version of steps 1 to 3 is sketched below.)
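For illustration only, here is a small helper that scripts the rollback described above; it is not part of the original answer. The function name rollback_checkpoint and the directory argument are my own, and it assumes the index file is literally named "checkpoint", lives in the checkpoint directory, and uses the model_checkpoint_path / all_model_checkpoint_paths format shown in step 2.

# Hypothetical helper (not from the original answer): roll the checkpoint
# index back to the previous checkpoint after a failed save.
import os

def rollback_checkpoint(checkpoint_dir):
    index_path = os.path.join(checkpoint_dir, "checkpoint")
    with open(index_path) as f:
        lines = [line.strip() for line in f if line.strip()]

    # Collect the all_model_checkpoint_paths entries in the order written.
    all_paths = [line.split('"')[1] for line in lines
                 if line.startswith("all_model_checkpoint_paths")]
    if len(all_paths) < 2:
        raise ValueError("no earlier checkpoint to roll back to")

    broken, previous = all_paths[-1], all_paths[-2]

    # Step 1: remove the files of the (possibly corrupt) latest checkpoint.
    for name in os.listdir(checkpoint_dir):
        if name.startswith(os.path.basename(broken)):
            os.remove(os.path.join(checkpoint_dir, name))

    # Steps 2-3: rewrite the index so it points at the previous checkpoint
    # and no longer lists the removed one.
    with open(index_path, "w") as f:
        f.write('model_checkpoint_path: "%s"\n' % previous)
        for path in all_paths[:-1]:
            f.write('all_model_checkpoint_paths: "%s"\n' % path)

rollback_checkpoint("./checkpoints_directory")

After running it, restarting training (step 4) resumes from the previous checkpoint exactly as in the manual procedure.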

2 votes

Stack Overflow user

Answered on 2016-12-06 18:12:56

I had the same problem when running the seq2seq model, and creating the checkpoint directory before running the code solved it!
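For what it's worth, a minimal sketch of that fix is shown below. The answer only says to create the directory before running the code; the directory name (taken from the error message above) and doing it from Python rather than with a plain mkdir are my own assumptions.

# Minimal sketch, assuming checkpoints go to ./checkpoints_directory as in
# the error message. Create the directory up front so saver.save() can write
# its temporary state files into it.
import os

checkpoint_dir = "./checkpoints_directory"
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)

Equivalently, running mkdir checkpoints_directory before starting translate.py achieves the same thing.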

0 votes
The original content of this page is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/33772819
