
TensorFlow training of a toy LSTM fails
Asked by a Stack Overflow user on 2018-03-26 15:37:27 · 1 answer · 127 views · 0 followers · 1 vote

I am trying to get familiar with recurrent networks in TensorFlow, using a toy sequence-classification problem.

Data:

half_len = 500
pos_ex = [1, 2, 3, 4, 5] # Positive sequence.
neg_ex = [1, 2, 3, 4, 6] # Negative sequence.
num_input = len(pos_ex)
data = np.concatenate((np.stack([pos_ex]*half_len), np.stack([neg_ex]*half_len)), axis=0)
labels = np.asarray([0, 1] * half_len + [1, 0] * half_len).reshape((2 * half_len, -1))
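For intuition, the label construction above assigns the one-hot vector [0, 1] to every positive row and [1, 0] to every negative row; a quick NumPy check (not part of the original question):

```python
import numpy as np

half_len = 500
labels = np.asarray([0, 1] * half_len + [1, 0] * half_len).reshape((2 * half_len, -1))

print(labels.shape)      # (1000, 2): one one-hot row per example
print(labels[0])         # first (positive) example -> [0 1]
print(labels[half_len])  # first (negative) example -> [1 0]
```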

Model:

_, x_width = data.shape
X = tf.placeholder("float", [None, x_width])
Y = tf.placeholder("float", [None, num_classes])

weights = tf.Variable(tf.random_normal([num_input, n_hidden]))
bias = tf.Variable(tf.random_normal([n_hidden]))


def lstm_model():
    from tensorflow.contrib import rnn
    x = tf.split(X, num_input, 1)
    rnn_cell = rnn.BasicLSTMCell(n_hidden)
    outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)
    return tf.matmul(outputs[-1], weights) + bias
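The `tf.split(X, num_input, 1)` call in the model slices the `[batch, 5]` input into five `[batch, 1]` tensors, so the LSTM treats each scalar feature as one timestep. The equivalent slicing in NumPy (a sketch for intuition only, not from the original code):

```python
import numpy as np

batch = np.array([[1, 2, 3, 4, 5],
                  [1, 2, 3, 4, 6]])     # shape [batch=2, 5]

# Split along axis 1 into five timesteps, each of shape [batch, 1].
timesteps = np.split(batch, 5, axis=1)

print(len(timesteps))          # 5 timesteps
print(timesteps[0].shape)      # (2, 1)
print(timesteps[-1].ravel())   # last timestep holds the distinguishing values: [5 6]
```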

Training:

logits = lstm_model()
prediction = tf.nn.softmax(logits)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Train...

My training accuracy hovers around 0.5, which puzzles me, since the problem is so simple:

Step 1, Minibatch Loss = 82.2726, Training Accuracy = 0.453
Step 25, Minibatch Loss = 6.7920, Training Accuracy = 0.547
Step 50, Minibatch Loss = 0.8528, Training Accuracy = 0.500
Step 75, Minibatch Loss = 0.6989, Training Accuracy = 0.500
Step 100, Minibatch Loss = 0.6929, Training Accuracy = 0.516

Changing the toy data to:

pos_ex = [1, 2, 3, 4, 5]
neg_ex = [1, 2, 3, 4, 100]

solves the problem. Can anyone explain why this network fails on such a simple task? Thanks.

The code above is based on this tutorial.


1 Answer

Answered by a Stack Overflow user (accepted) on 2018-03-27 11:07:15

Have you tried lowering the learning rate?

In your second example, the separating value in the last coordinate is much larger. In principle that should make no difference, but it does affect which learning rates work.

If you normalize the data (rescale each coordinate to lie between -1 and 1) and choose an appropriate step size, both problems should be solvable in the same number of steps.
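One way to do the suggested normalization is a per-column min-max rescale into [-1, 1]; a minimal NumPy sketch (the helper name is mine, not from the original answer):

```python
import numpy as np

def normalize_columns(data):
    """Rescale each column of `data` linearly into [-1, 1]."""
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against division by zero for constant columns
    return 2.0 * (data - lo) / span - 1.0

data = np.array([[1, 2, 3, 4, 5],
                 [1, 2, 3, 4, 100]], dtype=float)

# Constant columns collapse to -1; the distinguishing last column maps to -1 and 1.
print(normalize_columns(data))
```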

Edit: after playing with your toy example a bit, the following works even without normalization:

import tensorflow as tf
import numpy as np
from tensorflow.contrib import rnn

# Meta parameters

n_hidden = 10
num_classes = 2
learning_rate = 1e-2
input_dim = 5
num_input = 5



# inputs
X = tf.placeholder("float", [None, input_dim])
Y = tf.placeholder("float", [None, num_classes])

# Model
def lstm_model():
    # input layer
    x = tf.split(X, num_input, 1)

    # LSTM layer
    rnn_cell = rnn.BasicLSTMCell(n_hidden)
    outputs, states = rnn.static_rnn(rnn_cell, x, dtype=tf.float32)

    # final layer - softmax
    weights = tf.Variable(tf.random_normal([n_hidden, num_classes]))
    bias = tf.Variable(tf.random_normal([num_classes]))
    return tf.matmul(outputs[-1], weights) + bias

# logits and prediction
logits = lstm_model()
prediction = tf.nn.softmax(logits)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)


# -----------
# Train func
# -----------
def train(data,labels):

    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        for i in range(1000):
            _, loss, onehot_pred = session.run([train_op, loss_op, prediction], feed_dict={X: data, Y: labels})
            acc = np.mean(np.argmax(onehot_pred,axis=1) == np.argmax(labels,axis=1))
            print('Iteration {} accuracy: {}'.format(i,acc))
            if acc == 1:
                print('---> Finished after {} iterations'.format(i + 1))
                break

# -----------
# Train 1
# -----------
# data generation
half_len = 500
pos_ex = [1, 2, 3, 4, 5] # Positive sequence.
neg_ex = [1, 2, 3, 4, 6] # Negative sequence.



data = np.concatenate((np.stack([pos_ex]*half_len), np.stack([neg_ex]*half_len)), axis=0)
labels = np.asarray([0, 1] * half_len + [1, 0] * half_len).reshape((2 * half_len, -1))

train(data,labels)

# -----------
# Train 2
# -----------
# data generation
half_len = 500
pos_ex = [1, 2, 3, 4, 5] # Positive sequence.
neg_ex = [1, 2, 3, 4, 100] # Negative sequence.


data = np.concatenate((np.stack([pos_ex]*half_len), np.stack([neg_ex]*half_len)), axis=0)
labels = np.asarray([0, 1] * half_len + [1, 0] * half_len).reshape((2 * half_len, -1))

train(data,labels)

The output is:

Iteration 0 accuracy: 0.5
Iteration 1 accuracy: 0.5
Iteration 2 accuracy: 0.5
Iteration 3 accuracy: 0.5
Iteration 4 accuracy: 0.5
Iteration 5 accuracy: 0.5
Iteration 6 accuracy: 0.5
Iteration 7 accuracy: 0.5
Iteration 8 accuracy: 0.5
Iteration 9 accuracy: 0.5
Iteration 10 accuracy: 1.0
---> Finished after 11 iterations

Iteration 0 accuracy: 0.5
Iteration 1 accuracy: 0.5
Iteration 2 accuracy: 0.5
Iteration 3 accuracy: 0.5
Iteration 4 accuracy: 0.5
Iteration 5 accuracy: 0.5
Iteration 6 accuracy: 0.5
Iteration 7 accuracy: 0.5
Iteration 8 accuracy: 0.5
Iteration 9 accuracy: 1.0
---> Finished after 10 iterations

Good luck!

1 vote
The original page content is provided by Stack Overflow. Original link:

https://stackoverflow.com/questions/49495364
