I am trying to write a Keras model (using the TensorFlow backend) that uses an LSTM to predict labels for sequences, the way you would in a part-of-speech tagging task. The model I have written returns nan as the loss for every training epoch and for all label predictions. I suspect my model is configured incorrectly, but I cannot figure out what I am doing wrong.
The complete program is below.
from random import shuffle, sample
from typing import Tuple, Callable

from numpy import arange, zeros, array, argmax, newaxis


def sequence_to_sequence_model(time_steps: int, labels: int, units: int = 16):
    from keras import Sequential
    from keras.layers import LSTM, TimeDistributed, Dense

    model = Sequential()
    model.add(LSTM(units=units, input_shape=(time_steps, 1), return_sequences=True))
    model.add(TimeDistributed(Dense(labels)))
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    return model
def labeled_sequences(n: int, sequence_sampler: Callable[[], Tuple[array, array]]) -> Tuple[array, array]:
    """
    Create training data for a sequence-to-sequence labeling model.

    The features are an array of size samples * time steps * 1.
    The labels are a one-hot encoding of time step labels of size samples * time steps * number of labels.

    :param n: number of sequence pairs to generate
    :param sequence_sampler: a function that returns two numeric sequences of equal length
    :return: feature and label sequences
    """
    from keras.utils import to_categorical
    xs, ys = sequence_sampler()
    assert len(xs) == len(ys)
    x = zeros((n, len(xs)), int)
    y = zeros((n, len(ys)), int)
    for i in range(n):
        xs, ys = sequence_sampler()
        x[i] = xs
        y[i] = ys
    x = x[:, :, newaxis]
    y = to_categorical(y)
    return x, y
def digits_with_repetition_labels() -> Tuple[array, array]:
    """
    Return a random list of 10 digits from 0 to 9. Two of the digits will be repeated. The rest will be unique.
    Along with this list, return a list of 10 labels, where the label is 0 if the corresponding digit is unique
    and 1 if it is repeated.

    :return: digits and labels
    """
    n = 10
    xs = arange(n)
    ys = zeros(n, int)
    shuffle(xs)
    i, j = sample(range(n), 2)
    xs[j] = xs[i]
    ys[i] = ys[j] = 1
    return xs, ys
def main():
    # Train
    x, y = labeled_sequences(1000, digits_with_repetition_labels)
    model = sequence_to_sequence_model(x.shape[1], y.shape[2])
    model.summary()
    model.fit(x, y, epochs=20, verbose=2)
    # Test
    x, y = labeled_sequences(5, digits_with_repetition_labels)
    y_ = model.predict(x, verbose=0)
    x = x[:, :, 0]
    for i in range(x.shape[0]):
        print(' '.join(str(n) for n in x[i]))
        print(' '.join([' ', '*'][int(argmax(n))] for n in y[i]))
        print(y_[i])


if __name__ == '__main__':
    main()

My feature sequences are arrays of 10 digits from 0 to 9. The corresponding label sequences are arrays of 10 zeros and ones, where 0 indicates a unique digit and 1 indicates a repeated digit. (The idea is to create a simple classification task with long-distance dependencies.)
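As a quick sanity check of the generator (not part of the original program; it assumes the functions above are already defined in the session, and the exact digits vary per run):

xs, ys = digits_with_repetition_labels()
print(xs)  # e.g. [3 1 4 1 5 9 2 6 8 7] -- the digit 1 appears twice
print(ys)  # e.g. [0 1 0 1 0 0 0 0 0 0] -- 1 marks the repeated positions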
Training looks like this:
Epoch 1/20
- 1s - loss: nan
Epoch 2/20
- 0s - loss: nan
Epoch 3/20
- 0s - loss: nan

All of the label array predictions look like this:
[[nan nan]
 [nan nan]
 [nan nan]
 [nan nan]
 [nan nan]
 [nan nan]
 [nan nan]
 [nan nan]
 [nan nan]
 [nan nan]]

So clearly something is wrong.
The feature matrix passed to model.fit has dimensions samples × time steps × 1. The label matrix has dimensions samples × time steps × 2, where the 2 comes from the one-hot encoding of the labels 0 and 1.
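A quick way to confirm these shapes (an illustrative check, assuming the functions above are in scope, rather than part of the original program):

x, y = labeled_sequences(1000, digits_with_repetition_labels)
print(x.shape)  # (1000, 10, 1): samples x time steps x 1
print(y.shape)  # (1000, 10, 2): samples x time steps x number of labels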
I am predicting the sequences with a time-distributed dense layer, following the Keras documentation and posts like this and this. As far as I can tell, the model topology defined in sequence_to_sequence_model above is correct. The model summary looks like this:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_1 (LSTM)                (None, 10, 16)            1152
_________________________________________________________________
time_distributed_1 (TimeDist (None, 10, 2)             34
=================================================================
Total params: 1,186
Trainable params: 1,186
Non-trainable params: 0
_________________________________________________________________

Stack Overflow questions like this one make it sound like nan results are an indicator of numerical problems: runaway gradients and the like. However, since I am working with a tiny data set and every number my model returns is a nan, I suspect I am not seeing a numerical problem but rather a problem with how I have constructed the model.
Does the code above have the correct model/data shapes for sequence-to-sequence learning? If so, why do I get nan everywhere?
Posted on 2019-01-09 08:17:35
By default, the Dense layer has no activation. If you specify one, the nans go away. Change the following line in the code above:
model.add(TimeDistributed(Dense(labels, activation='softmax')))
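The nans most likely appear because categorical_crossentropy expects each time step's output to be a probability distribution, while an unactivated (linear) Dense layer can emit arbitrary values, including negative ones, so the loss ends up taking logarithms of invalid probabilities. For completeness, here is the model-building function with the fix applied (a sketch of the corrected version, identical to the original except for the activation):

def sequence_to_sequence_model(time_steps: int, labels: int, units: int = 16):
    from keras import Sequential
    from keras.layers import LSTM, TimeDistributed, Dense

    model = Sequential()
    model.add(LSTM(units=units, input_shape=(time_steps, 1), return_sequences=True))
    # softmax normalizes each time step's outputs into a probability
    # distribution over the labels, which categorical_crossentropy expects
    model.add(TimeDistributed(Dense(labels, activation='softmax')))
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    return model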
https://stackoverflow.com/questions/54101286