I have a multi-layer LSTM model. My problem is that the first layer has an output_shape that differs from the input shape (a different number of features). Because of this I cannot fit the model; an error is thrown. Can you explain why this happens? Any solution would be greatly appreciated.
trainingModel = keras.Sequential()
print('training_batch_size : ', training_batch_size,
      'DataX.shape[1] : ', trainingDataX.shape[1],
      'DataX.shape[2] : ', trainingDataX.shape[2])
trainingModel.add(keras.layers.LSTM(numberOfNeurons,
                                    batch_input_shape=(training_batch_size, trainingDataX.shape[1], trainingDataX.shape[2]),
                                    return_sequences=True,
                                    stateful=True,
                                    dropout=keyDropOut))
for idx in range(numberOfLSTMLayers - 1):
    trainingModel.add(keras.layers.LSTM(numberOfNeurons,
                                        return_sequences=True,
                                        dropout=keyDropOut * (idx + 1)))
trainingModel.compile(optimizer='adam', loss='mean_squared_error')  # ,metrics=['accuracy']
# Model Layer Shapes ========================
for layer in trainingModel.layers:
    print('Input shape', layer.input_shape)
    print('Output shape', layer.output_shape)
Output
===============
training_batch_size : 96 trainingDataX.shape[1] : 10 trainingDataX.shape[2] : 4
Model Layer Shapes
Input shape (96, 10, 4)
Output shape (96, 10, 5) *<<<THIS IS MY PROBLEM
Input shape (96, 10, 5)
Output shape (96, 10, 5)
Input shape (96, 10, 5)
Output shape (96, 10, 5)
Finally, when I fit the model, it throws an error like:
ValueError: A target array with shape (2880, 10, 4) was passed for an output of shape (96, 10, 5) while using as loss `mean_squared_error`. This loss expects targets to have the same shape as the output
Posted on 2019-06-01 21:06:20
Answering my own question, I solved it: the number of LSTM units must equal the number of features in the target. Since numberOfNeurons is passed to every layer, the final layer's output must have the same feature count as the target; that is, the layers should have only 4 units, whereas I was using 5.
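A minimal sketch of that fix, assuming TensorFlow's Keras and using illustrative variable names (batch_size, timesteps, num_features) in place of the original ones:

```python
import numpy as np
from tensorflow import keras

batch_size, timesteps, num_features = 96, 10, 4

# The final LSTM layer's unit count must match the number of target
# features (4 here); with return_sequences=True, the model's output is
# (batch_size, timesteps, num_features), the same shape as the targets.
model = keras.Sequential([
    keras.Input(batch_shape=(batch_size, timesteps, num_features)),
    keras.layers.LSTM(num_features, return_sequences=True, stateful=True),
    keras.layers.LSTM(num_features, return_sequences=True),
])
model.compile(optimizer='adam', loss='mean_squared_error')

# Dummy input just to confirm the output shape now matches the target.
x = np.zeros((batch_size, timesteps, num_features), dtype="float32")
y = model.predict(x, batch_size=batch_size)
print(y.shape)  # (96, 10, 4)
```

With the output's last dimension equal to 4, mean_squared_error no longer complains about a target/output shape mismatch.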
https://stackoverflow.com/questions/56403540