I am working on a time-series seq2seq problem. For my approach I use an LSTM seq2seq RNN with teacher forcing. As you probably know, for this kind of task one trains a model and then, reusing the trained layers, builds an inference model that actually performs the task (i.e., shared layers).
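For context, teacher forcing means the decoder receives the ground-truth target from the previous timestep as its input at every step. A minimal sketch of how such decoder inputs could be prepared (the names y_train and decoder_input_data are illustrative, not my actual code; the -1 start token matches the evaluation snippet further below):

import numpy as np

# Hypothetical teacher-forcing data prep (illustrative names).
# The decoder input at step t is the target value from step t-1,
# with -1 as a start-of-sequence token at t = 0.
start_token = -1 * np.ones((len(y_train), 1, 1))
decoder_input_data = np.concatenate(
    [start_token, y_train[:, :-1, :1]], axis=1)  # shape (N, Ty, 1)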
Here is the code where I define the shared layers:
# Imports assumed by the snippets below (tf.keras works equivalently)
from keras.layers import Input, LSTM, Dense, Reshape
from keras.models import Model
from keras import backend as K
import numpy as np

# Define the shared layers for the train and inference models
encoder_lstm = LSTM(latent_dim, return_state=True, name='encoder_lstm')
decoder_lstm = LSTM(latent_dim, return_sequences=True,
                    return_state=True, name='decoder_lstm')
decoder_dense = Dense(decoder_output_dim,
                      activation='linear', name='decoder_dense')
decoder_reshape = Reshape((decoder_output_dim, ), name='decoder_reshape')

Next, I defined the training model using the shared layers:
# Define an input for the encoder
encoder_inputs = Input(shape=(Tx, encoder_input_dim), name='encoder_input')
# We discard output and keep the states only.
_, h, c = encoder_lstm(encoder_inputs)
# Define an input for the decoder
decoder_inputs = Input(shape=(Ty, decoder_input_dim), name='decoder_input')
# Obtain all the outputs from the decoder (return_sequences = True)
decoder_outputs, _, _ = decoder_lstm(decoder_inputs, initial_state=[h, c])
# Apply dense layer to each output
decoder_outputs = decoder_dense(decoder_outputs)
train_model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=decoder_outputs)

Fair enough. At this point I am using a custom loss function, which is basically mean squared error, but with certain entries masked out.
def masked_mse(y_true, y_pred):
    # Channel 0 of y_true holds the target values; channel 1 holds a
    # mask (1 = exclude this entry from the loss, 0 = include it).
    return K.mean(
        K.mean(((y_true[:, :, 0] - y_pred[:, :, 0]) ** 2) * (1 - y_true[:, :, 1]),
               axis=0),
        axis=0)

After training for a few epochs, the output looks like this:
Train on 67397 samples, validate on 3389 samples
Epoch 1/10
67397/67397 [==============================] - 36s 536us/sample - loss: 0.1981 - val_loss: 0.0713
Epoch 2/10
67397/67397 [==============================] - 34s 499us/sample - loss: 0.0755 - val_loss: 0.0535
Epoch 3/10
67397/67397 [==============================] - 31s 456us/sample - loss: 0.0633 - val_loss: 0.0494
Epoch 4/10
67397/67397 [==============================] - 29s 429us/sample - loss: 0.0595 - val_loss: 0.0478

Note that the loss on the validation set settles at around 0.045.
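The compile and fit calls are omitted above; a sketch of what they could look like (the optimizer, batch size, and the names X_train, decoder_input_data, decoder_input_valid are assumptions, not shown here):

# Hypothetical compile/fit call (names and hyperparameters assumed).
train_model.compile(optimizer='adam', loss=masked_mse)
train_model.fit([X_train, decoder_input_data], y_train,
                validation_data=([X_valid, decoder_input_valid], y_valid),
                epochs=10, batch_size=128)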
Now I create the inference model, built from the shared layers above:
# Define an input for the encoder
encoder_inputs = Input(shape=(Tx, encoder_input_dim), name='encoder_input')
# We discard output and keep the states only.
_, h, c = encoder_lstm(encoder_inputs)
# Define an input for the decoder
decoder_input = Input(shape=(1, decoder_input_dim), name='decoder_input')
current_input = decoder_input
# Obtain the outputs for each of the Ty timesteps
decoder_outputs = []
for _ in range(Ty):
    # apply a single step of recurrence
    out, h, c = decoder_lstm(current_input, initial_state=[h, c])
    # pass the LSTM output through a dense layer
    out = decoder_dense(out)
    # the input for the next timestep (its shape is (?, 1, 1))
    current_input = out
    # reshape the decoder output to (?, 1) for convenience
    out = decoder_reshape(out)
    # append the output to the model's outputs
    decoder_outputs.append(out)
inference_model = Model(inputs=[encoder_inputs, decoder_input], outputs=decoder_outputs)

Using this inference model, I tried to evaluate it on the same validation set that was used during training, in order to reproduce the results above:
# The decoder input for the first timestep is -1
# (consistent with what was used during training)
decoder_input = -1 * np.ones((len(X_valid), 1, 1))
# Obtain the predictions; the resulting shape is (Ty, ?, 1)
y_pred = np.array(inference_model.predict([X_valid, decoder_input]))
# Reshape the output to (?, Ty, 1)
y_pred = np.swapaxes(y_pred, axis1=0, axis2=1)
loss = masked_mse(K.constant(y_valid), K.constant(y_pred))
K.eval(loss)

The loss evaluates to 0.1637. With further training it never dropped below 0.14.
This is very strange, since I am evaluating on the same validation set. I suspect the error lies somewhere in how the inference model is built, but I am not sure.

What are your thoughts?
Posted on 2020-04-17 20:37:07
If your inference model does not differ at all from the trained model, there is no need to copy anything: you can simply call train_model.predict(...) on the existing model.

Copying layers matters when you want to run several training phases (as in transfer learning), but it is not required just to run inference with a trained model.
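To illustrate the first point, a minimal sketch (decoder_input_valid is assumed to be the same teacher-forced decoder inputs that were fed to the validation split during training):

# Evaluate with the trained model directly, still under teacher forcing.
# decoder_input_valid is an assumption: the teacher-forced decoder
# inputs used for the validation split during training.
y_pred = train_model.predict([X_valid, decoder_input_valid])
loss = K.eval(masked_mse(K.constant(y_valid), K.constant(y_pred)))
# This should come out close to the val_loss reported during training.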
But coming back to your custom loop: your LSTM recurrence should happen before the Dense layer is applied.
decoder_outputs = []
for _ in range(Ty):
    out, h, c = decoder_lstm(current_input, initial_state=[h, c])
    # This line moved to before the decoder_dense call.
    current_input = out
    out = decoder_dense(out)
    out = decoder_reshape(out)
    decoder_outputs.append(out)

https://stackoverflow.com/questions/61273702