I don't know why I am getting results like these.
Epoch 3/10 2937/2937 [==============================] - 12s 4ms/step -
loss: 0.2836 - acc: 0.4679 - val_loss: 0.1937 - val_acc: 0.1980
Epoch 4/10 2937/2937 [==============================] - 12s 4ms/step -
loss: 0.1355 - acc: 0.4679 - val_loss: 0.0866 - val_acc: 0.1980
Epoch 5/10 2937/2937 [==============================] - 13s 4ms/step -
loss: 0.0580 - acc: 0.4679 - val_loss: 0.0342 - val_acc: 0.1980
Epoch 6/10 2937/2937 [==============================] - 13s 4ms/step -
loss: 0.0223 - acc: 0.4679 - val_loss: 0.0120 - val_acc: 0.1980
Epoch 7/10 2937/2937 [==============================] - 14s 5ms/step -
loss: 0.0082 - acc: 0.4679 - val_loss: 0.0040 - val_acc: 0.1980

My training and label sets are arrays of floats in the range -0.05 to 0.05, and I am using a Keras Sequential model with an LSTM layer. Why is this happening? Previously I ran into the opposite problem here: "loss/val_loss are decreasing but accuracy stays the same in an LSTM!", but I could not make sense of this one either.
Edit: I changed my code from:

model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['accuracy'])

to:

model.compile(optimizer = 'adam', loss = 'mean_absolute_error', metrics=['accuracy'])

but the results were the same.
Then I changed that same line to:

model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['mean_squared_error'])

but that did not work either; the results were as follows:
Train on 2937 samples, validate on 735 samples Epoch 1/10 2937/2937
[==============================] - 90s 31ms/step - loss: 1.6645 -
mean_squared_error: 0.0019 - val_loss: 0.7620 -
val_mean_squared_error: 0.0010
Epoch 2/10 2937/2937 [==============================] - 13s 4ms/step -
loss: 0.5503 - mean_squared_error: 0.0019 - val_loss: 0.3890 -
val_mean_squared_error: 0.0010
Epoch 3/10 2937/2937 [==============================] - 13s 4ms/step -
loss: 0.2837 - mean_squared_error: 0.0019 - val_loss: 0.1938 -
val_mean_squared_error: 0.0010
Epoch 4/10 2937/2937 [==============================] - 13s 4ms/step -
loss: 0.1355 - mean_squared_error: 0.0019 - val_loss: 0.0866 -
val_mean_squared_error: 0.0010
Epoch 5/10 2937/2937 [==============================] - 13s 4ms/step -
loss: 0.0580 - mean_squared_error: 0.0019 - val_loss: 0.0342 -
val_mean_squared_error: 0.0010
Epoch 6/10 2937/2937 [==============================] - 13s 4ms/step -
loss: 0.0223 - mean_squared_error: 0.0019 - val_loss: 0.0120 -
val_mean_squared_error: 0.0010
Epoch 7/10 2937/2937 [==============================] - 13s 5ms/step -
loss: 0.0082 - mean_squared_error: 0.0019 - val_loss: 0.0040 -
val_mean_squared_error: 0.0010
Epoch 8/10 2937/2937 [==============================] - 14s 5ms/step -
loss: 0.0035 - mean_squared_error: 0.0019 - val_loss: 0.0017 -
val_mean_squared_error: 0.0010
Epoch 9/10 2937/2937 [==============================] - 13s 5ms/step -
loss: 0.0022 - mean_squared_error: 0.0019 - val_loss: 0.0011 -
val_mean_squared_error: 0.0010
Epoch 10/10 2937/2937 [==============================] - 13s 5ms/step
- loss: 0.0019 - mean_squared_error: 0.0019 - val_loss: 0.0010 - val_mean_squared_error: 0.0010

Posted on 2019-04-13 15:01:54
In your code:

model.compile(optimizer = 'adam', loss = 'mean_absolute_error', metrics=['accuracy'])

you are using accuracy as your metric (metrics=['accuracy']). That does not work when you are doing regression, which is what you are doing here: regression means your labels are continuous (floating-point) values.
So your code should look like this:

model.compile(optimizer = 'adam', loss = 'mean_squared_error', metrics=['mean_absolute_error'])

Then, while training, watch for both the loss and your metric to decrease. If they do, your network is behaving normally.
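To make that concrete, here is a minimal self-contained sketch of the suggested setup. The model architecture, shapes, and random data are assumptions for illustration only (the original post does not show the model definition); the key line is the compile call pairing a regression loss with a regression metric:

```python
import numpy as np
from tensorflow import keras

# Hypothetical data mirroring the question: float targets in [-0.05, 0.05].
x = np.random.uniform(-0.05, 0.05, size=(64, 10, 1))  # (samples, timesteps, features)
y = np.random.uniform(-0.05, 0.05, size=(64, 1))

# Assumed tiny LSTM regressor; the real model in the question is unknown.
model = keras.Sequential([
    keras.Input(shape=(10, 1)),
    keras.layers.LSTM(8),
    keras.layers.Dense(1),
])

# For regression, use a regression loss with a regression metric,
# not 'accuracy'.
model.compile(optimizer='adam', loss='mean_squared_error',
              metrics=['mean_absolute_error'])
history = model.fit(x, y, epochs=2, verbose=0)
```

With this compile call, the training log reports `mean_absolute_error` instead of a (meaningless) accuracy, and both numbers should trend downward if the network is learning.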
Posted on 2019-04-14 01:28:53
Accuracy is a measure of classification performance.

Mean absolute error and mean squared error are measures of regression performance.

Given that the values you are predicting lie in the range -0.05 to 0.05, you are doing regression. Accuracy is a meaningless measure for regression and should be ignored.
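A small plain-NumPy illustration (not from the original post) of why accuracy is uninformative here: an exact-match notion of accuracy almost never registers a hit on continuous float targets, even when the predictions are nearly perfect, while MAE reflects the real quality:

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.uniform(-0.05, 0.05, size=1000)
# Simulate a very good regressor: predictions off by ~0.001 on average.
y_pred = y_true + rng.normal(0.0, 1e-3, size=1000)

# "Accuracy" as fraction of exact matches: essentially always zero
# for continuous targets, regardless of prediction quality.
accuracy = np.mean(y_true == y_pred)

# Mean absolute error: small, and it shrinks as predictions improve.
mae = np.mean(np.abs(y_true - y_pred))
```

Here `accuracy` comes out as 0.0 even though every prediction is within a few thousandths of the truth, while `mae` is tiny, which is exactly why a regression metric is the right thing to monitor.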
https://datascience.stackexchange.com/questions/49233