I'm experimenting with Keras, and I built a simple CNN to distinguish blue images (300x300 images of one uniform blue) from red images (same size, just red). This is a dummy problem that I assumed the network would solve immediately, but that doesn't seem to be the case: even after 20+ epochs, the accuracy sits at exactly 50%.
I realize there are plenty of things I could try doing differently, but is there anything specifically wrong with what I'm doing here that would cause such poor performance on such a simple task?
# Create a Keras model.
model = keras.Sequential()
model.add(
    keras.layers.Conv2D(
        input_shape=(300, 300, 3),
        filters=64,
        kernel_size=(3, 3),
        activation='relu',
    )
)
model.add(
    keras.layers.Conv2D(
        filters=64,
        kernel_size=(3, 3),
        activation='relu',
    )
)
model.add(
    keras.layers.MaxPooling2D(
        pool_size=(2, 2),
        strides=(2, 2),
    )
)
model.add(keras.layers.Flatten())
model.add(
    keras.layers.Dense(
        units=128,
        activation='relu'
    )
)
model.add(keras.layers.Dropout(0.5))
model.add(
    keras.layers.Dense(
        units=1,
        activation='sigmoid',
    )
)
# Train the model.
sgd = keras.optimizers.SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(
    optimizer=sgd,
    loss='mean_squared_error',
    metrics=['binary_accuracy']
)
model.fit(images, labels, epochs=20, batch_size=5)

Which outputs:

Epoch 1/20
20/20 [==============================] - 18s 887ms/step - loss: 0.5954 - binary_accuracy: 0.4000
Epoch 2/20
20/20 [==============================] - 16s 794ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 3/20
20/20 [==============================] - 16s 781ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 4/20
20/20 [==============================] - 17s 853ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 5/20
20/20 [==============================] - 18s 877ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 6/20
20/20 [==============================] - 18s 891ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 7/20
20/20 [==============================] - 17s 825ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 8/20
20/20 [==============================] - 17s 861ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 9/20
20/20 [==============================] - 17s 846ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 10/20
20/20 [==============================] - 17s 835ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 11/20
20/20 [==============================] - 16s 800ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 12/20
20/20 [==============================] - 16s 806ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 13/20
20/20 [==============================] - 16s 811ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 14/20
20/20 [==============================] - 17s 827ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 15/20
20/20 [==============================] - 16s 806ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 16/20
20/20 [==============================] - 16s 786ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 17/20
20/20 [==============================] - 16s 795ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 18/20
20/20 [==============================] - 16s 796ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 19/20
20/20 [==============================] - 16s 788ms/step - loss: 0.5000 - binary_accuracy: 0.5000
Epoch 20/20
20/20 [==============================] - 16s 794ms/step - loss: 0.5000 - binary_accuracy: 0.5000

Posted 2018-07-20 19:51:37
If you remember the theory, the core of a CNN is essentially built on edge detection. Now, your images are pure blue or pure red, with no edges at all. I suspect this is the main problem here, because I believe ML libraries initialize kernels in a way suited to edge detection. To be more concrete: on a constant image, the convolution outputs are (sum of kernel * 255) for the red images and (sum of kernel * 255) for the blue ones, and those sums will be roughly equal. Since you are also using an inappropriate loss function, the CNN becomes insensitive to the small differences between the kernel sums. I would suggest:
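The claim above can be checked directly: convolving a constant image with any kernel yields the same value at every output position, namely pixel_value times the sum of the kernel weights over the active channel, so there is no spatial structure for the network to latch onto. A minimal NumPy sketch (illustrative, not the Keras internals; the kernel values are just a small random init):

```python
import numpy as np

# A small random 3x3 kernel over 3 channels, mimicking a typical init scale.
rng = np.random.default_rng(0)
kernel = rng.normal(0.0, 0.05, size=(3, 3, 3))

red = np.zeros((5, 5, 3))
red[..., 0] = 255.0            # uniform red patch
blue = np.zeros((5, 5, 3))
blue[..., 2] = 255.0           # uniform blue patch

def conv_at(img, k, i=0, j=0):
    """One 'valid' convolution position; on a constant image all positions match."""
    return float(np.sum(img[i:i + 3, j:j + 3, :] * k))

red_out = conv_at(red, kernel)        # equals 255 * kernel[..., 0].sum()
blue_out = conv_at(blue, kernel)      # equals 255 * kernel[..., 2].sum()
shifted = conv_at(red, kernel, 1, 1)  # identical at every position: no edges
```

Every output position carries the same single number per image, so the only signal left is the difference between the two per-channel kernel sums, which a poorly chosen loss can easily fail to amplify.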
Posted 2018-07-20 19:48:55
A couple of things:
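A likely culprit in this setup is pairing mean_squared_error with a sigmoid output: once the sigmoid saturates, the MSE gradient through it collapses, whereas binary cross-entropy keeps a strong learning signal. A quick NumPy sketch of the standard derivatives (illustrative values, not taken from the answer above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A confidently wrong prediction: true label y = 1, logit z = -6, so p is near 0.
y, z = 1.0, -6.0
p = sigmoid(z)

# d(MSE)/dz = 2*(p - y) * p*(1 - p): the sigmoid factor p*(1 - p) crushes the
# gradient to near zero exactly when the prediction is badly wrong.
grad_mse = 2.0 * (p - y) * p * (1.0 - p)

# d(BCE)/dz = p - y: stays close to -1 for the same wrong prediction.
grad_bce = p - y
```

In practice, the usual remedies for this question's symptoms are to scale the pixel values to [0, 1], compile with loss='binary_crossentropy', and use a smaller learning rate than 0.1.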
https://datascience.stackexchange.com/questions/35809