I am trying to train a neural network with the Encog library.
The dataset (~7000 examples), before being split into training (60%), cross-validation (20%), and test (20%) sets, is linearly normalized, so it looks like this:
Min=-1.000000; Max=1.000000; Average=-0.077008
The target (ideal) dataset (also linearly normalized) looks like this:
Min=0.201540; Max=0.791528; Average=0.477080
I initialize the network as:
mNetwork = new BasicNetwork();
mNetwork.addLayer(new BasicLayer(null, false, trainingDataSet.getInputSize()));
mNetwork.addLayer(new BasicLayer(new ActivationSigmoid(), true, numberOfNeurons));
mNetwork.addLayer(new BasicLayer(new ActivationSigmoid(), false, trainingDataSet.getIdealSize()));
mNetwork.getStructure().finalizeStructure();
mNetwork.reset();

I use the ResilientPropagation trainer (I have also tried Backpropagation):
ResilientPropagation training = new ResilientPropagation(mNetwork, mTrainingDataSet);
for (int i = 0; i < mNumberOfIterations; ++i) {
    training.iteration();
    result.trainingErrors[i] = mNetwork.calculateError(mTrainingDataSet);
    result.validationErrors[i] = mNetwork.calculateError(mValidationDataSet);
    System.out.println(String.format("Iteration #%d: error=%.8f", i, training.getError()));
}
training.finishTraining();

During training, the error reported by the trainer generally decreases. After training finishes, I dump the weights:
0.04274211002929323,-0.5481902707068103,0.28978635361541294,-0.203635994176051,22965.18656660482,22964.992410871928,22966.23882308963,22966.355722230965,22965.036733143017,22964.894030965166,22966.002332259202,22965.177650526788,22966.009842504238,22965.971560546248,22966.257180159628,22966.234150681423,-21348.311232865744,-21640.843082085466,-21057.13217475862,-21347.52051343582,-21347.988714647887,-21641.161098510198,-21057.27275747668,-21348.784123049118,-21347.719149090022,-21639.773689115867,-21057.095487328377,-21348.269878600076,22800.304816865206,23090.894751729396,22799.39388588725,22799.72408290791,22800.249806096508,22799.19823789763,22799.85510732227,22799.99965531053,22799.574773588192,22799.57945236908,22799.12542315293,22799.523065957797

They are typically either very large or very small. With sigmoid activations, the final predictions all converge to a single number; for example, the weights above (after 500 iterations) give me:
Min=0.532179; Max=0.532179; Average=0.532179

Something seems wrong with the network or the training configuration. If my network suffered from low variance, it would at least produce results within the target range. If it suffered from high variance, it would match the targets. As it is, it misses the targets entirely.
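For what it's worth, weights of this magnitude would pin every sigmoid unit at 0 or 1 regardless of the input, which matches the constant output. A quick standalone check (plain Java, with a hypothetical weight taken from the magnitude of the dump above):

```java
// Demonstrates sigmoid saturation: with a weight in the tens of thousands,
// the unit's output is effectively constant for any input in [-1, 1].
public class SigmoidSaturation {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public static void main(String[] args) {
        double weight = 22965.0; // hypothetical, same magnitude as the dumped weights
        for (double input : new double[] {-1.0, -0.5, 0.5, 1.0}) {
            // Output is pinned to 0.0 or 1.0 for every input in [-1, 1].
            System.out.println(sigmoid(weight * input));
        }
    }
}
```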
Why does the error decrease to fairly low values even though the predictions are so far off? Does anyone see an obvious mistake in the example above? I am still quite new to neural networks.
Posted on 2014-11-22 21:07:49
It seems to me that the problem is that you normalize between -1 and 1 while using the sigmoid activation function, which works with numbers between 0 and 1. I suggest you normalize between 0.1 and 0.9 instead, or retry with the tanh activation function.
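The suggested re-normalization is a plain linear rescale. A minimal sketch (plain Java rather than Encog's normalization helpers; the min/max values are hypothetical and would come from your training set):

```java
// Linear min-max rescaling into [0.1, 0.9], keeping values inside the
// sigmoid's (0, 1) output range with headroom at both ends.
public class MinMaxRescale {
    static double rescale(double x, double dataMin, double dataMax,
                          double targetMin, double targetMax) {
        return targetMin + (x - dataMin) * (targetMax - targetMin) / (dataMax - dataMin);
    }

    public static void main(String[] args) {
        // Hypothetical data range matching the question's normalized inputs.
        double dataMin = -1.0, dataMax = 1.0;
        System.out.println(rescale(-1.0, dataMin, dataMax, 0.1, 0.9)); // lower end: 0.1
        System.out.println(rescale( 0.0, dataMin, dataMax, 0.1, 0.9)); // midpoint: ~0.5
        System.out.println(rescale( 1.0, dataMin, dataMax, 0.1, 0.9)); // upper end: ~0.9
    }
}
```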
I would also use k-fold cross-validation (see here: http://www.heatonresearch.com/node/2000 ).
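The fold bookkeeping itself is simple to sketch without any library support (plain Java; the index arithmetic below is illustrative, and Encog also ships its own folded-dataset utilities):

```java
import java.util.ArrayList;
import java.util.List;

// Splits n sample indices into k contiguous folds; each cross-validation
// pass trains on k-1 folds and validates on the held-out one.
public class KFoldSplit {
    static List<int[]> folds(int n, int k) {
        List<int[]> result = new ArrayList<>();
        for (int f = 0; f < k; f++) {
            int start = f * n / k;       // fold boundaries, balanced within 1 element
            int end = (f + 1) * n / k;
            int[] fold = new int[end - start];
            for (int i = start; i < end; i++) {
                fold[i - start] = i;
            }
            result.add(fold);
        }
        return result;
    }

    public static void main(String[] args) {
        // 10 samples into 3 folds gives fold sizes 3, 3, 4.
        for (int[] fold : folds(10, 3)) {
            System.out.println(fold.length);
        }
    }
}
```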
Vincenzo
https://stackoverflow.com/questions/25820399