I am trying to use an LSTM network with stateful=True, as follows:
import numpy as np, pandas as pd, matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.callbacks import LambdaCallback
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
raw = np.sin(2*np.pi*np.arange(1024)/float(1024/2))
data = pd.DataFrame(raw)
window_size = 3
data_s = data.copy()
for i in range(window_size):
    data = pd.concat([data, data_s.shift(-(i+1))], axis = 1)
data.dropna(axis=0, inplace=True)
print (data)
ds = data.values
n_rows = ds.shape[0]
ts = int(n_rows * 0.8)
train_data = ds[:ts,:]
test_data = ds[ts:,:]
train_X = train_data[:,:-1]
train_y = train_data[:,-1]
test_X = test_data[:,:-1]
test_y = test_data[:,-1]
print (train_X.shape)
print (train_y.shape)
print (test_X.shape)
print (test_y.shape)

(816, 3) (816,) (205, 3) (205,)
batch_size = 3   # note: used below as the number of timesteps per sample (the window length), not a training batch size
n_feats = 1
train_X = train_X.reshape(train_X.shape[0], batch_size, n_feats)
test_X = test_X.reshape(test_X.shape[0], batch_size, n_feats)
print(train_X.shape, train_y.shape)
regressor = Sequential()
regressor.add(LSTM(units = 64, batch_input_shape=(train_X.shape[0], batch_size, n_feats),
                   activation = 'sigmoid',
                   stateful=True, return_sequences=True))
regressor.add(Dense(units = 1))
regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
resetCallback = LambdaCallback(on_epoch_begin=lambda epoch,logs: regressor.reset_states())
regressor.fit(train_X, train_y, batch_size=7, epochs = 1, callbacks=[resetCallback])
previous_inputs = test_X
regressor.reset_states()
previous_predictions = regressor.predict(previous_inputs).reshape(-1)
test_y = test_y.reshape(-1)
plt.plot(test_y, color = 'blue')
plt.plot(previous_predictions, color = 'red')
plt.show()

However, I get:
ValueError: Error when checking target: expected dense_1 to have 3 dimensions, but got array with shape (816, 1)

Posted on 2018-04-15 21:49:31
Two small bugs:
Here you have

regressor.add(LSTM(units = 64, batch_input_shape=(train_X.shape[0], batch_size, n_feats),
                   activation = 'sigmoid',
                   stateful=True, return_sequences=True))

With return_sequences=True this LSTM returns a 3D tensor (one output per timestep), but your y is 2D, which throws the ValueError. You can fix this with return_sequences=False. I also don't see why you put train_X.shape[0] in batch_input_shape; the number of samples in the whole set should not determine the size of each batch:
regressor.add(LSTM(units = 64, batch_input_shape=(1, batch_size, n_feats),
                   activation = 'sigmoid',
                   stateful=True, return_sequences=False))

After this you have
regressor.fit(train_X, train_y, batch_size=7, epochs = 1, callbacks=[resetCallback])

In a stateful network you can only feed a number of samples that is divisible by the batch size. Since 7 does not divide 816, we change it to 1:

regressor.fit(train_X, train_y, batch_size=1, epochs = 1, callbacks=[resetCallback])

The same applies to your predictions; you must specify batch_size=1:

previous_predictions = regressor.predict(previous_inputs, batch_size=1).reshape(-1)

https://stackoverflow.com/questions/49842207
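To see which batch sizes would have worked, a quick plain-Python check (numbers taken from the post):

```python
n_train = 816   # training samples in the post

# A stateful model only accepts batch sizes that divide the sample count
# exactly; otherwise the final, smaller batch breaks the fixed batch shape.
valid_batch_sizes = [b for b in range(1, n_train + 1) if n_train % b == 0]

print(valid_batch_sizes)
```

7 is not among them, which is why batch_size=7 fails while batch_size=1 always works (1 divides any sample count).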