I tried to implement this by following along with a tutorial, but I keep getting a dimensionality error on the LSTM layer:
ValueError: Input 0 of layer LSTM is incompatible with the layer: expected ndim=3, found ndim=2.
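(For context: a Keras LSTM expects 3-D input of shape (batch, timesteps, features), whereas DenseFeatures emits a 2-D (batch, features) tensor, which is consistent with the "expected ndim=3, found ndim=2" message. A minimal shape-contract sketch, separate from my code:)

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import LSTM

# LSTM consumes (batch, timesteps, features); feeding it a 2-D tensor raises the error above.
x_3d = np.zeros((32, 1, 2), dtype=np.float32)  # batch=32, timesteps=1, features=2
print(LSTM(4)(tf.constant(x_3d)).shape)  # (32, 4)

My full code: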
import random
import numpy as np
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, DenseFeatures, Reshape
from sklearn.model_selection import train_test_split
# Convert a feature DataFrame and a target Series into a batched tf.data.Dataset of (feature dict, label) pairs
def df_to_dataset(features, target, batch_size=32):
    return tf.data.Dataset.from_tensor_slices((dict(features), target)).batch(batch_size)
# Reset randomization seeds
np.random.seed(0)
tf.random.set_random_seed(0)
random.seed(0)
# Assume 'frame' to be a dataframe with 3 columns: 'optimal_long_log_return', 'optimal_short_log_return' (independent variables) and 'equilibrium_log_return' (dependent variable)
X = frame[['optimal_long_log_return', 'optimal_short_log_return']][:-1]
Y = frame['equilibrium_log_return'].shift(-1)[:-1]
X_train, _X, y_train, _y = train_test_split(X, Y, test_size=0.5, shuffle=False, random_state=1)
X_validation, X_test, y_validation, y_test = train_test_split(_X, _y, test_size=0.5, shuffle=False, random_state=1)
train = df_to_dataset(X_train, y_train)
validation = df_to_dataset(X_validation, y_validation)
test = df_to_dataset(X_test, y_test)
feature_columns = [fc.numeric_column('optimal_long_log_return'), fc.numeric_column('optimal_short_log_return')]
model = Sequential()
model.add(DenseFeatures(feature_columns, name='Metadata'))
model.add(LSTM(256, name='LSTM'))
model.add(Dense(1, name='Output'))
model.compile(loss='logcosh', metrics=['mean_absolute_percentage_error'], optimizer='Adam')
model.fit(train, epochs=10, validation_data=validation, verbose=1)
loss, accuracy = model.evaluate(test, verbose=0)
print(f'Target Error: {accuracy}%')

After seeing this question come up elsewhere, I tried setting input_shape=(None, *X_train.shape) and input_shape=X_train.shape; neither works. I also tried inserting a Reshape layer before the LSTM layer with model.add(Reshape(X_train.shape)), which fixed that error but surfaced another one:
InvalidArgumentError: Input to reshape is a tensor with 64 values, but the requested shape has 8000.
...and honestly, I'm not even sure adding the Reshape layer is doing what I think it's doing. After all, why would I reshape the data into its own shape? Something is off with my data and I just don't understand what.
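(A note on where those numbers likely come from; the 4,000-row figure is my inference from the 8000 in the error, not something stated above:)

values_per_batch = 32 * 2     # each incoming batch after DenseFeatures: (batch=32, features=2) -> 64 values
values_requested = 4000 * 2   # Reshape(X_train.shape) targets all of X_train: 8,000 values per sample
# Reshape's target shape excludes the batch dimension, so 64 values can never fill a shape of 8,000.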
Also, I'm using this for time-series analysis (stock returns), so I believe the LSTM model should be stateful and temporal. Do I need to move the timestamp index into its own column in the pandas dataframe before converting it to a tensor?
Unfortunately, I'm obligated to use tensorflow v1.15, as this is being developed on the QuantConnect platform and they presumably won't be updating the library any time soon.
EDIT: I've made some progress by using TimeseriesGenerator, but now I'm getting the following error (which returns no results on Google):
KeyError: 'No key found for mapped or original key. Mapped Key: []; Original Key: []'
The code is below (and I'm sure I'm using the input_shape argument incorrectly):
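(As background on statefulness, this is general Keras behavior rather than anything QuantConnect-specific: a stateful LSTM carries its cell state across batches, and it requires a fully specified batch_input_shape of (batch_size, timesteps, features), for example:)

from tensorflow.keras.layers import LSTM

# Stateful layers need a fixed batch size; the three dimensions are (batch, timesteps, features).
lstm = LSTM(256, stateful=True, batch_input_shape=(32, 1, 2))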
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

batch_size = 32  # assumed value; the definition was not shown in the original snippet
# Wrap each split in a generator that yields (batch, length, features) windows
train = TimeseriesGenerator(X_train, y_train, 1, batch_size=batch_size)
validation = TimeseriesGenerator(X_validation, y_validation, 1, batch_size=batch_size)
test = TimeseriesGenerator(X_test, y_test, 1, batch_size=batch_size)
model = Sequential(name='Expected Equilibrium Log Return')
model.add(LSTM(256, name='LSTM', stateful=True, batch_input_shape=(1, batch_size, X_train.shape[1]), input_shape=(1, X_train.shape[1])))
model.add(Dense(1, name='Output'))
model.compile(loss='logcosh', metrics=['mean_absolute_percentage_error'], optimizer='Adam', sample_weight_mode='temporal')
print(model.summary())
model.fit_generator(train, epochs=10, validation_data=validation, verbose=1)
loss, accuracy = model.evaluate_generator(test, verbose=0)
print(f'Model Accuracy: {accuracy}')
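(For reference, TimeseriesGenerator already yields batches in the 3-D layout the LSTM wants; a standalone sketch with dummy data:)

import numpy as np
from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

X = np.random.rand(100, 2)   # 100 timesteps, 2 features
y = np.random.rand(100)
gen = TimeseriesGenerator(X, y, length=1, batch_size=32)
batch_X, batch_y = gen[0]
print(batch_X.shape, batch_y.shape)  # (32, 1, 2) (32,) -> (batch, timesteps, features)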
Posted on 2021-09-30 12:33:41

It turns out this particular issue was related to a patch that QuantConnect made to pandas, which interfered with the older version of tensorflow/keras.
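(A hedged workaround sketch, assuming the patched pandas indexing is what TimeseriesGenerator trips over; passing plain NumPy arrays keeps pandas out of the generator entirely. The .values conversion here is my suggestion, not part of the original fix:)

# Hand TimeseriesGenerator raw NumPy arrays so it never indexes a (patched) DataFrame.
train = TimeseriesGenerator(X_train.values, y_train.values, 1, batch_size=batch_size)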
https://stackoverflow.com/questions/69301586