I'm working through the Natural Language Processing with Disaster Tweets competition as preparation for my university course.
I'm trying to solve it with a multi-input network, in which the keyword and location columns are each handled by a separate Conv1D network and the text column by a TransformerEncoder. I got the Conv1D networks working, but the TransformerEncoder gives me the error in the title. I'm using word embeddings (I tried both training them from scratch and using GloVe embeddings; both give the same error) together with positional encoding, based on the implementations of the TransformerEncoder and PositionalEncoding classes in the second edition of the book.
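For reference, those two classes follow this general pattern (a condensed sketch of the Chollet-style encoder and positional-embedding layers that the class names suggest; my actual code may differ in details):

import tensorflow as tf
from tensorflow.keras import layers

class TransformerEncoder(layers.Layer):
    def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
        super().__init__(**kwargs)
        self.attention = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
        self.dense_proj = tf.keras.Sequential(
            [layers.Dense(dense_dim, activation='relu'), layers.Dense(embed_dim)])
        self.layernorm_1 = layers.LayerNormalization()
        self.layernorm_2 = layers.LayerNormalization()

    def call(self, inputs, mask=None):
        if mask is not None:
            mask = mask[:, tf.newaxis, :]  # broadcast the padding mask over the query axis
        attention_output = self.attention(inputs, inputs, attention_mask=mask)
        proj_input = self.layernorm_1(inputs + attention_output)
        return self.layernorm_2(proj_input + self.dense_proj(proj_input))

class PositionalEncoding(layers.Layer):  # the book calls this PositionalEmbedding
    def __init__(self, sequence_length, input_dim, output_dim, **kwargs):
        super().__init__(**kwargs)
        self.token_embeddings = layers.Embedding(input_dim=input_dim, output_dim=output_dim)
        self.position_embeddings = layers.Embedding(input_dim=sequence_length, output_dim=output_dim)

    def call(self, inputs):
        positions = tf.range(start=0, limit=tf.shape(inputs)[-1], delta=1)
        return self.token_embeddings(inputs) + self.position_embeddings(positions)

    def compute_mask(self, inputs, mask=None):
        return tf.math.not_equal(inputs, 0)  # treat token id 0 as padding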
This is how I process the dataset:
train_text = data.Dataset.from_tensor_slices((train_data['text'].values.astype(str), train_data['target'].values.astype(bool)))
train_keywords = data.Dataset.from_tensor_slices((train_data['keyword'].values.astype(str), train_data['target'].values.astype(bool)))
train_loc = data.Dataset.from_tensor_slices((train_data['location'].values.astype(str), train_data['target'].values.astype(bool)))
val_text = data.Dataset.from_tensor_slices((validation_data['text'].values.astype(str), validation_data['target'].values.astype(bool)))
val_keywords = data.Dataset.from_tensor_slices((validation_data['keyword'].values.astype(str), validation_data['target'].values.astype(bool)))
val_loc = data.Dataset.from_tensor_slices((validation_data['location'].values.astype(str), validation_data['target'].values.astype(bool)))

I also tried an approach closer to the usual way of loading a pandas DataFrame into a tf.data.Dataset, but got the same result.
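By "the pandas DataFrame approach" I mean roughly this pattern (a sketch of the standard dict-of-columns loading, not my exact code):

features = train_data[['text', 'keyword', 'location']].astype(str)
train_full = data.Dataset.from_tensor_slices(
    (dict(features), train_data['target'].values.astype(bool)))
# each element is then ({'text': ..., 'keyword': ..., 'location': ...}, target)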
text_vectorization = TextVectorization(
    max_tokens=MAX_TOKENS_TEXT,
    output_sequence_length=max_text_length,
    standardize=standardize_text,
    output_mode='int'
)
keyword_vectorization = TextVectorization(
    max_tokens=MAX_TOKENS_KEYWORDS,
    output_sequence_length=MAX_KEYWORD_LENGTH,
    standardize=standardize_keywords,
    output_mode='int'
)
loc_vectorization = TextVectorization(
    max_tokens=MAX_TOKENS_KEYWORDS,  # note: reuses the keyword vocabulary size
    output_sequence_length=MAX_LOC_LENGTH,
    standardize=standardize_loc,
    output_mode='int'
)
text_vectorization.adapt(train_text.map(lambda x, y: x))
keyword_vectorization.adapt(train_keywords.map(lambda x, y: x))
loc_vectorization.adapt(train_loc.map(lambda x, y: x))
train_text_vectorized = train_text.map(
    lambda x, y: (text_vectorization(x), y),
    num_parallel_calls=-1  # According to the documentation, -1 means auto
).batch(BATCH_SIZE)
train_loc_vectorized = train_loc.map(
    lambda x, y: (loc_vectorization(x), y),
    num_parallel_calls=-1
).batch(BATCH_SIZE)
train_keywords_vectorized = train_keywords.map(
    lambda x, y: (keyword_vectorization(x), y),
    num_parallel_calls=-1
).batch(BATCH_SIZE)
val_text_vectorized = val_text.map(
    lambda x, y: (text_vectorization(x), y),
    num_parallel_calls=-1
).batch(BATCH_SIZE)
val_loc_vectorized = val_loc.map(
    lambda x, y: (loc_vectorization(x), y),
    num_parallel_calls=-1
).batch(BATCH_SIZE)
val_keywords_vectorized = val_keywords.map(
    lambda x, y: (keyword_vectorization(x), y),
    num_parallel_calls=-1
).batch(BATCH_SIZE)

Here I also tried the following, with the same result:
def dataset_zipper(loc, text, keyword):
    return (loc[0], text[0], keyword[0]), text[1]

train_full_vectorized = data.Dataset.zip((train_loc_vectorized, train_text_vectorized, train_keywords_vectorized))
train_full_vectorized = train_full_vectorized.map(dataset_zipper, num_parallel_calls=-1)
val_full_vectorized = data.Dataset.zip((val_loc_vectorized, val_text_vectorized, val_keywords_vectorized))
val_full_vectorized = val_full_vectorized.map(dataset_zipper, num_parallel_calls=-1)

Now I build the network:
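(The generate_convnet and generate_transformer helpers used below aren't shown above; they are builders roughly along these lines. This is a hypothetical sketch with illustrative hyperparameters, assuming the book-style layers from earlier:)

from tensorflow.keras.layers import Embedding, Conv1D, GlobalMaxPooling1D

def generate_convnet(loc, input_layer):
    # hypothetical: embed the integer tokens, apply a small Conv1D stack, pool to a vector;
    # in my real code the loc flag switches hyperparameters between the two branches
    x = Embedding(MAX_TOKENS_KEYWORDS, 64)(input_layer)
    x = Conv1D(32, 3, activation='relu', padding='same')(x)
    return GlobalMaxPooling1D()(x)

def generate_transformer(input_layer):
    # hypothetical: positional token embedding -> TransformerEncoder -> pooled vector
    x = PositionalEncoding(max_text_length, MAX_TOKENS_TEXT, 256)(input_layer)
    x = TransformerEncoder(embed_dim=256, dense_dim=32, num_heads=2)(x)
    return GlobalMaxPooling1D()(x)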
loc_input = Input(shape=(MAX_TOKENS_KEYWORDS,), dtype='int64', name='location')
keyword_input = Input(shape=(MAX_TOKENS_KEYWORDS,), dtype='int64', name='keyword')
text_input = Input(shape=(MAX_TOKENS_TEXT,), dtype="int64", name='text')
full_network = concatenate([
    generate_convnet(loc=True, input_layer=loc_input),
    generate_transformer(input_layer=text_input),
    generate_convnet(loc=False, input_layer=keyword_input)
])
full_network = Dropout(0.3)(full_network)
full_network = Dense(1, activation='sigmoid')(full_network)  # This is the classifier - since this is binary classification, I will use sigmoid activation

model = Model(inputs=[loc_input, text_input, keyword_input], outputs=full_network)
model.compile(loss='binary_crossentropy',
              optimizer=Adam(learning_rate=0.001),
              metrics=['binary_accuracy'])

callbacks = [
    ModelCheckpoint('twitter_disasters_v1.h5', save_best_only=True),
    EarlyStopping(monitor='val_loss', patience=5, mode='min')
]
results = model.fit(x=train_full_vectorized, validation_data=val_full_vectorized, class_weight=class_weights, callbacks=callbacks, epochs=100)

This is where I get the error:
Node: 'IteratorGetNext'
2 root error(s) found.
(0) INVALID_ARGUMENT: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [5], [batch]: [0]
[[{{node IteratorGetNext}}]]
[[gradient_tape/model_1/transformer_encoder_1/multi_head_attention_1/query/einsum/Einsum/_144]]
(1) INVALID_ARGUMENT: Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [5], [batch]: [0]
[[{{node IteratorGetNext}}]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_117129]
As it turns out, I should have experimented more. Moving the .batch call from step 3 (where I vectorize the individual datasets) to step 4 (where I zip them together) and setting the batch size to 1 did the trick; the network is training now, though I'm open to better suggestions, if there are any.
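In code, the fix amounts to zipping the unbatched datasets first and batching the combined dataset afterwards (a sketch using the same names as above; the validation datasets are handled the same way):

# step 3: vectorize only, with no .batch here anymore
train_text_vectorized = train_text.map(
    lambda x, y: (text_vectorization(x), y), num_parallel_calls=-1)
train_loc_vectorized = train_loc.map(
    lambda x, y: (loc_vectorization(x), y), num_parallel_calls=-1)
train_keywords_vectorized = train_keywords.map(
    lambda x, y: (keyword_vectorization(x), y), num_parallel_calls=-1)

# step 4: zip the element-wise datasets, then batch the combined dataset
train_full_vectorized = (
    data.Dataset.zip((train_loc_vectorized, train_text_vectorized, train_keywords_vectorized))
    .map(dataset_zipper, num_parallel_calls=-1)
    .batch(1))  # batch size of 1 as described; presumably a larger BATCH_SIZE could work too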
Now to deal with the fact that the loss is NaN.
https://stackoverflow.com/questions/74433941