So... I've looked through a number of posts on this problem (there are certainly many I haven't checked yet, but I think it's reasonable to ask for help at this point), and I haven't found a solution that fits my situation.
This OOM error message always appears in the second round of any fold of the training loop, and also whenever I run the training code again after a first run (without a single exception). So it may be an issue related to the one described in this post, but I'm not sure which function is at fault in my case.
My NN is a GCN with two graph-convolution layers, and I run the code on a server with several 10 GB Nvidia P102-100 GPUs. I have set batch_size to 1, but nothing changed. I'm also running the code in a Jupyter notebook rather than as a Python script from the command line, because on the command line I can't even get through a single run. By the way, does anyone know why some code runs without problems in Jupyter but hits OOM on the command line? That seems a bit strange to me.
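As a side note on the Jupyter-vs-command-line difference: by default TensorFlow reserves almost the whole GPU up front, so whether a run survives can depend on what else is holding the card. A minimal sketch of switching to on-demand allocation (assuming TF 1.x with standalone Keras, which the traceback below suggests):

import tensorflow as tf
from keras import backend as K

# Let TF grow its GPU allocation on demand instead of grabbing it all at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))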
Update: after replacing Flatten() with GlobalMaxPool(), the error disappeared and I can run the code smoothly. However, if I add one more GC layer, the error appears in the first round, so I suppose the core problem is still there.
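For reference, the swap described in the update looks roughly like this; GlobalMaxPool here is spektral's global pooling layer, and the parameter counts follow from the model summary below:

from spektral.layers import GlobalMaxPool

# Sketch of the Flatten() -> GlobalMaxPool() swap from the update above.
pooled = GlobalMaxPool()(graph_conv)        # (None, 13129, 32) -> (None, 32)
fc = Dense(512, activation='relu')(pooled)  # 32*512 + 512 = 16,896 weights,
                                            # vs. 420128*512 + 512 ≈ 215M with Flatten()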
UPDATE 2: I tried replacing tf.Tensor with tf.SparseTensor. It worked, but it didn't help. I also tried setting up the MirroredStrategy mentioned in ML_Engine's answer, but it looks like one of the GPUs takes most of the load and the OOM still appears. Perhaps it is a form of "data parallelism" and therefore can't solve my problem, since I have already set batch_size to 1?
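Since the OOM only shows up from the second fold onward (and on re-runs in the same notebook kernel), one hedged guess is that every fold adds a fresh model to the same TensorFlow graph, so nothing from the previous fold is actually freed. A minimal sketch of per-fold cleanup with standalone Keras (note that clear_session() also invalidates A_in, so it would have to be rebuilt inside the loop):

from keras import backend as K

for test_indel in range(1, 11):
    K.clear_session()   # drop the previous fold's graph and its GPU tensors
    # ... rebuild A_in and the model here: they belonged to the old graph ...
    gc.collect()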
Code (adapted from GCNG):
from keras import Input, Model
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.layers import Dense, Flatten
from keras.optimizers import Adam
from keras.regularizers import l2
import tensorflow as tf
#from spektral.datasets import mnist
from spektral.layers import GraphConv
from spektral.layers.ops import sp_matrix_to_sp_tensor
from spektral.utils import normalized_laplacian
from keras.utils import plot_model
from sklearn import metrics
import numpy as np
import gc
import os
l2_reg = 5e-7         # Regularization rate for l2
learning_rate = 1e-6  # Learning rate for Adam
batch_size = 1        # Batch size
epochs = 1            # Number of training epochs
es_patience = 50      # Patience for early stopping
# DATA IMPORTING & PREPROCESSING OMITTED
# this part of adjacency matrix calculation is not important...
fltr = self_connection_normalized_adjacency(adj)  # helper from the GCNG code
test = fltr.toarray()
t = tf.convert_to_tensor(test)
A_in = Input(tensor=t)
del fltr, test, t
gc.collect()
# Here comes the issue.
for test_indel in range(1, 11):
    # SEVERAL LINES OMITTED (get X_train, y_train, X_val, y_val, X_test, y_test)
    # Build model
    N = X_train.shape[-2]      # Number of nodes in the graphs
    F = X_train.shape[-1]      # Node features dimensionality
    n_out = y_train.shape[-1]  # Dimension of the target
    X_in = Input(shape=(N, F))
    graph_conv = GraphConv(32, activation='elu', kernel_regularizer=l2(l2_reg), use_bias=True)([X_in, A_in])
    graph_conv = GraphConv(32, activation='elu', kernel_regularizer=l2(l2_reg), use_bias=True)([graph_conv, A_in])
    flatten = Flatten()(graph_conv)
    fc = Dense(512, activation='relu')(flatten)
    output = Dense(n_out, activation='sigmoid')(fc)
    model = Model(inputs=[X_in, A_in], outputs=output)
    optimizer = Adam(lr=learning_rate)
    model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['acc'])
    model.summary()
    # current_path is defined in the omitted setup
    save_dir = current_path + '/' + str(test_indel) + '_self_connection_Ycv_LR_as_nega_rg_5-7_lr_1-6_e' + str(epochs)
    if not os.path.isdir(save_dir):
        os.makedirs(save_dir)
    early_stopping = EarlyStopping(monitor='val_acc', patience=es_patience, verbose=0, mode='auto')
    checkpoint1 = ModelCheckpoint(filepath=save_dir + '/weights.{epoch:02d}-{val_loss:.2f}.hdf5', monitor='val_loss', verbose=1, save_best_only=False, save_weights_only=False, mode='auto', period=1)
    checkpoint2 = ModelCheckpoint(filepath=save_dir + '/weights.hdf5', monitor='val_acc', verbose=1, save_best_only=True, mode='auto', period=1)
    callbacks = [checkpoint2, early_stopping]
    # Train model
    validation_data = (X_val, y_val)
    print('batch size = ' + str(batch_size))
    history = model.fit(X_train, y_train, batch_size=batch_size, validation_data=validation_data, epochs=epochs, callbacks=callbacks)
    # Prediction and write-file code omitted
    del X_in, X_data_train, Y_data_train, gene_pair_index_train, count_setx_train, X_data_test, Y_data_test, gene_pair_index_test, trainX_index, validation_index, train_index, X_train, y_train, X_val, y_val, X_test, y_test, validation_data, graph_conv, flatten, fc, output, model, optimizer, history
    gc.collect()

Model summary:
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_2 (InputLayer) (None, 13129, 2) 0
__________________________________________________________________________________________________
input_1 (InputLayer) (13129, 13129) 0
__________________________________________________________________________________________________
graph_conv_1 (GraphConv) (None, 13129, 32) 96 input_2[0][0]
input_1[0][0]
__________________________________________________________________________________________________
graph_conv_2 (GraphConv) (None, 13129, 32) 1056 graph_conv_1[0][0]
input_1[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten) (None, 420128) 0 graph_conv_2[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 512) 215106048 flatten_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 1) 513 dense_1[0][0]
==================================================================================================
Total params: 215,107,713
Trainable params: 215,107,713
Non-trainable params: 0
__________________________________________________________________________________________________
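For scale, a back-of-envelope estimate of what that 215M-parameter dense layer costs during training (an assumption-laden sketch: float32, counting the weights, their gradients, and Adam's two slot variables):

params = 215_107_713
copies = 1 + 1 + 2                    # weights + gradients + Adam's m and v
print(params * 4 * copies / 2**30)    # ≈ 3.2 GiB before any activations

Two live copies of that graph (a stale fold plus the current one) plus activations could plausibly exceed a 10 GB card.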
batch size = 1

Error message (note that this never appears in the first round right after restarting the kernel and clearing the output):
Train on 2953 samples, validate on 739 samples
Epoch 1/1
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-5-943385df49dc> in <module>()
62 mem = psutil.virtual_memory()
63 print("current mem " + str(round(mem.percent))+'%')
---> 64 history = model.fit(X_train,y_train,batch_size=batch_size,validation_data=validation_data,epochs=epochs,callbacks=callbacks)
65 mem = psutil.virtual_memory()
66 print("current mem " + str(round(mem.percent))+'%')
/public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
1237 steps_per_epoch=steps_per_epoch,
1238 validation_steps=validation_steps,
-> 1239 validation_freq=validation_freq)
1240
1241 def evaluate(self,
/public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/keras/engine/training_arrays.py in fit_loop(model, fit_function, fit_inputs, out_labels, batch_size, epochs, verbose, callbacks, val_function, val_inputs, shuffle, initial_epoch, steps_per_epoch, validation_steps, validation_freq)
194 ins_batch[i] = ins_batch[i].toarray()
195
--> 196 outs = fit_function(ins_batch)
197 outs = to_list(outs)
198 for l, o in zip(out_labels, outs):
/public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
3290
3291 fetched = self._callable_fn(*array_vals,
-> 3292 run_metadata=self.run_metadata)
3293 self._call_fetch_callbacks(fetched[-len(self._fetches):])
3294 output_structure = nest.pack_sequence_as(
/public/workspace/miniconda3/envs/ST/lib/python3.6/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
1456 ret = tf_session.TF_SessionRunCallable(self._session._session,
1457 self._handle, args,
-> 1458 run_metadata_ptr)
1459 if run_metadata:
1460 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[420128,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training_1/Adam/mul_23}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[metrics_1/acc/Identity/_323]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[420128,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node training_1/Adam/mul_23}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.

Posted on 2021-04-20 13:20:03
https://stackoverflow.com/questions/67178061