
DenseNet with Keras — UnknownError: Failed to get convolution algorithm

Stack Overflow user
Asked on 2020-10-17 15:36:11
1 answer · 100 views · 0 followers · 0 votes

I want to run this code on the GPU.

tensorflow-gpu: 2.3.1

CUDA version: 10.2

Result: UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [node functional_1/conv/Conv2D (defined at :129)]

Function call stack: train_function

Can you help me?

Thanks in advance.

def build_model():
    # include_top: whether to include the fully-connected layer at the top of the network.
    # weights: one of None (random initialization), 'imagenet' (pre-training on ImageNet),
    # or the path to the weights file to be loaded.
    base_model = densenet.DenseNet121(input_shape=(128, 128, 3),
                                      weights=None,
                                      include_top=True,
                                      pooling='avg', classes=3)
    # Layers & models also feature a boolean attribute `trainable`. Setting
    # layer.trainable to False moves all the layer's weights from trainable to
    # non-trainable. This is called "freezing" the layer: the state of a frozen
    # layer won't be updated during training (either when training with fit()
    # or when training with any custom loop that relies on trainable_weights
    # to apply gradient updates).
    # Keep the weights trainable during training:
    for layer in base_model.layers:
        layer.trainable = True

    # Define the necessary parameters.
    # Regularization is a technique that makes slight modifications to the
    # learning algorithm so that the model generalizes better. It also
    # improves the model's performance on unseen data.
    x = base_model.output
    x = Dense(50, kernel_regularizer=regularizers.l1_l2(0.00001), activity_regularizer=regularizers.l2(0.00001))(x)
    x = Activation('relu')(x)
    x = Dense(25, kernel_regularizer=regularizers.l1_l2(0.00001), activity_regularizer=regularizers.l2(0.00001))(x)
    x = Activation('relu')(x)
    predictions = Dense(n_classes, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    return model

model = build_model()


# Use the Adam optimizer + sparse_categorical_crossentropy
keras.optimizers.Adam(learning_rate=0.001)

#optimizer = Adam(lr=0.01)
#model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
#Early stopping is a method that allows you to specify an arbitrary large number of training epochs and stop training once the
#model performance stops improving on a hold out validation dataset
early_stop = EarlyStopping(monitor='val_loss', patience=8, verbose=2, min_delta=1e-3)
#Reduce learning rate when a metric has stopped improving.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=4, verbose=1, min_delta=1e-3)
callbacks_list = [early_stop, reduce_lr]
print(" Build model --- %s seconds ---" % (time.time() - start_time))
print('###################### training step #############')
trainy = keras.utils.to_categorical(trainy)
yvalidation = keras.utils.to_categorical(yvalidation)


with tf.device('/device:GPU:0'):
    trainx = tf.constant(trainx)
    trainy = tf.constant(trainy)
    xvalidation = tf.constant(xvalidation)
    yvalidation = tf.constant(yvalidation)
    model_history = model.fit(trainx, trainy,
          validation_data=(xvalidation, yvalidation),
          batch_size=68, 
          epochs=7000,
          verbose=1)
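The EarlyStopping and ReduceLROnPlateau callbacks configured above both watch `val_loss` with a patience counter. Their core logic can be sketched in plain Python (a minimal illustration of the idea, not Keras's actual implementation; `simulate_callbacks` is a hypothetical helper):

```python
def simulate_callbacks(val_losses, lr=0.001, factor=0.1,
                       reduce_patience=4, stop_patience=8, min_delta=1e-3):
    """Replay a sequence of validation losses through simplified
    ReduceLROnPlateau / EarlyStopping logic; return (last epoch, final lr)."""
    best = float("inf")
    reduce_wait = stop_wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:          # improvement beyond min_delta
            best = loss
            reduce_wait = stop_wait = 0
        else:                                # plateau: bump both counters
            reduce_wait += 1
            stop_wait += 1
            if reduce_wait >= reduce_patience:
                lr *= factor                 # shrink the learning rate
                reduce_wait = 0
            if stop_wait >= stop_patience:
                return epoch, lr             # early stop
    return len(val_losses) - 1, lr

# 8 flat epochs after the first -> two LR reductions, then training stops
stopped_at, final_lr = simulate_callbacks([1.0] + [1.0] * 8)
```

With `patience=4` on the LR schedule and `patience=8` on early stopping, the learning rate is cut twice before training halts, which matches how the two callbacks are meant to interact: give the optimizer a smaller step size a chance before giving up.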

1 Answer

Stack Overflow user

Answered on 2020-10-17 15:38:33

Your cuDNN + CUDA + TensorFlow versions do not match. To solve your problem, please refer to my answer: Tensorflow 2.0 can't use GPU, something wrong in cuDNN? :Failed to get convolution algorithm. This is probably because cuDNN failed to initialize

In your case, TensorFlow 2.3.1 most likely does not support CUDA 10.2. You should downgrade to CUDA 10.1 and try again.
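The constraint the answer refers to can be made concrete with a small lookup table. The version pairs below are my reading of TensorFlow's published "tested build configurations" (they are assumptions here; verify against the official table before relying on them):

```python
# TF release series -> (CUDA toolkit, cuDNN) it was built and tested against,
# per TensorFlow's tested-build-configurations page (values assumed here).
TESTED_BUILDS = {
    "2.1": ("10.1", "7.6"),
    "2.2": ("10.1", "7.6"),
    "2.3": ("10.1", "7.6"),
    "2.4": ("11.0", "8.0"),
}

def check_gpu_stack(tf_version, cuda_version):
    """Return None if the installed CUDA matches the tested build,
    otherwise a message naming the expected CUDA/cuDNN versions."""
    series = ".".join(tf_version.split(".")[:2])   # "2.3.1" -> "2.3"
    if series not in TESTED_BUILDS:
        return f"No tested configuration known for TF {tf_version}"
    expected_cuda, expected_cudnn = TESTED_BUILDS[series]
    if cuda_version != expected_cuda:
        return (f"TF {tf_version} was tested against CUDA {expected_cuda} "
                f"+ cuDNN {expected_cudnn}, but CUDA {cuda_version} is installed")
    return None

# The asker's combination: TF 2.3.1 with CUDA 10.2 -> mismatch
msg = check_gpu_stack("2.3.1", "10.2")
```

At runtime, recent TensorFlow versions expose `tf.sysconfig.get_build_info()`, which reports the CUDA and cuDNN versions the installed wheel was built against. Note also that the same "Failed to get convolution algorithm" error is sometimes caused by GPU memory exhaustion rather than a version mismatch; enabling memory growth with `tf.config.experimental.set_memory_growth` is a common mitigation in that case.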

Votes: 0
Original content provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/64404229
