
skopt's gp_minimize() function raises ValueError: array must not contain infs or NaNs

Stack Overflow user
Asked on 2020-05-29 04:44:36
1 answer · 962 views · 0 followers · 1 vote

I'm currently using the skopt (scikit-optimize) package for hyperparameter tuning of a neural network (I'm trying to minimize -1 * accuracy). It seems to run fine (and prints to the console successfully) for several iterations, and then raises ValueError: array must not contain infs or NaNs.

What could be causing this? My data doesn't contain infs or NaNs, and neither do my search parameter ranges. The neural network code is fairly long, so for brevity I'll paste only the relevant parts.

Imports:

import pandas as pd

import numpy as np
from skopt import gp_minimize
from skopt.utils import use_named_args
from skopt.space import Real, Categorical, Integer
from tensorflow.python.framework import ops
from sklearn.model_selection import train_test_split

import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv1D, Dropout, MaxPooling1D, Flatten

from tensorflow.keras import backend as K

Create the search parameters:

dim_num_filters_L1 = Integer(low=1, high=50, name='num_filters_L1')
#dim_kernel_size_L1 = Integer(low=1, high=70, name='kernel_size_L1')
dim_activation_L1 = Categorical(categories=['relu', 'linear', 'softmax'], name='activation_L1')
dim_num_filters_L2 = Integer(low=1, high=50, name='num_filters_L2')
#dim_kernel_size_L2 = Integer(low=1, high=70, name='kernel_size_L2')
dim_activation_L2 = Categorical(categories=['relu', 'linear', 'softmax'], name='activation_L2')
dim_num_dense_nodes = Integer(low=1, high=28, name='num_dense_nodes')
dim_activation_L3 = Categorical(categories=['relu', 'linear', 'softmax'], name='activation_L3')
dim_dropout_rate = Real(low = 0, high = 0.5, name = 'dropout_rate')
dim_learning_rate = Real(low=1e-4, high=1e-2, name='learning_rate')

dimensions = [dim_num_filters_L1,
              #dim_kernel_size_L1,
              dim_activation_L1,
              dim_num_filters_L2,
             #dim_kernel_size_L2,
              dim_activation_L2,
              dim_num_dense_nodes,
              dim_activation_L3,
              dim_dropout_rate,
              dim_learning_rate,
             ]

A function to create each model to be tested:

def create_model(num_filters_L1, #kernel_size_L1, 
                 activation_L1, 
                 num_filters_L2, #kernel_size_L2, 
                 activation_L2,
                 num_dense_nodes, activation_L3,
                 dropout_rate,
                 learning_rate):

    input_shape = (X_train.shape[1], 1)
    model = Sequential()
    model.add(Conv1D(num_filters_L1, kernel_size = 40, activation = activation_L1, input_shape = input_shape))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Conv1D(num_filters_L2, kernel_size=20, activation=activation_L2))
    model.add(MaxPooling1D(pool_size=2))
    model.add(Flatten())
    model.add(Dense(num_dense_nodes, activation = activation_L3))
    model.add(Dropout(dropout_rate))
    model.add(Dense(y_train.shape[1], activation='linear'))
    adam = tensorflow.keras.optimizers.Adam(learning_rate = learning_rate)
    model.compile(optimizer=adam, loss='mean_squared_error', metrics=['accuracy'])

    return model

Define the fitness function:

@use_named_args(dimensions=dimensions)
def fitness(num_filters_L1, #kernel_size_L1, 
                 activation_L1, 
                 num_filters_L2, #kernel_size_L2, 
                 activation_L2,
                 num_dense_nodes, activation_L3,
                 dropout_rate,
                 learning_rate):

    model = create_model(num_filters_L1, #kernel_size_L1, 
                 activation_L1, 
                 num_filters_L2, #kernel_size_L2, 
                 activation_L2,
                 num_dense_nodes, activation_L3,
                 dropout_rate,
                 learning_rate)

    history_opt = model.fit(x=X_train,
                        y=y_train,
                        validation_data=(X_val,y_val), 
                        shuffle=True, 
                        verbose=2,
                        epochs=10
                        )

    # Evaluate accuracy on the held-out test set:
    accuracy_opt = model.evaluate(X_test, y_test)[1]

    # Print the classification accuracy:
    print("Experimental Model Accuracy: {0:.2%}".format(accuracy_opt))

    # Delete the Keras model with these hyper-parameters from memory:
    del model

    # Clear the Keras session, otherwise it will keep adding new models to the same TensorFlow graph each time we create model with a different set of hyper-parameters.
    K.clear_session()
    ops.reset_default_graph()

    # the optimizer aims for the lowest score, so return negative accuracy:
    return -accuracy_opt  # or sum(RMSE)?

Run the hyperparameter search:

gp_result = gp_minimize(func=fitness,
                            dimensions=dimensions)

print("best accuracy was " + str(round(gp_result.fun *-100,2))+"%.")
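One defensive tweak worth trying (an assumption on my part, not something the question's code does) is to guard the fitness function's return value, so that a diverged training run hands gp_minimize a large finite penalty instead of NaN. A minimal sketch, with the hypothetical helper `penalize_non_finite`:

```python
import math

def penalize_non_finite(score, penalty=0.0):
    """Replace a NaN/inf objective value with a finite penalty so the
    Gaussian process inside gp_minimize never sees infs or NaNs.
    `penalty` should be the worst plausible score; since we minimize
    -accuracy, 0.0 (i.e. 0% accuracy) is a safe worst case."""
    return score if math.isfinite(score) else penalty

# A diverged run that produced NaN accuracy gets the penalty instead:
print(penalize_non_finite(float('nan')))  # 0.0
print(penalize_non_finite(-0.87))         # -0.87
```

Inside `fitness`, this would wrap the final return, e.g. `return penalize_non_finite(-accuracy_opt)`.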
1 Answer

Stack Overflow user
Accepted answer
Posted on 2020-06-29 02:26:36

Your activation function fails to converge during a random acquisition-function call. I ran into this problem myself and removed the 'relu' function from the search space.
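Concretely, dropping 'relu' as suggested just means filtering the candidate list before building each Categorical dimension in the question's search space. A sketch of the filter itself (variable names mirror the question):

```python
# Activation choices from the question's Categorical dimensions
activations = ['relu', 'linear', 'softmax']

# Remove 'relu' as the answer suggests, keeping the rest of the space
# intact, e.g. for:
#   Categorical(categories=safe_activations, name='activation_L1')
safe_activations = [a for a in activations if a != 'relu']
print(safe_activations)  # ['linear', 'softmax']
```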

Score: 2
Original content provided by Stack Overflow; translation supported by Tencent Cloud's IT-domain translation engine.
Original link:

https://stackoverflow.com/questions/62074201