Is GridSearchCV an efficient way to find the best hyperparameters for a deep neural network in TensorFlow?
Stack Overflow user
Asked on 2020-08-08 15:11:03
1 answer · 415 views · 0 followers · 0 votes

In my image-classification experiment, I want to find the best possible hyperparameters for a CNN model, so I used RandomizedSearchCV. However, I get the TypeError below and I don't know why.

TypeError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     41
     42 model, pred = algorithm_pipeline(X_train, X_test, y_train, y_test, model,
---> 43                                  param_grid, cv=5, scoring_fit='neg_log_loss')
     44
     45 print(model.best_score_)

32 frames
    167     reductor = getattr(x, "__reduce_ex__", None)
    168     if reductor:
--> 169         rv = reductor(4)
    170     else:
    171         reductor = getattr(x, "__reduce__", None)

TypeError: cannot pickle '_thread.RLock' object

I looked into possible solutions for this kind of error on SO, but the error still persists.

Below is the part of my code where the error occurs (the full code, including the error output, is in this Colab notebook). I don't understand what is wrong with my attempt. Can anyone point out how to get rid of this error? Thanks.

My current attempt

from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV

def algorithm_pipeline(X_train_data, X_test_data, y_train_data, y_test_data, 
                       model, param_grid, cv=2, scoring_fit='neg_mean_squared_error',
                       do_probabilities = False):
    gs = RandomizedSearchCV(
        estimator=model,
        param_distributions=param_grid, 
        cv=cv, 
        n_jobs=-1, 
        scoring=scoring_fit,
        verbose=2
    )
    fitted_model = gs.fit(X_train_data, y_train_data)
    
    if do_probabilities:
      pred = fitted_model.predict_proba(X_test_data)
    else:
      pred = fitted_model.predict(X_test_data)
    
    return fitted_model, pred

See the full code, including the error output, in this Colab notebook.

I don't understand where the error comes from. How can I eliminate it? Any ideas?

Update

I also tried Bayesian optimization with the scikit-optimize package (this notebook shows my attempt), but it raised errors as well.

Can anyone help me make the GridSearchCV / Bayesian-optimization attempt in this gist work? Is there any way to make this feasible? Any ideas? Thanks.


1 answer

Stack Overflow user

Accepted answer

Posted on 2020-08-10 17:31:48

sklearn's GridSearchCV does not work directly on Keras models. You have to use the Keras scikit_learn wrappers to make them work with sklearn, e.g. wrap the Keras model with KerasClassifier.

Sample code

The code is documented inline.

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
import numpy
from sklearn import datasets 

# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Create a function that returns a model
# You can use additional parameters to this function,
# which will be passed by the GridSearchCV, for parameter tuning.
def get_model(opt, activation):
  model = Sequential()
  model.add(Dense(8, input_dim=4, activation=activation))
  model.add(Dense(4, activation=activation))
  model.add(Dense(3, activation='softmax'))
  
  model.compile(loss='sparse_categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
  return model

# Wrap it to make it sklearn compatible
model = KerasClassifier(build_fn=get_model)

# Create the parameter grid
param_grid = dict(epochs=[1,2,3], opt = ['rmsprop', 'adam'], activation=['relu', 'tanh'])

# finally run 
grid = GridSearchCV(estimator=model, param_grid=param_grid)
result = grid.fit(X, y)

# Get the gridsearch best parameters
print(f"Best score: {result.best_score_}, Parameters: {result.best_params_}")

Output:

4/4 [==============================] - 0s 2ms/step - loss: 1.2447 - accuracy: 0.4167
1/1 [==============================] - 0s 5ms/step - loss: 1.4246 - accuracy: 0.0000e+00
4/4 [==============================] - 0s 2ms/step - loss: 1.9505 - accuracy: 0.2500
1/1 [==============================] - 0s 1ms/step - loss: 0.8273 - accuracy: 0.6667
4/4 [==============================] - 0s 2ms/step - loss: 1.0976 - accuracy: 0.4167
............... LOG TRUNCATED........
Epoch 1/2
5/5 [==============================] - 0s 2ms/step - loss: 1.1047 - accuracy: 0.3333
Epoch 2/2
5/5 [==============================] - 0s 1ms/step - loss: 1.0931 - accuracy: 0.3333
Best score: 0.5000000089406967, Parameters: {'activation': 'relu', 'epochs': 2, 'opt': 'adam'}
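As an aside (not part of the original answer), besides `best_score_` and `best_params_`, every fitted search object also exposes the full per-combination results through its `cv_results_` attribute. A minimal sklearn-only sketch, using a plain sklearn estimator as a stand-in for the wrapped Keras model:

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn import datasets

iris = datasets.load_iris()
grid = GridSearchCV(estimator=LogisticRegression(max_iter=500),
                    param_grid={'C': [0.1, 1.0, 10.0]}, cv=3)
grid.fit(iris.data, iris.target)

# cv_results_ holds the parameters and mean/rank of the test scores
# for every combination that was evaluated
df = pd.DataFrame(grid.cv_results_)[['params', 'mean_test_score', 'rank_test_score']]
print(df.sort_values('rank_test_score'))
```

The same attribute exists on a fitted `RandomizedSearchCV`, so it works identically with the `KerasClassifier`-wrapped model above.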

Or, for RandomizedSearchCV:

grid = RandomizedSearchCV(estimator=model, param_distributions=param_grid)
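Unlike GridSearchCV, RandomizedSearchCV samples only `n_iter` random combinations from the grid instead of trying all of them. A minimal runnable sketch of that mechanic, again using a plain sklearn estimator as a stand-in for the wrapped Keras model (the parameter names here are illustrative, not from the original answer):

```python
from sklearn.model_selection import RandomizedSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn import datasets

iris = datasets.load_iris()
X, y = iris.data, iris.target

# The full grid has 4 * 2 = 8 combinations; n_iter=4 samples only half of them
param_grid = dict(C=[0.01, 0.1, 1.0, 10.0], solver=['lbfgs', 'liblinear'])
grid = RandomizedSearchCV(estimator=LogisticRegression(max_iter=200),
                          param_distributions=param_grid,
                          n_iter=4, cv=3, random_state=0)
result = grid.fit(X, y)
print(result.best_params_, round(result.best_score_, 3))
```

With the `KerasClassifier`-wrapped model the call is the same, only with `estimator=model` and the `epochs`/`opt`/`activation` grid shown earlier.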

Edit 1

Hyperparameter tuning with Bayesian optimization using the Ax framework

The code is documented inline.

from keras.models import Sequential
from keras.layers import Dense, Dropout
from sklearn.model_selection import train_test_split
import numpy as np
from sklearn import datasets
from ax.service.managed_loop import optimize
from keras.optimizers import Adam, RMSprop


# Seed for reproducible results
np.random.seed(3)

# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target

# Train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=3)

# Create a function that returns a model
def get_model(opt, activation, dropout, lr):
  model = Sequential()
  model.add(Dense(8, input_dim=4, activation=activation))
  model.add(Dropout(dropout))
  model.add(Dense(4, activation=activation))
  model.add(Dense(3, activation='softmax'))
  
  if opt == 'adam':
    optimizer = Adam(lr=lr)
  elif opt == 'rmsprop':
    optimizer = RMSprop(lr=lr)

  model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
  return model

# Method that creates a model and trains it based on the hyper parameters
# Once the model trained we evaluate it on test data and return the accuracy
# This accuracy value will be used by the Bayes optimization
# to identify the next set of hyper-parameters to be used. 
def train_evaluate(parameterization):
    acc = 0
    mymodel = get_model(opt=parameterization["opt"], activation=parameterization["activation"], dropout=parameterization["dropout"], lr=parameterization["lr"])
    mymodel.fit(X_train, y_train, epochs=parameterization["epochs"], verbose=0)
    acc = mymodel.evaluate(X_test, y_test)[1]
    print(parameterization, acc)
    del mymodel
    return acc

# Finally run the Bayes optimization
best_parameters, values, experiment, model = optimize(
     parameters=[
                 {"name": "opt", "type": "choice", "values": ['adam', 'rmsprop']},
                 {"name": "activation", "type": "choice", "values": ['relu', 'tanh']},
                 {"name": "dropout", "type": "choice", "values": [0.0, 0.25, 0.50, 0.75, 0.99]},
                 {"name": "epochs", "type": "choice", "values": [10, 50, 100]},
                 {"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True}
                ],
    evaluation_function=train_evaluate,
    objective_name="acc",
     # Total trials. You can change this value based on the dataset size; it determines the number of exploration and exploitation steps.
    total_trials=30,
    )

# Get the best hyper parameters
data = experiment.fetch_data()
df = data.df
best_arm_name = df.arm_name[df["mean"] == df["mean"].max()].values[0]
best_arm = experiment.arms_by_name[best_arm_name]

print(best_parameters)
print(best_arm)

Output:

[INFO 08-12 13:23:03] ax.modelbridge.dispatch_utils: Using Sobol generation strategy.
[INFO 08-12 13:23:03] ax.service.managed_loop: Started full optimization with 30 steps.
[INFO 08-12 13:23:03] ax.service.managed_loop: Running optimization trial 1...
2/2 [==============================] - 0s 2ms/step - loss: 0.8532 - accuracy: 0.6579
[INFO 08-12 13:23:04] ax.service.managed_loop: Running optimization trial 2...
{'opt': 'rmsprop', 'activation': 'tanh', 'dropout': 0.75, 'epochs': 50} 0.6578947305679321
2/2 [==============================] - 0s 3ms/step - loss: 1.2705 - accuracy: 0.2895
[INFO 08-12 13:23:05] ax.service.managed_loop: Running optimization trial 3...
{'opt': 'adam', 'activation': 'relu', 'dropout': 0.99, 'epochs': 10} 0.28947368264198303
2/2 [==============================] - 0s 2ms/step - loss: 0.3625 - accuracy: 0.9737
[INFO 08-12 13:23:06] ax.service.managed_loop: Running optimization trial 4...
............... LOG TRUNCATED, RUN for 3 minutes........
2/2 [==============================] - 0s 2ms/step - loss: 0.9861 - accuracy: 0.5000
[INFO 08-12 13:23:29] ax.service.managed_loop: Running optimization trial 29...
{'opt': 'adam', 'activation': 'tanh', 'dropout': 0.0, 'epochs': 10} 0.5
2/2 [==============================] - 0s 2ms/step - loss: 0.9654 - accuracy: 0.3158
[INFO 08-12 13:23:30] ax.service.managed_loop: Running optimization trial 30...
2/2 [==============================] - 0s 3ms/step - loss: 1.1320 - accuracy: 0.6842
{'opt': 'adam', 'activation': 'tanh', 'dropout': 0.99, 'epochs': 10} 0.6842105388641357
{'opt': 'adam', 'activation': 'tanh', 'dropout': 0.0, 'epochs': 100}
Arm(name='8_0', parameters={'opt': 'adam', 'activation': 'tanh', 'dropout': 0.0, 'epochs': 100})

As you can see from the log, the best hyperparameters for this example are as follows:

parameters={'opt': 'adam', 'activation': 'tanh', 'dropout': 0.0, 'epochs': 100}

Finally, you can train a model with the best parameters using:

train_evaluate(best_parameters)
Votes: 2
The original content of this page is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/63316939
